Pleading your case before the AI judge

In a 2013 US court case, a man named Paul Zilly was found guilty of stealing a lawn mower.

Initially, he accepted a plea deal of one year in prison followed by a supervision order. But an early AI tool assessed him as being at high risk of reoffending, and the sentence was extended to two years.

In 2016, the nonprofit investigative site ProPublica analysed the risk scores of about 10,000 defendants in Florida. It found that African-American defendants were more likely than white defendants to be falsely flagged as high risk by the software, suggesting that if Zilly had been white (and the software hadn’t picked up on his ethnicity), the original sentence would likely have been allowed to stand.

The case is an example given in a study on the use of AI in the legal system published this month by the Australian Institute of Judicial Administration (AIJA), UNSW Law & Justice, the UNSW Allens Hub for Technology, Law and Innovation, and the Law Society of NSW’s Future of Law and Innovation in the Profession (FLIP Stream).

The report, AI Decision Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators, identified examples of AI use in Australia and overseas, ranging from computer-based dispute resolution software to computer code based directly on rules-driven logic, and “AI judges” used to help clear a backlog of cases.
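To make “code based directly on rules-driven logic” concrete, here is a minimal, hypothetical sketch of the rules-as-code idea: a statutory-style eligibility test written directly as program logic. The rule, threshold and parameter names below are invented for illustration and do not come from the report.

```python
# Hypothetical "rules as code" sketch: a small-claims eligibility rule
# expressed directly as program logic. The threshold and criteria are
# invented for illustration; they are not drawn from the report.

SMALL_CLAIMS_LIMIT_EUR = 7_000

def eligible_for_small_claims(claim_amount_eur: float,
                              is_monetary_dispute: bool) -> bool:
    """Apply the (invented) statutory test for the small-claims track."""
    return is_monetary_dispute and claim_amount_eur <= SMALL_CLAIMS_LIMIT_EUR

print(eligible_for_small_claims(4_500, True))   # True: within the limit
print(eligible_for_small_claims(12_000, True))  # False: over the limit
```

The appeal of this approach is transparency: unlike a learned model, the rule can be read, checked against the legislation, and contested.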

In the case of the US software used against Zilly, called COMPAS, the tool is intended to augment the legal process by assessing the risk that an offender will break the law again.

COMPAS draws on the answers to a 137-item questionnaire, ranging from the specific, such as “How many times has this person been arrested before as an adult or juvenile?”, to the much more general, such as “Do you ever feel discouraged?”

The code and processes behind COMPAS are secret and not known to the prosecution, defense or judge, but can have real consequences, as the Zilly case illustrates.

Clearing the backlog

Estonia’s justice ministry said it would seek to clear a backlog of cases by using 100 so-called “AI judges”, the intention being to give human judges more time to deal with more complex cases.

According to reports, the project could adjudicate small claims disputes of under 7,000 euros. Under the concept, both parties would upload documents and other relevant information, and the AI system would render a decision that could be appealed to a human judge.

“Artificial intelligence, as a concept and as a practice, is becoming increasingly popular in international courts and tribunals. There can be both immense benefits and concerns about compatibility with fundamental values,” said one of the report’s authors, Professor Lyria Bennett Moses of the University of New South Wales.

“AI in the courts extends from administrative matters, such as automated electronic filing, to the use of inferences based on data about particular defendants in the context of sentencing. Judges, tribunal members and court administrators need to understand the technology well enough to be able to ask the right questions about the use of AI systems.”

Professor Bennett Moses has suggested that the use of certain AI tools is “in conflict with important legal values”.

“There are tools, frequently deployed in the United States, that ‘score’ defendants on their likelihood of reoffending. This is not based on an individual psychological profile but rather on data analysis. If people ‘like you’ have reoffended in the past, then you will be considered likely to reoffend,” she said.

“Variables used in this analysis include things like parental separation (and, if so, the age at which it happened), the kinds of things that might be statistically correlated with criminal behaviour but are beyond our own control. The tool is also biased (on some equity measures) against certain racial groups.”
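To make the mechanism Professor Bennett Moses describes concrete, here is a minimal sketch of actuarial risk scoring: a model fitted to historical records that scores a new defendant by how people with similar attributes behaved in the past. COMPAS’s actual model and features are secret, so this generic logistic-regression example, with invented feature names and data, is purely illustrative.

```python
# Illustrative only: a generic actuarial risk score of the kind described
# above. COMPAS's real model, features and weights are secret; the feature
# names and records below are invented.
from sklearn.linear_model import LogisticRegression

# Historical records: [prior_arrests, age_at_first_arrest, parental_separation]
X_history = [
    [0, 25, 0], [1, 22, 0], [4, 16, 1], [6, 15, 1],
    [2, 19, 1], [0, 30, 0], [5, 17, 1], [1, 28, 0],
]
y_reoffended = [0, 0, 1, 1, 1, 0, 1, 0]  # did each past defendant reoffend?

model = LogisticRegression().fit(X_history, y_reoffended)

# A new defendant is scored by their resemblance to past cases,
# not by any individual psychological assessment.
new_defendant = [[3, 18, 1]]
risk = model.predict_proba(new_defendant)[0][1]
print(f"Estimated likelihood of reoffending: {risk:.0%}")
```

The sketch shows exactly the design choice being criticised: the inputs are statistical correlates such as parental separation, which the individual can neither control nor easily contest.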

Breaking down language barriers

Not all applications of AI in the legal system are harmful.

Professor Bennett Moses said language barriers were a key area where AI could be of enormous value.

A practical and uncontroversial example of a benefit is using natural language processing to convert the audio of what is said in court by judges, witnesses and lawyers into text.

This can make court transcripts quicker and easier to access, especially for people who are hard of hearing. In China, some trials are transcribed in real time in both Mandarin and English.
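As a sketch of how such transcription might work, the snippet below uses OpenAI’s open-source Whisper speech-recognition model, which can also translate speech into English text. The report does not name any particular tool, and the model choice and file name here are assumptions for illustration.

```python
# A minimal court-transcription sketch using the open-source Whisper model
# (pip install openai-whisper). The tools actually used by courts are not
# specified in the report; "hearing_audio.mp3" is a hypothetical file.
import whisper

model = whisper.load_model("base")             # small general-purpose model
result = model.transcribe("hearing_audio.mp3")
print(result["text"])                          # the plain-text transcript

# Whisper can also translate non-English speech into English text,
# the kind of capability behind bilingual real-time transcripts.
translated = model.transcribe("hearing_audio.mp3", task="translate")
print(translated["text"])
```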

“I’ve always believed that interesting legal questions lie at the technological frontier, whether it’s AI or other new contexts that the law is called upon to address,” said Professor Bennett Moses.

“My main advice is to be careful: seek to understand how things work before jumping to conclusions about what the law should do about them. But we need people to ask the right questions and help society answer them.”

Lachlan Colquhoun is the Australian and New Zealand correspondent for CDOTrends and editor of NextGenConnectivity. He remains fascinated by how companies are reinventing themselves through digital technology to solve existing problems and change their entire business models. You can reach him at [email protected].

Image credit: iStockphoto/style-photography
