New study to delve into the murky world of AI’s effects on law, ethics and morality
Artificial intelligence (AI) is slowly working its way into New Zealand, but what that means for our laws and policies remains a great unknown, says The New Zealand Law Foundation.
New technologies such as driverless cars, crime prediction software and "AI lawyers" are challenging traditional laws around transport regulation, crime prevention and legal practice.
Now AI is being put under the microscope in a 'groundbreaking' new Law Foundation study, which will examine the legal, practical and ethical challenges these technologies raise.
The three-year study, supported by a $400,000 grant from the Law Foundation, will be led by Associate Professor of Law Dr Colin Gavaghan at the University of Otago. It will examine AI's implications under four broad topics: employment displacement; "machine morality"; responsibility and culpability; and transparency and scrutiny.
"The AI study is among the first to be funded under our ILAPP project. New technologies are rapidly transforming the way we live and work, and ILAPP will help ensure that New Zealand's law and policy keeps up with the pace of change," says Law Foundation executive director Lynda Hagen.
One set of questions flows from the employment implications of AI. At least one American law firm now claims to have hired its first AI lawyer to research precedents and make recommendations in a bankruptcy practice.
"Is the replacement of a human lawyer by an AI lawyer more like making the lawyer redundant, or more like replacing one lawyer with another one? Some professions – lawyers, doctors, teachers – also have ethical and pastoral obligations. Are we confident that an AI worker will be able to perform those roles?" Gavaghan says.
Going further into the world of crime, prediction technology such as PredPol, now widely used by police in the US, has been accused of reinforcing bad practices and racially biased policing. Courts are also using predictive software to assess the likelihood of reoffending.
"Also, because those parameters are often kept secret for commercial or other reasons, it can be hard to assess the basis for some AI-based decisions. This 'inscrutability' might make it harder to challenge those decisions, in the way we might challenge a decision made by a judge or a police officer," Gavaghan says.
Gavaghan believes driverless cars are another contentious issue: Mercedes recently revealed it would programme its cars to prioritise car occupants over pedestrians when an accident is imminent.
"This a tough ethical question. Mercedes has made a choice that is reassuring for its drivers and passengers, but are the rest of us OK with it? Human drivers faced with these situations have to make snap decisions, and we tend to cut them some slack as a result. But when programming driverless cars, we have the chance to set the rules calmly and in advance. The question is: what should those rules say?" Gavaghan asks.
Gavaghan will work alongside Associate Professor Ali Knott from the Department of Computer Science and Associate Professor James Maclaurin from the Department of Philosophy, as well as two post-doctoral researchers.
The Law Foundation's Information Law and Policy Project (ILAPP) is a $2 million fund dedicated to developing New Zealand law and policy in the areas of IT, artificial intelligence, cybersecurity, data and information.