AI adoption in criminal justice brings opportunities for efficiency and public safety but requires ethical safeguards to prevent risks of bias, misuse, and erosion of trust.
Copyright: theconversation.com – “AI and Criminal Justice: How AI Can Support – Not Undermine – Justice”
Interpol Secretary General Jürgen Stock recently warned that artificial intelligence (AI) is facilitating crime on an “industrial scale” using deepfakes, voice simulation and phony documents.
Police around the world are also turning to AI tools such as facial recognition, automated licence plate readers, gunshot detection systems, social media analysis and even police robots. AI use by lawyers is similarly “skyrocketing” as judges adopt new guidelines for using AI.
While AI promises to transform criminal justice by increasing operational efficiency and improving public safety, it also comes with risks related to privacy, accountability, fairness and human rights.
Concerns about AI bias and discrimination are well documented. Without safeguards, AI risks undermining the very principles of truth, fairness, and accountability that our justice system depends on.
In a recent report from the University of British Columbia’s School of Law, Artificial Intelligence & Criminal Justice: A Primer, we highlighted the myriad ways AI is already impacting people in the criminal justice system. Here are a few examples that reveal the significance of this evolving phenomenon.
The promises and perils of police using AI
In 2020, an investigation by The New York Times exposed the sweeping reach of Clearview AI, an American company that had built a facial recognition database using more than three billion images scraped from the internet, including social media, without users’ consent.
Policing agencies worldwide that used the program, including several in Canada, faced public backlash. Regulators in multiple countries found the company had violated privacy laws, and Canadian authorities directed it to cease operations in Canada.
Clearview AI continues to operate, citing success stories of helping to exonerate a wrongfully convicted person by identifying a witness at a crime scene; identifying someone who exploited a child, which led to their rescue; and even detecting potential Russian soldiers seeking to infiltrate Ukrainian checkpoints.[…]
Read more: www.theconversation.com