Jun 24, 2020

AI researchers condemn predictive crime software, citing racial bias and flawed methods

Posted in categories: law enforcement, military, robotics/AI

A collective of more than 1,000 researchers, academics, and experts in artificial intelligence is speaking out against soon-to-be-published research that claims to use neural networks to “predict criminality.” At the time of writing, more than 50 employees working on AI at companies like Facebook, Google, and Microsoft had signed an open letter opposing the research and imploring its publisher to reconsider.

The controversial research is set to be featured in an upcoming book series from Springer, the publisher of Nature. Its authors make the alarming claim that their automated facial recognition software can predict whether a person will become a criminal, citing the utility of such work for predictive policing by law enforcement.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” Harrisburg University professor and co-author Nathaniel J.S. Ashby said.
