But in many ways, the field of AI ethics remains limited. Researchers say they are blocked from investigating many systems by trade secrecy protections and laws like the Computer Fraud and Abuse Act (CFAA). As interpreted by the courts, that law criminalizes violating a website or platform's terms of service, an often necessary step for researchers trying to audit online AI systems for unfair biases.
Whittaker acknowledges the potential for the AI ethics movement to be co-opted. But as someone who has fought for accountability from within Silicon Valley and outside it, Whittaker says she has seen the tech world begin to undergo a deep transformation in recent years. "You have thousands and thousands of workers across the industry who are recognizing the stakes of their work," Whittaker explains. "We don't want to be complicit in building things that do harm. We don't want to be complicit in building things that benefit only a few and extract more and more from the many."
It may be too soon to tell if that new consciousness will precipitate real systemic change. But facing academic, regulatory, and internal scrutiny, it is at least safe to say that the industry won't be going back to the adolescent, devil-may-care days of "move fast and break things" anytime soon.