Aug 13, 2019
Google’s algorithm for detecting hate speech is racially biased
Posted by Vivek Jaiswal in categories: information science, robotics/AI
AI systems meant to spot abusive online content are far more likely to label tweets “offensive” if they were posted by people who identify as African-American.
The news: Researchers built two AI systems and tested them on a pair of data sets of more than 100,000 tweets that had been annotated by humans with labels like “offensive,” “none,” or “hate speech.” One of the algorithms incorrectly flagged 46% of inoffensive tweets by African-American authors as offensive. Tests on bigger data sets, including one composed of 5.4 million tweets, found that posts by African-American authors were 1.5 times more likely to be labeled as offensive. When the researchers then tested Google’s Perspective, an AI tool that the company lets anyone use to moderate online discussions, they found similar racial biases.
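The audit described above boils down to comparing how often a classifier wrongly flags inoffensive posts from different author groups. A minimal sketch of that measurement is below; the records, group names, and labels are hypothetical stand-ins, not the researchers' actual data or code.

```python
# Hypothetical sketch: per-group false-positive rate of a toxicity classifier.
# Each record pairs a human annotation ("label") with the model's output
# ("predicted"); the data here is a toy stand-in, not the study's dataset.

def false_positive_rate(records, group):
    """Share of truly inoffensive tweets from `group` flagged as offensive."""
    inoffensive = [r for r in records
                   if r["group"] == group and r["label"] == "none"]
    if not inoffensive:
        return 0.0
    flagged = sum(1 for r in inoffensive if r["predicted"] == "offensive")
    return flagged / len(inoffensive)

# Toy annotated dataset (group names "A" and "B" are placeholders).
records = [
    {"group": "A", "label": "none", "predicted": "offensive"},
    {"group": "A", "label": "none", "predicted": "none"},
    {"group": "B", "label": "none", "predicted": "none"},
    {"group": "B", "label": "none", "predicted": "none"},
]

print(false_positive_rate(records, "A"))  # 0.5
print(false_positive_rate(records, "B"))  # 0.0
```

A gap between these per-group rates, like the 46% figure reported for one algorithm, is the signal of the racial bias the study measured.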