This article was published on March 24, 2020

Speech recognition technology is racist, study finds

Amazon, Google, Apple, Microsoft, and IBM's tech all made way more errors with African American voices



New evidence of voice recognition’s racial bias problem has emerged.

Speech recognition technologies developed by Amazon, Google, Apple, Microsoft, and IBM make almost twice as many errors when transcribing African American voices as they do with white American voices, according to a new Stanford study.

All five systems produced these disparities even when the speakers were of the same gender and age and said the exact same words.

We can’t know for sure whether these technologies are used in virtual assistants such as Siri and Alexa, as none of the companies disclose this information. If they are, those products are offering a vastly inferior service to a huge chunk of their users, which can have a major impact on their daily lives.


Speech recognition is already used in immigration rulings, job hiring decisions, and court reporting. It’s also crucial for people who can’t use their hands to access computers.


With the technology set to expand rapidly in the coming years, any racial biases could have severe consequences on careers and lives.

Independent audits needed

The researchers tested each company’s technology with more than 2,000 speech samples from recorded interviews with African Americans and white Americans.

On average, the systems misunderstood 35% of the words spoken by African Americans and 19% of those spoken by white Americans.
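
For context, figures like these are typically reported as word error rate (WER): the share of spoken words a system gets wrong through substitutions, deletions, or insertions, measured against a human transcript. The sketch below is our illustration of the standard calculation, not the researchers' code, and the function name is ours:

```python
# Minimal sketch of word error rate (WER), the usual metric for
# transcription accuracy. A 35% WER means roughly 35 of every 100
# spoken words were substituted, dropped, or mistakenly inserted.
# Illustrative only; not the Stanford study's code.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()

    # Standard Levenshtein dynamic programming over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                       # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                       # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of five -> WER of 0.20 (20%).
print(word_error_rate("the weather is nice today",
                      "the weather is rice today"))
```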

These error rates were highest among African American men, particularly those who made heavy use of African American Vernacular English (AAVE).

These failures are likely the result of a common problem in AI: the systems are trained predominantly on speech data from white speakers.

Sharad Goel, a Stanford professor of computational engineering who oversaw the research, believes the findings show the need for independent audits of new tech.

“We can’t count on companies to regulate themselves,” he said. “That’s not what they’re set up to do. I can imagine that some might voluntarily commit to independent audits if there’s enough public pressure. But it may also be necessary for government agencies to impose more oversight. People have a right to know how well the technology that affects their lives really works.”

The tests were conducted last spring, and it’s possible that the companies have reduced their racial biases since then. If they haven’t, they’re producing yet another form of AI that discriminates against people of color.
