
This article was published on February 11, 2021

Are AI investors shorting Black lives?



Artificial intelligence often doesn’t work the same for Black people as it does for white people. Sometimes it’s a matter of vastly different user experiences, like when voice assistants struggle to understand words from Black voices. Other times, such as when cancer detection systems don’t account for race, it’s a matter of life and death.

So whose fault is it?

Setting aside intentionally malicious uses of AI software, such as facial recognition and crime prediction systems for law enforcement, we can assume the problem is with bias.

When we think about bias in AI, we’re usually reminded of incidents such as Google’s algorithm mislabeling images of Black people as animals or Amazon’s Rekognition system misidentifying sitting Black members of the US Congress as criminals.

But bias isn’t just overtly racist logic hidden inside an algorithm. It usually manifests unintentionally. It’s a safe bet that, barring sabotage, the people in Amazon’s AI department aren’t trying to build racist facial recognition software. But they do, and it took the company’s leadership far too long to admit it.

Amazon argues that its software works the same for all faces when users set the confidence threshold properly. Unfortunately, the higher that threshold is set in a facial recognition system, the lower the odds the system will match faces in the wild with faces in a database.

Cops use these systems at a threshold low enough to get a hit when they scan a face, even if that means setting it below Amazon’s own recommended minimum threshold for acceptable accuracy.
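To make that tradeoff concrete, here’s a minimal Python sketch of how a match threshold behaves. The names, similarity scores, and threshold values are purely illustrative assumptions for this example, not output from Rekognition or any real deployment.

# Illustrative only: toy similarity scores, not data from any real system.
# A face search returns candidates with similarity scores (0-100); only
# candidates at or above the configured threshold count as "hits".

def matches(candidates, threshold):
    """Return database entries whose similarity meets the threshold."""
    return [(name, score) for name, score in candidates if score >= threshold]

# Hypothetical scores for one probe face scanned "in the wild".
candidates = [("person_A", 93.4), ("person_B", 87.1), ("person_C", 81.6)]

# At a high vendor-recommended threshold, the search returns nothing...
print(matches(candidates, threshold=99))   # []

# ...but lower the threshold to 80 and the same scan yields three "hits",
# each of them a potential false match.
print(matches(candidates, threshold=80))   # all three candidates

The point of the sketch is that nothing about the underlying model changes: the same scan goes from zero matches to three depending on a single setting chosen by the agency running it.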

But we already knew facial recognition was inherently biased against Black faces. And we know that cops in the US and other nations still use it, which means our governments are funding the research on the front end and purchasing it on the back end.

This means that, under the current status quo and practice, false arrests of Black people are treated as an acceptable risk as long as the system produces some valid ones too. That’s a shitty business model.

Basically, the rules of engagement in the global business world dictate that you can’t build a car that’s been proven to be less safe for Black people. But you can program a car with a computer vision system that’s been proven less reliable at recognizing Black pedestrians than white ones and regulators won’t bat an eye.

The question is why? And the answer’s simple: because it makes money.

Even when every human in the loop has good intentions, bias can manifest at an unstoppable scale in almost any AI project that deals with data related to humans.

Google and other companies have released AI-powered mammogram screening systems that don’t work as well on Black breasts as white ones. Think about that for a second.

The developers, doctors, and researchers who worked on those programs almost certainly did so with the best interests of their clients, patients, and the general public at heart. Let’s assume we all really hate cancer. But it still works better for white people.

And that’s because the threshold for commercialization in the artificial intelligence community is set far too low across the board. We need to invest heavily in cancer research, but we don’t need to commercialize biased AI: research and business are two different things.

The doctor using a cancer screening system has to trust the marketing and sales team of the company selling it. The sales and marketing team has to trust the management team. The management team has to take the word of the development team. The dev team has to take it on good faith that the research team accounted for bias. And the research team has to take it on faith that the company they bought the datasets from (or the publicly available dataset they downloaded) used diverse sources.

And nobody has any receipts because of the privacy issues involved when you’re dealing with human data.

Now, this isn’t always the case. Very rarely, you can trace the datasets all the way back to real people and see exactly how diverse the training data really is. But here’s the problem: those verifiable datasets are almost always too small to train a system robust enough to, for example, detect the demographic nuances of cancer distributions or understand how to differentiate shadows from features in Black faces.

That’s why, for example, when the FDA decides whether an AI system is safe to use, it only requires companies to provide small-batch studies showing the software in use; it doesn’t require them to prove the diversity of the data used to train the AI.

Any AI team worth its salt can come up with a demo that shows its product working under the best of circumstances. Then all they have to do is back the demo with the results of previous peer review (where other researchers use the same datasets to come to the same conclusions). Meanwhile, in many cases, the developers themselves have no clue what’s actually in the datasets beyond what they’ve been told – much less the regulators.

In my experience as an AI journalist – that is, someone who has been pitched tens of thousands of stories – the vast majority of commercial AI entities claim to check for bias. Yet scarcely an hour passes without a social media company, big tech firm, or government having to admit it has somehow deployed racially biased algorithms and is working to solve the problem.

But they aren’t solving it. The problem is that those entities have commercialized a product that works better for white people than for Black people.

From inception to production, everyone involved in bringing an AI product to life can be focused on building something for the greater good. But the moment a human decides to sell, buy, or use an AI system for non-research purposes that they know works better for one race than another, they’ve decided that there is an acceptable amount of racial bias. That’s the definition of systemic racism derived from racial privilege.

But what’s the real harm?

Hearkening back to the mammogram AI problem: when one class or race of people gets better treatment than others because of inherent privilege, it creates an unjust economy. In other words, if the bar for commercial acceptability is “it works for white people, even if not for Black people,” and it’s cheaper to build systems with bias than without, then it becomes more lucrative to ship systems that don’t work well for Black people than to develop systems that work equally well for everyone. This is the current state of commercial artificial intelligence.

And it will remain that way as long as VCs, big tech, and governments continue to set the bar for commercialization so low. Until things change, they’re effectively “shorting” Black lives by profiting from systems that work better for white people.
