
How big biased datasets make social inequalities worse

At almost every point in our day, we interact with digital technologies that collect our data. From the moment our smartphones wake us up, to the watches that track our morning runs, every trip on public transport, every coffee purchased with a bank card, every song skipped or liked, until we return to bed and let our sleep apps monitor our dreaming habits – all of these technologies are collecting data.

This data is used by tech companies to develop their products and provide more services. While film and music recommendations might be useful, the same systems are also being used to decide where to build infrastructure, to power facial recognition tools used by the police, and even to determine whether you should get a job interview – or who should die in a crash with an autonomous vehicle.

Every digital device collects data on you and your habits. Fizkes/Shutterstock

Despite holding huge databases of personal information, tech companies rarely have enough data to make properly informed decisions, and this leads to products and technologies that can entrench social biases and inequality rather than address them.

Microsoft apologized after its chatbot started spewing hate speech. “Racist” soap dispensers failed to work for people of color. Algorithm errors caused Flickr to mislabel concentration camps as “jungle gyms”. CV-sorting tools rejected applications from women, and there are deep concerns over police use of facial recognition tools.

These issues aren’t going unnoticed. A recent report found that 28% of UK tech workers were worried that the tech they worked on had negative consequences for society. And UK independent research organization NESTA has suggested that as the darker sides of digital technology become clearer, “public demand for more accountable, democratic, more human alternatives is growing”.

Traditional solutions are making things worse

Most tech companies, big and small, claim they’re doing the right things to improve their data practices. Yet it’s often the very fixes they propose that create the biggest problems. These solutions are born of the same ideas, tools and technologies that got us into this mess to begin with. The master’s tools, as Audre Lorde said, will never dismantle the master’s house. Instead, we need an approach radically different from collecting more data about users or plugging gaps with more education about digital technology.

The reasons biases against women or people of color appear in technology are complex. They’re often attributed to data sets being incomplete and to the fact that the technology is often built by people who aren’t from diverse backgrounds. That’s one argument at least – and in a sense, it’s correct. Increasing the diversity of people working in the tech industry is important. Many companies are also collecting more data to make it more representative of the people who use digital technology, in the vain hope of eliminating racist soap dispensers or recruitment bots that exclude women.
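
To make the “incomplete data” argument concrete, here is a minimal, hypothetical sketch (not from the original article): a toy classifier trained on data that under-represents one group of users performs noticeably worse for that group, even though nothing in the algorithm itself mentions groups. The groups, numbers and nearest-centroid classifier are all illustrative assumptions.

import random

random.seed(0)

def sample(group, label, n):
    # Illustrative assumption: each group's data is distributed slightly
    # differently, so a model trained mostly on group A learns group A's
    # version of each class.
    shift = 0.0 if group == "A" else 1.5
    centre = (0.0 if label == 0 else 3.0) + shift
    return [(random.gauss(centre, 1.0), label, group) for _ in range(n)]

# Training set: group A outnumbers group B twenty to one.
train = (sample("A", 0, 500) + sample("A", 1, 500)
         + sample("B", 0, 25) + sample("B", 1, 25))

# "Train" a nearest-centroid classifier on the pooled, imbalanced data.
centroids = {label: sum(x for x, y, g in train if y == label)
                    / sum(1 for x, y, g in train if y == label)
             for label in (0, 1)}

def predict(x):
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Evaluate on balanced test sets, one per group.
for group in ("A", "B"):
    test = sample(group, 0, 1000) + sample(group, 1, 1000)
    errors = sum(predict(x) != y for x, y, g in test)
    print(f"group {group}: error rate {errors / len(test):.1%}")

Running this toy simulation typically shows an error rate several times higher for the under-represented group – the statistical footprint of a “racist soap dispenser”.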

The trouble is that these are social, not digital, problems. Attempting to solve them through more data and better algorithms only serves to hide the underlying causes of inequality. Collecting more data doesn’t actually make people better represented; instead, it increases how much they are surveilled by poorly regulated tech companies. The companies become instruments of classification, categorizing people into different groups by gender, ethnicity and economic class until their databases look balanced and complete.

These processes have a limiting effect on personal freedom, eroding privacy and forcing people to self-censor – hiding details of their lives that, for example, potential employers might find and disapprove of. Increasing data collection also has disproportionately negative effects on the very groups the process is supposed to help. Additional data collection leads to poorer communities being over-monitored by crime prediction software, and to problems such as minority neighborhoods paying more for car insurance than white neighborhoods with the same risk levels.

Big Data is watching you. Enzozo/Shutterstock

People are often lectured about being careful with their personal data online. They’re also encouraged to learn how data is collected and used by the technologies that now rule their lives. While there is some merit to helping people better understand digital technologies, this approaches the problem from the wrong direction. As media scholar Siva Vaidhyanathan has noted, it often does little more than place the burden of making sense of manipulative systems squarely on users, who are still left largely powerless to do anything about them.

Access to education isn’t universal either. Inequalities in education and in access to digital technologies mean that this kind of learning is often out of reach for the very communities most negatively affected by social biases and by the digital efforts to address them.

Social problems need social solutions

The tech industry, the media and governments have become obsessed with building ever bigger data sets to iron out social biases. But digital technology alone can never solve social issues. Collecting more data and writing “better” algorithms may seem helpful, but this only creates the illusion of progress.

Turning people’s experiences into data hides the causes of social bias – institutional racism, sexism and classism. Digital and data-driven “solutions” distract us from the real issues in society and from examining real solutions. Such digital fixes, as French philosopher Bernard Stiegler noted, only serve to increase the distance between technological systems and social organizations.

We need to slow down, stop innovating, and examine social biases not within the technology itself, but in society. Should we even build any of these technologies, or collect any of this data at all?

Better representation in the tech industry is vital, but the industry’s digital solutions will always fall short. Sociology, ethics and philosophy have the answers to social inequality in the 21st century.

This article is republished from The Conversation by Doug Specht, Senior Lecturer in Media and Communications, University of Westminster, under a Creative Commons license. Read the original article.
