This article was published on January 8, 2020

Scientists developed #MeTooBots to fight workplace sexual harassment — but we shouldn’t rely on it


Scientists developed #MeTooBots to fight workplace sexual harassment — but we shouldn’t rely on it

For the ongoing series, Code Word, we’re exploring if — and how — technology can protect individuals against sexual assault and harassment, and how it can help and support survivors.

It’s almost impossible to name an industry that doesn’t rely on email or other workplace messaging platforms like Slack. While instant online messaging has undoubtedly revolutionized the workplace, positioning itself as a collaborative tool that encourages openness within teams regardless of company size, it has also created a new and easier way for perpetrators to discreetly harass coworkers during work hours.

While the responsibility for resolving harassment claims ultimately falls on HR departments, Slack takes no responsibility for harassment that takes place on its platform. There’s currently no way to mute, hide, or block users, and the company’s “Acceptable Use Policy,” which defines what is and isn’t allowed, makes no mention of its stance on harassment or unwelcome, inappropriate sexual comments.

To help prevent workplace harassment, programmers at the Chicago-based AI firm NexLP have developed #MeTooBots. The tool, currently used by 50 corporate clients, automatically monitors online chat platforms and shared work documents, flagging cyberbullying and sexual harassment in conversations between colleagues.

As first reported by The Guardian, the creators of #MeTooBots face a number of challenges in effectively tackling sexual harassment, since it comes in many forms, many of them subtle and context-dependent. Jay Leib, the chief executive of NexLP, told The Guardian: “I wasn’t aware of all the forms of harassment. I thought it was just talking dirty. It comes in so many different ways. It might be 15 messages … it could be racy photos.”

The tool uses an algorithm trained to identify sexual harassment; once a comment or conversation is flagged, it’s sent to the company’s HR manager. Exactly what the bot classifies as non-consensual sexual comments is unknown, but Leib told The Guardian that the tool searches for “anomalies in the language, frequency, or timing of communication patterns across weeks, while constantly learning how to spot harassment.”
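
For readers curious what “anomalies in the frequency or timing of communication patterns” might look like in practice, here’s a minimal, hypothetical sketch in Python. NexLP hasn’t published how #MeTooBots actually works, so the signals, names, and thresholds below are illustrative assumptions only; a real system would combine signals like these with a trained language model and, crucially, human review.

```python
# Hypothetical sketch only: NexLP hasn't published #MeTooBots' model or API.
# This illustrates the general idea of flagging anomalies in message
# frequency and timing, not the actual product.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from statistics import mean, pstdev


@dataclass
class Message:
    sender: str
    recipient: str
    sent_at: datetime
    text: str


def weekly_counts(messages):
    """Count messages per (sender, recipient) pair for each ISO week."""
    counts = defaultdict(lambda: defaultdict(int))
    for m in messages:
        week = m.sent_at.isocalendar()[:2]  # (year, week number)
        counts[(m.sender, m.recipient)][week] += 1
    return counts


def flag_frequency_anomalies(messages, z_threshold=2.0):
    """Flag pairs whose latest weekly volume is far above their own baseline."""
    flagged = []
    for pair, per_week in weekly_counts(messages).items():
        weeks = sorted(per_week)
        if len(weeks) < 3:
            continue  # not enough history to establish a baseline
        history = [per_week[w] for w in weeks[:-1]]
        latest = per_week[weeks[-1]]
        spread = pstdev(history) or 1.0  # avoid division by zero
        z_score = (latest - mean(history)) / spread
        if z_score >= z_threshold:
            flagged.append((pair, weeks[-1], latest, round(z_score, 2)))
    return flagged


def flag_after_hours(messages, start_hour=8, end_hour=19):
    """Flag messages sent well outside normal working hours (a crude timing signal)."""
    return [m for m in messages if not (start_hour <= m.sent_at.hour < end_hour)]


# In a real deployment, flags like these would be one input among many and
# would go to a human (e.g. an HR manager) for review; on their own they
# prove nothing about intent or harassment.
```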

While a bot that acts as a second witness to potential sexual harassment could deter this kind of behavior, it doesn’t address the source of the issue, workplace culture, and it comes with its own loopholes. Offenders may learn to trick the software, for example, or move to chat platforms that the bot doesn’t monitor.

#MeTooBots may be another example of relying too heavily on AI to fix deeply embedded human and societal problems, and it raises its own questions about how the data it collects is protected and shared.

It’s promising to see technology take on a role that could help prevent workplace harassment, something 37 percent of women in tech in the US reported experiencing last year, not to mention the cases that go unreported. There’s clearly a need for action on all fronts. But technology is a tool, not a solution in itself: we can’t rely on AI as a quick fix for a cultural problem women have been subjected to for decades, even as the tech industry turns its attention to solving it.
