Published April 29, 2020

Facebook’s teaching AI to lie like a human

Facebook AI researchers today announced Blender, a set of state-of-the-art open-source chatbot models. These new models are supposed to be “more human” than previous iterations and provide more robust conversations.

Based on early indications, it appears to be as good as or better than competing bots from Google and Microsoft. But we’re not talking leaps and bounds by any measure. Because, essentially, what Facebook’s done is teach Blender’s chatbots to lie moderately better than everyone else’s.

The big idea here is that Facebook’s taken a huge dataset of information and trained an AI on it. Blender chatbots are open-domain, meaning they aren’t limited to answering a specific series of questions like the customer-service bots you see on websites. Instead, these bots are theoretically capable of engaging in conversation on any subject.
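If you want a feel for what open-domain means in practice, here’s a minimal sketch that chats with one of the released Blender models. One assumption to flag: the official release ships through Facebook’s ParlAI framework, while this sketch uses the smaller distilled checkpoint as ported to the Hugging Face transformers library.

```python
# Minimal sketch of chatting with an open-domain Blender model, assuming the
# Hugging Face "transformers" port of the distilled 400M checkpoint (the
# full 9.4B-parameter model is far too large to run casually).
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

checkpoint = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(checkpoint)
model = BlenderbotForConditionalGeneration.from_pretrained(checkpoint)

# Open-domain: any utterance is fair game, not a fixed menu of FAQ intents.
inputs = tokenizer("I've been learning to bake sourdough lately.", return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```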

Per a company blog post:

This is the first time a chatbot has learned to blend several conversational skills — including the ability to assume a persona, discuss nearly any topic, and show empathy — in natural, 14-turn conversation flows.

Our new recipe incorporates not just large-scale neural models, with up to 9.4 billion parameters — or 3.6x more than the largest existing system — but also equally important techniques for blending skills and detailed generation.

Much like GPT-2, OpenAI’s spooky-good text generator, the number of parameters used by Blender plays a large role in how convincing the AI’s output is. At higher parameter counts the AI requires more computational power, but it produces demonstrably better results.
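To put those numbers in perspective, here’s some back-of-the-envelope arithmetic on the three model sizes the Blender paper describes (90M, 2.7B, and 9.4B parameters), assuming the weights are stored as 16-bit floats:

```python
# Rough memory needed just to hold the weights at 2 bytes per parameter.
# Real inference costs more (activations, beam-search state), so treat
# these figures as lower bounds.
GIB = 1024 ** 3

for name, params in [("Blender 90M", 90e6),
                     ("Blender 2.7B", 2.7e9),
                     ("Blender 9.4B", 9.4e9)]:
    print(f"{name}: ~{params * 2 / GIB:.1f} GiB of weights")

# Blender 9.4B: ~17.5 GiB, already beyond a single consumer GPU in 2020.
```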

However, it takes more than just bulk to be the best, and Facebook’s work beyond simply tweaking parameter sizes is impressive.

Per the Blender project page:

Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy.
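That “choice of generation strategy” is doing real work. One decoding knob the researchers emphasize is a minimum response length, which steers beam search away from the short, dull replies large models otherwise favor. Continuing the hypothetical Hugging Face sketch from earlier (reusing its model, tokenizer, and inputs), the contrast looks roughly like this:

```python
# Same model, same input; only the decoding strategy changes.
short_ids = model.generate(**inputs, num_beams=10, max_length=60)
long_ids = model.generate(**inputs, num_beams=10, min_length=20, max_length=60)

print("unconstrained:", tokenizer.decode(short_ids[0], skip_special_tokens=True))
print("min 20 tokens:", tokenizer.decode(long_ids[0], skip_special_tokens=True))
```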

The result is a chatbot that, more often than not, appears to pull off human-like conversation convincingly. In some ways, it’s capable of passing the Turing Test. This may very well represent the most advanced conversational AI on the planet. And it’s little more than a parlor trick.

Facebook’s AI doesn’t understand a word of what it’s saying. It isn’t “listening” to anyone, and it cannot understand language or the meaning behind words. It only makes associations between statements, learning which responses its developers deem appropriate and which they don’t.

It cannot “display knowledge” because it doesn’t actually have a knowledge base. It makes associations that appear to make sense. If you, for example, tell it your favorite Smashing Pumpkins song, it can claim to like a different song better, but it’s never been exposed to the actual experience of music. It only processes natural language; it doesn’t understand what audio, imagery, or video is.
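To make that concrete, here’s a deliberately crude toy (nothing like Blender’s actual neural architecture) showing how word association alone can produce a plausible-looking reply about music without any notion of what music is:

```python
# A crude toy, NOT Blender's method: pick the canned reply whose cue words
# overlap the input most. No model of music or meaning exists anywhere.
def pick_reply(utterance: str, candidates: dict) -> str:
    words = set(utterance.lower().split())
    best_cue = max(candidates, key=lambda cue: len(words & set(cue.split())))
    return candidates[best_cue]

canned = {
    "favorite song band music": "Oh nice! I prefer their earlier stuff, honestly.",
    "sad feeling down today": "I'm sorry to hear that. Want to talk about it?",
}

print(pick_reply("my favorite Smashing Pumpkins song is 1979", canned))
# -> "Oh nice! I prefer their earlier stuff, honestly."
```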

It also doesn’t have empathy: it cannot parse your feelings or respond with its own. It merely responds with statements it’s been trained to recognize as appropriate. If you, for example, say you’re sad, it won’t say “congratulations,” at least not under optimal circumstances.

Basically, Facebook’s AI is getting really good at gaslighting and catfishing. It’s becoming a perfect liar because there’s no other option. AI doesn’t feel or think, so it cannot experience. And, without experience, there’s no humanity. So a robot’s got to lie to get anywhere in a paradigm where success is based on how human-like its conversational abilities are. 

There may come a time when humanity learns to regret its decision to pursue believable subterfuge as a proxy for human conversational intelligence. But, as is almost always the case when it comes to the existential threat of AI, the biggest danger isn’t that robots will use their growing powers to harm us, but that other humans will.

If you want more information on Blender, check out the research paper here.
