

Why ‘human-like’ is a low bar for most AI projects

Awww, look! It thinks it's people!



Show me a human-like machine and I’ll show you a faulty piece of tech. The AI market is expected to eclipse $300 billion by 2025. And the vast majority of the companies trying to cash in on that bonanza are marketing some form of “human-like” AI. Maybe it’s time to reconsider that approach.

The big idea is that human-like AI is an upgrade. Computers compute, but AI can learn. Unfortunately, humans aren’t very good at the kinds of tasks computers are built for, and AI isn’t very good at the kinds of tasks that humans excel at. That’s why researchers are moving away from development paradigms that focus on imitating human cognition.

A pair of NYU researchers recently took a deep dive into how humans and AI process words and word meaning. Through the study of “psychological semantics,” the duo hoped to explain the shortcomings of machine learning systems in the natural language processing (NLP) domain. According to a study they published on arXiv:

Many AI researchers do not dwell on whether their models are human-like. If someone could develop a highly accurate machine translation system, few would complain that it doesn’t do things the way human translators do.

In the field of translation, humans have various techniques for keeping multiple languages in their heads and fluidly interfacing between them. Machines, on the other hand, don’t need to understand what a word means in order to assign the appropriate translation to it.

This gets tricky as you approach human-level accuracy. Translating one, two, and three into Spanish is relatively simple. The machine learns that they are exactly equivalent to uno, dos, and tres, and is likely to get those right 100 percent of the time. But when you add abstract concepts, words with more than one meaning, and slang or colloquial speech, things get complicated.
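Here’s a toy Python sketch of that gap (the dictionaries and example words are made up for illustration, not taken from the study): an exact lookup table is flawless on a closed vocabulary like number words, but it can only guess once a word carries more than one meaning.

```python
# A minimal sketch of word-for-word translation as pure lookup.
# Closed vocabulary: each source word has exactly one target word.
NUMBERS = {"one": "uno", "two": "dos", "three": "tres"}

# Ambiguous vocabulary: one source word, several plausible targets.
AMBIGUOUS = {"bank": ["banco", "orilla"]}  # financial bank vs. riverbank

def translate(word: str) -> str:
    if word in NUMBERS:
        return NUMBERS[word]       # always right: the mapping is one-to-one
    if word in AMBIGUOUS:
        return AMBIGUOUS[word][0]  # a guess: right only when context cooperates
    raise KeyError(f"no translation learned for {word!r}")

print(translate("two"))   # dos   -- correct 100 percent of the time
print(translate("bank"))  # banco -- wrong whenever the sentence meant a riverbank
```

No understanding is involved in either branch; the only difference is that the second mapping is no longer one-to-one.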

We start getting into AI’s uncanny valley when developers try to create translation algorithms that can handle anything and everything. Much like taking a few Spanish classes won’t teach a human all the slang they might encounter in Mexico City, AI struggles to keep up with an ever-changing human lexicon.

NLP simply isn’t capable of human-like cognition yet, and making it exhibit human-like behavior would be ludicrous: imagine if Google Translate balked at a request because it found the word “moist” distasteful.

This line of thinking isn’t just reserved for NLP. Making AI appear more human-like is merely a design decision for most machine learning projects. As the NYU researchers put it in their study:

One way to think about such progress is merely in terms of engineering: There is a job to be done, and if the system does it well enough, it is successful. Engineering is important, and it can result in better and faster performance and relieve humans of dull labor such as keying in answers or making airline itineraries or buying socks.

From a pure engineering point of view, most human jobs can be broken down into individual tasks that are better suited to conventional automation than to AI, and in cases where neural networks are necessary (directing traffic in a shipping port, for example), it’s hard to imagine a use case where a general AI would outperform several narrow, task-specific systems.

Consider self-driving cars. It makes more sense to build a vehicle made up of several systems that work together instead of designing a humanoid robot that can walk up to, unlock, enter, start, and drive a traditional automobile.
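A hedged sketch of that modular design, in Python (every class and function name below is hypothetical, not from any real autonomy stack): the vehicle is just narrow components composed together, none of which needs general, human-like intelligence.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    obstacle_ahead: bool
    lane_offset_m: float  # drift from lane center, in meters

def perceive(sensor_frame: dict) -> Scene:
    """Narrow task 1: turn raw sensor readings into a structured scene."""
    return Scene(
        obstacle_ahead=sensor_frame["lidar_min_range_m"] < 10.0,
        lane_offset_m=sensor_frame["camera_lane_offset_m"],
    )

def plan(scene: Scene) -> dict:
    """Narrow task 2: pick a maneuver from the perceived scene."""
    if scene.obstacle_ahead:
        return {"throttle": 0.0, "brake": 1.0, "steer": 0.0}
    return {"throttle": 0.3, "brake": 0.0, "steer": -0.1 * scene.lane_offset_m}

def act(command: dict) -> None:
    """Narrow task 3: hand the command to the actuators (stubbed here)."""
    print(f"actuating: {command}")

# The 'driver' is a pipeline of narrow systems, not a humanoid that has to
# understand door handles, keys, pedals, and steering wheels.
frame = {"lidar_min_range_m": 42.0, "camera_lane_offset_m": 0.2}
act(plan(perceive(frame)))
```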

Most of the time, when developers claim they’ve created a “human-like” AI, what they mean is that they’ve automated a task that humans are often employed for. Facial recognition software, for example, can replace a human gate guard but it cannot tell you how good the pizza is at the local restaurant down the road.

That means the bar is pretty low for AI when it comes to being “human-like.” Alexa and Siri do a fairly good human imitation. They have names and voices and have been programmed to seem helpful, funny, friendly, and polite.

But there’s no function a smart speaker performs that couldn’t be better handled by a button. If you had infinite space and an infinite attention span, you could use buttons for anything and everything a smart speaker could do. One might say “Play Mariah Carey,” while another says “Tell me a joke.” The point is, Alexa’s about as human-like as a giant remote control.
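A toy Python sketch makes the giant-remote-control point concrete (the intents and handlers are invented for illustration): once speech recognition reduces an utterance to a string, the skill set is just a lookup table of button-like actions.

```python
# Each entry is one 'button'; the handlers are hypothetical stand-ins.
BUTTONS = {
    "play mariah carey": lambda: print("Playing Mariah Carey..."),
    "tell me a joke":    lambda: print("Telling a joke..."),
    "set a timer":       lambda: print("Starting a timer..."),
}

def handle(utterance: str) -> None:
    action = BUTTONS.get(utterance.lower().strip())
    if action:
        action()  # pressing the matching button
    else:
        # No human-like reasoning happens here, just a failed lookup.
        print("Sorry, I don't know that one.")

handle("Play Mariah Carey")               # matches a button
handle("How's the pizza down the road?")  # no button for that
```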

AI isn’t like humans. We may be decades or more away from a general AI that can intuit and function at a human level in any domain. Robot butlers are a long way off. For now, the best AI developers can do is imitate human effort, and that’s seldom as useful as simplifying a process to something easily automated.
