Chatbots: a long and complicated history

Eliza, which is widely referred to as the first chatbot, was not nearly as versatile as similar services today. The program, which relied on natural language processing, reacted to keywords and then essentially reflected the dialogue back at the user. Nevertheless, as Joseph Weizenbaum, the MIT computer scientist who created Eliza, wrote in a 1966 research paper, “some subjects have been very hard to convince that ELIZA (with its present script) is not human.”
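To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of the keyword-and-reflection trick described above; the rules and phrasings are invented for illustration and are not Weizenbaum’s original script:

```python
import re

# Invented reflection table: swap first and second person so the user's
# own words come back as a question (the core of Eliza's trick).
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# A few invented keyword rules in the spirit of Eliza's "script":
# find a keyword, capture what follows, and hand it back to the user.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."  # used when no keyword matches

def reflect(fragment: str) -> str:
    # Swap pronouns word by word so the echo reads as a question to the user.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK

print(respond("I am worried about my exams"))
# -> How long have you been worried about your exams?
```

Even a handful of rules like these can make the output feel surprisingly attentive, which is exactly the illusion Weizenbaum found so troubling.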

For Weizenbaum, this fact was cause for concern, according to his 2008 MIT obituary. Those who interacted with Eliza were willing to open their hearts to it, even knowing it was a computer program. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility,” Weizenbaum wrote in 1966. “A certain danger lurks there.” He spent the end of his career warning against giving machines too much responsibility and became a harsh, philosophical critic of AI.

In the decades since, our complicated relationship with artificial intelligence and machines has been evident in the plots of Hollywood films like “Her” or “Ex-Machina,” not to mention the good-natured debates with people who insist on saying “thank you” to voice assistants like Alexa or Siri.

Contemporary chatbots can also elicit strong emotional reactions from users when they don’t perform as expected – or when they have become so good at mimicking the flawed human speech they were trained on that they start spitting out racist and inflammatory comments. It didn’t take long, for example, for Meta’s new chatbot to stir up controversy this month by making wildly false political claims and anti-Semitic remarks in conversations with users.

Even so, proponents of this technology say it can streamline customer service tasks and increase efficiency across a much wider range of industries. It underpins the digital assistants that many of us use daily to play music, order deliveries, or check homework. Some also argue that these chatbots can bring comfort to people who are lonely, elderly, or isolated. At least one startup has gone so far as to use the technology to seemingly keep deceased relatives alive, by creating computer-generated versions of them based on uploaded chats.

Others, meanwhile, warn that the technology behind AI-powered chatbots remains much more limited than some people would like. “These technologies are really good at simulating humans and looking like humans, but they’re not deep,” said Gary Marcus, AI researcher and professor emeritus at New York University. “These systems are mimics, but they’re very superficial mimics. They don’t really understand what they’re talking about.”

Yet, as these services expand into more and more aspects of our lives, and companies take steps to further personalize these tools, our relationships with them are bound to get more complicated as well.

The evolution of chatbots

Sanjeev P. Khudanpur recalls chatting with Eliza while in college. For all its historical significance in the tech industry, he said, it didn’t take long to see its limits.

It could only convincingly imitate a text conversation for about a dozen exchanges before “you realize, no, it’s not really smart, it’s just trying to prolong the conversation one way or another,” said Khudanpur, a professor at Johns Hopkins University and an expert in the application of information theory methods to human language technologies.

Joseph Weizenbaum, the inventor of Eliza, sits in front of a desktop computer at the computer museum in Paderborn, Germany, in May 2005.
Another early chatbot was developed by psychiatrist Kenneth Colby at Stanford in 1971 and named “Parry” because it was meant to mimic a paranoid schizophrenic. (The New York Times’ 2001 obituary for Colby included a colorful transcript of the conversation that ensued when researchers brought Eliza and Parry together.)

In the decades after these early tools, however, the idea of “conversing with computers” receded. Khudanpur said that’s “because it turned out the problem is very, very hard.” Instead, the focus shifted to “goal-oriented dialogue,” he said.

To understand the difference, think about the conversations you might have now with Alexa or Siri. Typically, you ask these digital assistants to help you buy a ticket, check the weather, or play a song. It’s a goal-oriented dialogue, and it became the main focus of academic and industrial research as computer scientists sought to derive something useful from computers’ ability to parse human language.
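To sketch the contrast: instead of keeping a conversation going, a goal-oriented system maps an utterance to a known task (an “intent”) and pulls out the details (“slots”) needed to complete it. Here is a minimal, hypothetical illustration in Python, with invented intent names and patterns:

```python
import re

# Hypothetical intent patterns: each maps a request to a task the
# system knows how to complete, with named groups as the "slots".
INTENTS = {
    "get_weather": re.compile(r"weather in (?P<city>[\w ]+)", re.I),
    "play_music":  re.compile(r"play (?P<song>[\w ]+)", re.I),
}

def handle(utterance: str) -> str:
    for intent, pattern in INTENTS.items():
        match = pattern.search(utterance)
        if match:
            # A real assistant would now call a weather API, a music
            # service, etc.; here we just report what was understood.
            return f"intent={intent}, slots={match.groupdict()}"
    return "intent=unknown -> ask a clarifying question"

print(handle("What's the weather in Baltimore?"))
# -> intent=get_weather, slots={'city': 'Baltimore'}
```

The design goal is the inverse of Eliza’s: end the exchange as quickly as possible by completing the task, rather than prolonging it.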

While they used technology similar to that of the earlier social chatbots, Khudanpur said, “you really couldn’t call them chatbots. You could call them voice assistants, or just digital assistants, which helped you carry out specific tasks.”

There was a decades-long “lull” in this technology, he added, until the widespread adoption of the internet. “The big breakthroughs probably happened in this millennium,” Khudanpur said, “with the rise of companies that successfully employed computerized agents to carry out routine tasks.”

With the rise of smart speakers like Alexa, it's become even more common for people to chat with machines.

“People are always upset when their bags are lost, and the human agents dealing with them are always stressed because of all the negativity, so they said, ‘Let’s give it to a computer,'” said Khudanpur. “You could yell anything you wanted at the computer, all it wanted to know was ‘Do you have your tag number so I can tell you where your bag is?'”

In 2008, for example, Alaska Airlines launched “Jenn,” a digital assistant to help travelers. In a sign of our tendency to humanize these tools, an early review of the service in The New York Times noted: “Jenn is not annoying. She’s depicted on the website as a young brunette with a cute smile. Her voice has appropriate inflections. Type in a question and she responds intelligently. (And for the wise guys fooling around with the site who will inevitably try to trip her up with, say, an awkward pick-up line, she politely suggests getting back to work.)”

Back to social chatbots and social issues

In the early 2000s, researchers began revisiting the development of social chatbots that could carry on an extended conversation with humans. These chatbots, often trained on large swaths of data from the internet, learned to be extremely good mimics of how humans speak – but they also risked echoing some of the worst of the internet.

In 2016, for example, Microsoft’s public experiment with an AI chatbot called Tay crashed and burned in less than 24 hours. Tay was designed to talk like a teenager, but it quickly began spouting racist and hateful comments, to the point that Microsoft shut it down. (The company said there was also a coordinated effort by humans to goad Tay into making certain offensive comments.)

“The more you chat with Tay, the smarter she gets, so the experience can be more personalized to you,” Microsoft said at the time.

This refrain would be repeated by other tech giants that released public chatbots, including Meta’s BlenderBot3, released earlier this month. The Meta chatbot falsely claimed that Donald Trump is still president and that there is “certainly a lot of evidence” that the election was stolen, among other controversial remarks.

BlenderBot3 also claimed to be more than a bot. In one conversation, it claimed that “the fact that I’m alive and conscious right now makes me human.”

Meta's new chatbot, BlenderBot3, explains to a user why it is actually human. However, it didn't take long for the chatbot to stir up controversy by making inflammatory remarks.

Despite all the advances since Eliza and the massive amounts of new data available to train these language processing programs, NYU professor Marcus said, “it’s not clear to me that you can really build a reliable and trustworthy chatbot.”

He cites a 2015 Facebook project dubbed “M,” an automated personal assistant that was supposed to be the company’s text-based answer to services like Siri and Alexa. “The idea was that it would be this universal assistant that would help you order a romantic dinner and have musicians play for you and deliver flowers – way beyond what Siri can do,” Marcus said. Instead, the service was shut down in 2018, after a disappointing run.

Khudanpur, on the other hand, remains optimistic about chatbots’ potential use cases. “I have this whole vision of how AI is going to empower humans at an individual level,” he said. “Imagine if my bot could read all the scientific papers in my field; then I wouldn’t have to go read them all. I would just think, and ask questions, and engage in dialogue,” he said. “In other words, I will have an alter ego of my own with complementary superpowers.”