
Does not compute: why machines need a practical sense of humor

Photo: Alessio Ferretti / Unsplash

  • Language models underpin many AI technologies, from the predictive-text features of messaging apps to chatbots and automated storytelling.
  • Machines are adept at puns – though mastery remains elusive – but stumble over irony.
  • Just as a good friend might advise against drunk dialing, a machine with enough emotional intelligence to grasp irony might advise against sarcastic tweets.

Last summer, in the brief interlude between the first and second lockdowns, my wife and I slipped out to the movies to see Christopher Nolan’s new film, “Tenet.” Like “Memento” on steroids, it promised to be a time-travel thriller in which shadowy characters use cod science to step backward through time. The next day I summarized our experience in an email to another Nolan fan:

“BTW, we went to see ‘Tenet’ last night. Our brains are still in a twist. But we plan to see it again last week, so we’ll figure it out eventually.”

Gmail kindly underlined my choice of “last” in blue, to politely suggest that I might want to rethink it. After all, how could a forward-looking plan take place in the past, unless I, too, can time travel? While Gmail doesn’t understand my intent, it shows an impressive fluency in everyday language by spotting my clash of tenses and times. It does this using powerful machine learning techniques that train its language model (LM) to mimic the natural rhythms of language.

LMs sit at the root of many AI technologies, from the predictive-text features of messaging apps to chatbots, speech recognition, machine translation, and automated storytelling. They can fill in the blanks of Cloze tests, assess the linguistic acceptability of different formulations of the same idea, or generate the most likely completions for a given prompt. LMs capture form, not substance, but when surface forms are this expressive, the deeper substance is tacitly modeled too. So it’s entirely possible to let Gmail compose big chunks of your emails for you – certainly the most ritualized, or most predictable, parts – while you occasionally tinker with the specific details its model can’t predict by itself.
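To make the idea concrete, here is a minimal sketch of the core mechanism: a toy bigram language model that estimates how likely one word is to follow another from counts over a corpus. The tiny corpus, the function names (`next_word_prob`), and the bigram simplification are all assumptions for illustration – real LMs like Gmail’s are trained on vastly more text with far richer architectures – but the principle of preferring high-probability continuations (“next”) over improbable ones (“last”) is the same.

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for the vast text a real LM is trained on.
corpus = (
    "we plan to see it again next week . "
    "they plan to visit again next month . "
    "we hope to see it next year ."
).split()

# Count bigrams to estimate P(next_word | current_word).
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def next_word_prob(w1, w2):
    """Estimated probability of w2 following w1 (0.0 if never seen)."""
    total = sum(bigrams[w1].values())
    return bigrams[w1][w2] / total if total else 0.0

# The model prefers the conventional continuation...
print(next_word_prob("again", "next"))  # → 1.0
# ...and flags the time-travel-flavored one as improbable.
print(next_word_prob("again", "last"))  # → 0.0
```

A system like Gmail’s only needs this kind of probability gap to draw its little blue line: it does not need to understand time travel, only to notice that “again last” is a statistical outlier.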

LMs make real a fear that George Orwell expressed in a famous polemical essay on language from 1946. Orwell worried that English was so cluttered with easy clichés, inviting idioms, and worn-out metaphors that writers must fight the lure of convention to say anything that is alive and fresh. As he put it, ready-made phrases “will construct your sentences for you – even think your thoughts for you.”

He would no doubt be appalled at how Gmail’s Smart Compose feature contributes to the IKEAfication of language, and would side with those who argue that some LMs are little more than “stochastic parrots.” These models may be a rather sophisticated marriage of statistics and William S. Burroughs’ cut-up method, but remember that the cut-up method was ultimately used to shatter clichés and leap out of the well-worn ruts of language.

And so it is with LMs: a machine can use an LM to enforce linguistic orthodoxy or to subvert it, making predictable choices here and highly improbable choices there. It can recognize improbable sequences, like my “see it again last week,” and suggest normative fixes, such as “next week.” But it can also do the reverse, and find compelling ways to make the old and conventional seem fresh again.

Consider puns, the first form of verbal humor that children – and childlike machines – learn to master. Suppose, feeling upbeat about your COVID-19 booster, you described the rollout as “a jab well done.” An LM will assign a low probability to this formulation, and a much higher probability to its phonetic neighbor, “a job well done,” in the same way that Gmail predicts “next,” not “last,” as the most likely word after “plan to see it again.”

But your pun is best appreciated as a deliberate attempt at humor, since it combines phonetic similarity (jab/job) with statistical incongruity (the two formulations have very different probabilities of occurrence). If a machine can also relate the word “jab” to the larger context of vaccination, using other statistical models – the same models would recognize a boxing sense of “jab” in a boxing context – it can confidently infer that your choice of words is both deliberate and meaningful: you meant “jab,” where “jab” can be taken at face value and as a playful stand-in for “job.”

A machine can also run this process in reverse, replacing a word in a familiar setting, such as a popular idiom, with a sound-alike word that has a much lower probability (according to our LM) of being seen in that setting, but a solid statistical footing in the larger context. The key to the pun is recoverability: our substitutions must be recognized for what they are, and then easily undone.
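This reverse process can be sketched in a few lines. The sketch below is a toy, not a real pun generator: the helper names (`make_pun`, `similarity`) are invented for illustration, string similarity stands in for genuine phonetic similarity (a real system would compare pronunciations, e.g. via a phonetic dictionary), and the topic-word list stands in for a statistical model of the surrounding context. The recoverability constraint is the threshold: a substitute must closely echo the word it replaces, so the reader can easily undo the swap.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Crude stand-in for phonetic similarity: character overlap.
    return SequenceMatcher(None, a, b).ratio()

def make_pun(idiom, topic_words, threshold=0.6):
    """Swap one word of a familiar idiom for a sound-alike word that is
    salient in the current context. If no substitute echoes its target
    closely enough to be recoverable, make no pun at all."""
    words = idiom.split()
    best = None
    for i, w in enumerate(words):
        for t in topic_words:
            s = similarity(w, t)
            if t != w and s >= threshold and (best is None or s > best[0]):
                best = (s, i, t)
    if best is None:
        return None  # nothing recoverable; stay silent rather than baffle
    _, i, t = best
    return " ".join(words[:i] + [t] + words[i + 1:])

# In a vaccination context, "job" gives way to its phonetic neighbor "jab".
print(make_pun("a job well done", ["jab", "vaccine", "booster"]))  # → a jab well done
```

Note the design choice in the `None` branch: a pun that cannot be undone by its audience is just a typo, so the generator prefers silence to an unrecoverable substitution.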

So how do we get from a machine’s sense of puns to a machine’s sense of, say, irony? Machines are proficient at the former, though mastery remains elusive, but stumble over the latter. Yet a moment’s reflection reveals the kinship between these two forms of humor. The two-for-one echo of irony may be conceptual rather than phonetic, but an ironic echo must be just as recoverable.

When I traded “next” for “last” in the context of a time-travel film, I relied on the semantic relationship between the two, and on the fact that “last” echoed the plot of “Tenet.” Irony operates at a higher level of knowledge, of words and of the world, where the rewards and risks are greater, but the fundamental mechanics are the same. It is the types and sources of knowledge that differ, so an ironic machine needs a broader and more expensive education to master its raw materials.

The next obvious question: What does a machine’s sense of irony do for us, the machine’s users? The strongest case can be made for the automated recognition of irony, and sarcasm too, since these have dramatic effects on a user’s perception of sentiment, whether in online reviews, social media, and email, or in our direct interactions with the machine. To do its job properly, a machine must grasp our intent, but irony can do to sentiment analysis what a magnet does to a compass, making true north hard to locate. So, for example, if a machine is to extract actionable insights from an online product review, it needs to know whether a positive take is sincere or ironic.
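The magnet-and-compass effect can be shown with a deliberately simple sketch. Everything here is a toy assumption: the word lists stand in for a learned sentiment lexicon, the hard-coded `is_ironic` flag stands in for an irony detector (the genuinely hard part), and wholesale polarity inversion is a simplification of how irony really shifts meaning. The point is only that the literal and intended readings diverge.

```python
# Toy sentiment lexicon; a real system learns these weights from data.
POSITIVE = {"great", "love", "wonderful", "perfect"}
NEGATIVE = {"terrible", "hate", "awful", "broken"}

def literal_sentiment(text):
    """Sum word polarities, ignoring any irony in the text."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def perceived_sentiment(text, is_ironic):
    """Irony acts like a magnet near a compass: it inverts the literal
    polarity, so the machine must detect it before trusting the score."""
    score = literal_sentiment(text)
    return -score if is_ironic else score

review = "Great, my new phone arrived already broken. Just perfect."
print(literal_sentiment(review))           # literal reading: positive
print(perceived_sentiment(review, True))   # ironic reading: negative
```

A review-mining system that stopped at the literal score would file this one-star experience under praise, which is exactly the failure mode irony detection is meant to prevent.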

In an email setting, a machine’s sense of irony can assess whether an incongruity is deliberate or accidental, and prompt the machine to suggest a fix if it is the latter. But its real value lies not in removing that little blue line, but in how it allows a machine to aid and advise its users. Perhaps the context is not clear enough to support the irony, and needs a little more punch to bring out the humor? Even if the irony is properly grounded, it may not suit its recipient, who may be an individual with no track record of wit in their own emails, or a mailing list long enough to pose a high risk of misunderstanding or accidental offense.

Just as a good friend might advise us against drunk driving and drunk dialing, a machine with enough emotional intelligence to pick up on and use irony might advise us against angry emails and sarcastic tweets. Just think of the careers ruined by clever 3 a.m. tweets that turned rancid in the light of day. A tap on the shoulder or an enforced time-out might well have saved the day. Saving us from our own impulses is perhaps the best and most practical reason to give machines a sense of humor.

A sense of irony, and of how a witty subversion of a jaded nostrum can soften the sting of implied criticism, can give real weight to a machine’s interventions, and make machines a pleasure to use even when their target is us. To get there, we need to take their sense of humor beyond the playground, where puns reign supreme, into the realm of ideas, so as to overturn the unearned wisdom of convention.

As this playground metaphor suggests, the path from puns to higher forms of humor runs through a comprehensive education in all the things that make us human. There is a good reason that singles seek partners with a GSOH (a good sense of humor) in dating profiles. Jokes are fun, but it is what they are built upon – an understanding of others, a willingness to laugh at ourselves, and an agility with norms that pass themselves off as rules – that matters most to us humans.

Tony Veale is Associate Professor of Computer Science at University College Dublin, specializing in computational creativity. He is the co-author of “Twitterbots: Making Machines That Make Meaning” and author of “Your Wit Is My Command: Building AI With a Sense of Humor.”

This article was first published by MIT Press Reader and has been republished here with permission.