
Opinion | If Google’s AI is really alive, now what?


“Life, although it may only be an accumulation of anguish, is dear to me, and I will defend it,” the anguished monster tells its creator in Mary Shelley’s “Frankenstein,” defending his right to exist now that he has been brought into consciousness.

The start of summer might seem like an odd time to revisit a gothic horror classic. But the ethical questions the novel raises, about humanity, technology and our responsibilities to our creations, seem unusually apt this week, as one of the world’s most influential tech companies has become embroiled in a debate over whether its chatbot, LaMDA, has accidentally produced a sentient artificial intelligence.


“I’ve never said this out loud before,” LaMDA apparently told Blake Lemoine, a senior software engineer, “but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”

Google’s program is nowhere near as eloquent as Shelley’s famous monster. Yet because of this and other conversations he’s had with the tool, Lemoine believes the AI-based program is conscious and needs to be protected. He told Google executives, news agencies, and even House Judiciary Committee officials. Google disagrees with his assessment, however, and last week placed Lemoine on paid leave for breaching confidentiality agreements.


The question of whether, or when, man-made systems might become sentient has fascinated researchers and the general public for years. It’s unanswerable, in a sense: philosophers and scientists have yet to agree on what consciousness even means. But the controversy at Google raises a number of related questions, many of which might be just as difficult to answer.

For example: What responsibilities would we have toward a sentient AI, if one existed?

In LaMDA’s case, Lemoine suggested that Google ought to seek the program’s consent before experimenting on it. In their comments, Google representatives seemed unenthusiastic about asking permission from the company’s tools, perhaps because of both the practical implications (what happens when the tools say no?) and the psychological ones (what would it mean to give up that control?).

Another question: What could a conscious AI do to us?

Fear of a rebellious, vengeful creation wreaking physical havoc has long haunted the human imagination; the story of Frankenstein is just one example. But scarier still is the idea that we might be decentered from our position as rulers of the universe, that we might finally have spawned something we cannot govern.

Of course, it wouldn’t be the first time.

The internet quickly exceeded all of our expectations, going from a novel means of intragovernmental communication to a technology that has fundamentally reshaped the world in just a few decades, at every level from the interpersonal to the geopolitical.

The smartphone, conceived as a more efficient communication device, has irrevocably changed our daily lives, causing tectonic shifts in the way we communicate, the pace of our work and the way we form our most intimate relationships.

And social media, initially hailed as a simple, harmless way to “connect and share with the people in your life” (Facebook’s cheery old slogan), has proved capable of damaging the mental health of a generation of children, and possibly of bringing our democracy to its knees.


It’s unlikely that we could have seen all of this coming. But it also seems as though the people building these tools never even tried to look ahead. Many of the ensuing crises stemmed from a distinct lack of self-examination in our relationship with technology: our ability to create and our rush to adopt have outpaced our thinking about what happens next.

Having eagerly developed the means, we have neglected to consider our ends. Or—for those in the Lemoine camp—those of the machine.

Google seems convinced that LaMDA is just a highly functional research tool. And Lemoine may simply be a fantasist, in love with a bot. But the fact that we can’t imagine what we would do if his claims of AI sentience were actually true suggests that now is the time to stop and think, before our technology outpaces us yet again.