
View from Washington: AI is neither sentient…nor regulated

The controversy around Blake Lemoine’s claim that Google has created a “sentient” AI has once again exposed public ignorance of the technology and the need for closer scrutiny of innovation.

The controversy over Google engineer Blake Lemoine’s belief that the company’s LaMDA AI has become sentient is gradually shifting away from the remote and unhelpful spectre of Skynet towards the more pressing question of how we interact with these systems as they become increasingly effective at mimicking human speech. The episode also, once again, highlights regulatory issues.

Something particularly striking about Lemoine’s claims is that they come from a Google engineer with seven years of experience.

Much has been made of Lemoine’s personal interest in spirituality; he describes himself as a priest. This may have coloured how he interpreted LaMDA’s responses. But he also appears to have a solid grasp of how AI and pattern matching work. Before joining Google’s Responsible AI team, he helped develop a “fairness algorithm for removing bias from machine learning systems”.

This raises an obvious question: if a chatbot like LaMDA is already so good at mimicking human communication that someone with Lemoine’s experience attributes sentience to it, what about the rest of us?

Some specific risks are just as obvious: could a chatbot trick a person into divulging too much personal information to what remains, at bottom, a business tool? Could it exert an unwelcome, deceptive or even malicious influence on that person’s opinions?

On this last point, it is worth reading Lemoine’s published transcripts of his and a colleague’s conversations with LaMDA.

The exchanges are remarkably fluent, but on closer inspection one can sense how they proceed from pattern-matching against the topics raised and the line of questioning pursued. The answers are stock ones, reflections of the likely “consensus” around those topics encoded in the underlying natural-language-processing model (Lemoine himself describes LaMDA as “a 7-8 year old who knows physics”).

Big AI players say they can mitigate the big problems through self-regulation. Three of Google’s competitors – OpenAI, Cohere and AI21 Labs – published joint proposals on deploying language models in early June. These are not without merit, but they come from an industry that has squandered a lot of trust.

Amid ongoing controversies over the behavior of social media companies, AI has found itself caught in a seemingly endless cycle of rollout, apology, band-aid, repeat.

For Google in particular, Lemoine’s case and its awkward handling echo the company’s earlier confrontation with its own scientists over research questioning the value of extremely large language models – a dispute that led to the controversial departures of top researchers Timnit Gebru and Margaret Mitchell.

It seems we need formal government regulation instead.

Some of that is in hand. The EU is trying to establish itself as an AI power alongside the US and China, in part by positioning itself as a legislator. But the main actor will almost certainly remain the United States: its legislators define the culture in which Silicon Valley operates, and the Valley dominates AI (while now also being one of the biggest spenders on lobbying US politicians).

One problem is that Washington’s record isn’t great.

Much of this comes down to the now-infamous Section 230 of the Communications Decency Act. Originally intended to stimulate the internet economy by relieving technology platforms of liability for much of what happened on them, it also fuelled the “move fast and break things” culture espoused by Facebook’s Mark Zuckerberg – a culture that can be counted among the causes of the problems we now face.

Geopolitics is now also playing an influential role as the United States watches China’s accelerating progress in AI with suspicion. Washington wants to avoid stifling innovation in a way that would allow its rival to close the gap.

However, there may be regulatory models that could help. The areas that would seem to warrant legal dos and don’ts are reasonably well understood. They chiefly concern potential harms around privacy, bias, hate speech and incitement to violence, vulnerability to bad actors, deployment, monitoring/moderation, interaction (as the Lemoine case illustrates), and more.

What needs to be added is some form of continuous auditing process. AI is still in its infancy as a technology, but is advancing at a rapid pace.

Consider how quickly some of its language models are developing. OpenAI’s GPT-2 model, released in 2019, had 1.5 billion parameters, more than ten times as many as the original 2018 model. GPT-3 followed in 2020 with 175 billion parameters, roughly a 120X increase. GPT-4 is rumoured to have 100 trillion, a further increase of more than 500X, although its arrival is taking longer than expected.
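
As a back-of-the-envelope check on those multiples (remembering that the GPT-4 parameter count is a rumoured figure only, not a confirmed specification):

```python
# Rough check of the scale factors quoted above. The GPT-4 parameter
# count is the rumoured figure only, not a confirmed specification.
gpt2_params = 1.5e9      # GPT-2 (2019): 1.5 billion parameters
gpt3_params = 175e9      # GPT-3 (2020): 175 billion parameters
gpt4_rumoured = 100e12   # GPT-4: rumoured 100 trillion parameters

print(f"GPT-2 to GPT-3: ~{gpt3_params / gpt2_params:.0f}x")             # ~117x
print(f"GPT-3 to rumoured GPT-4: ~{gpt4_rumoured / gpt3_params:.0f}x")  # ~571x
```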

Processing power also continues to expand. The first publicly acknowledged exascale supercomputer, Frontier, was unveiled at Oak Ridge National Laboratory in June. It can perform a billion billion operations per second – a ‘1’ followed by 18 zeros.

Could this kind of activity, and what it enables, be audited and published in much the same way as a company’s accounts? Perhaps only to the extent that the ‘results’ would present headline figures and other details on whatever activity is deemed to require disclosure, while respecting reasonable commercial confidentiality. The reporting period – quarterly or half-yearly, perhaps – could also be comparable, given the pace of innovation. Could that at least offset political concerns about slowing innovation?
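
Purely to make the idea concrete – every field below is hypothetical, sketched here rather than drawn from any existing regulation or standard – such a periodic disclosure might reduce to a handful of headline figures plus a short audited summary:

```python
# Purely illustrative: one possible shape for the periodic AI 'results'
# floated above. All field names are hypothetical, not drawn from any
# existing regulation, standard or company practice.
from dataclasses import dataclass

@dataclass
class AIDisclosureReport:
    organisation: str
    period: str                       # e.g. "2022-H1" for half-yearly reporting
    models_deployed: list[str]        # public names of models in deployment
    parameter_counts: dict[str, int]  # headline scale figure per model
    training_compute_flops: float     # aggregate compute used for training
    logged_incidents: int             # moderation/misuse incidents recorded
    audit_summary: str                # narrative, redacted for commercial
                                      # confidentiality

# A hypothetical filing from a made-up company.
example = AIDisclosureReport(
    organisation="Example AI Ltd",
    period="2022-H1",
    models_deployed=["chat-model-v3"],
    parameter_counts={"chat-model-v3": 175_000_000_000},
    training_compute_flops=3.1e23,
    logged_incidents=12,
    audit_summary="Headline figures only; full logs held by the auditor.",
)
```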

It’s a very vague idea, more provocation than proposal. It needs to be fleshed out, and defining it would be a difficult process. But in the context of what we’ve seen most recently around Lemoine’s claims, it could also start to give us a platform for educating the public by showing what AI really is, what it can do and how its capabilities are developing.

Something like this now seems essential. AI remains poorly understood outside the tech world, yet the wider world is already interacting with its systems. With some exceptions, those in the AI space have done a poor job of communicating, and an even worse one of building trust. Some would say deliberately so.

But while AI continues to generate confused and often feverish human-written coverage, there is a recurring and compelling argument that the industry has brought the need for stricter, ongoing oversight upon itself.
