Human language

AI is mastering language. Should we trust what it says?

“I think it allows us to be more thoughtful and deliberate about safety issues,” Altman says. “Part of our strategy is this: gradual change in the world is better than sudden change.” Or, as OpenAI VP Mira Murati put it when I asked her about the safety team’s work restricting open access to the software, “If we’re going to learn how to deploy these powerful technologies, let’s start when the stakes are very low.”

While GPT-3 itself runs on those 285,000 processor cores in the Iowa supercomputer cluster, OpenAI operates out of San Francisco’s Mission District, in a refurbished luggage factory. In November of last year, I met Ilya Sutskever there, seeking a layman’s explanation of how GPT-3 actually works.

“Here’s the idea behind GPT-3,” Sutskever said intently, leaning forward in his chair. He has an intriguing way of answering questions: a few false starts (“I can give you a description that almost matches the one you asked for”), interrupted by long, contemplative pauses, as if he’s plotting the whole answer in advance.

“The idea behind GPT-3 is a way to tie an intuitive notion of understanding to something that can be measured and understood mechanically,” he finally said, “and that is the task of predicting the next word in a text.” Other forms of artificial intelligence attempt to hard-code information about the world: grandmaster chess strategies, principles of climatology. But GPT-3’s intelligence, if intelligence is the right word, comes from the bottom up: through the basic act of predicting the next word.

To train GPT-3, the model is given a “prompt” – a few sentences or paragraphs of text from, say, a newspaper article, a novel or a scientific paper – and then asked to suggest a list of potential words that could complete the sequence, ranked by probability. In the early stages of training, the suggested words are nonsense. Prompt the algorithm with a sentence like “The writer omitted the very last word of the first . . .” and the guesses will be a kind of stream of nonsense: “satellite,” “puppy,” “Seattle,” “so.” But somewhere down the list – perhaps thousands of words down – the correct missing word appears: “paragraph.” The software then strengthens whatever random neural connections generated that particular suggestion and weakens the connections that generated incorrect guesses. Then it moves on to the next prompt. Over time, with enough iterations, the software learns.
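To make that training loop concrete, here is a deliberately tiny sketch in Python. It is not how GPT-3 is implemented – GPT-3 is a Transformer network whose billions of parameters are adjusted by gradient descent – and the toy corpus and function names are invented for illustration. It only mirrors the loop described above: rank candidate next words by probability, then strengthen whatever produced the word that actually followed.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction (NOT GPT-3's architecture):
# a bigram counter that "trains" by strengthening the link between a
# context word and the word that actually followed it in the corpus.
corpus = (
    "the writer has omitted the very last word of the first paragraph "
    "the writer has revised the first paragraph"
).split()

counts = defaultdict(Counter)  # counts[context][next_word]

for context, next_word in zip(corpus, corpus[1:]):
    # "Training" step: reinforce the observed continuation.
    # (GPT-3 does this with gradients over parameters, not integer counts.)
    counts[context][next_word] += 1

def predict(context, top_k=3):
    """Return candidate next words ranked by estimated probability."""
    options = counts[context]
    total = sum(options.values()) or 1
    return [(word, n / total) for word, n in options.most_common(top_k)]

print(predict("the"))  # e.g. [('writer', 0.4), ('first', 0.4), ('very', 0.2)]
```

The point of the sketch is only the shape of the procedure: at every step the model produces a ranked list of guesses, and the one that matches the real text is the signal it learns from.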

Last January, OpenAI added a feature that allowed users to give GPT-3 direct instructions as a prompt, rather than simply asking it to expand on a sample passage of text. For example, using “instruct” mode, I once gave GPT-3 the prompt: “Write an essay on the role of metafiction in the work of Italo Calvino.” In response, the software returned a five-paragraph essay beginning as follows:

Italian author Italo Calvino is considered a master of metafiction, a genre of writing in which the author breaks the fourth wall to discuss the act of writing itself. For Calvino, metafiction is a way of exploring the nature of reality and how stories can shape our perceptions of the world. His novels often incorporate playful and labyrinthine structures that play with the boundaries between reality and fiction. In If on a winter’s night a traveler, for example, the reader is constantly interrupted by meta-level discussions about the act of reading and the nature of storytelling. . . .

You can give GPT-3 the exact same prompt over and over again, and each time it will generate a unique response, some of them more persuasive than others, but almost all of them remarkably articulate. Instruct prompts can take all kinds of forms: “Give me a list of all the ingredients for Bolognese sauce,” “Write a poem about a French coastal village in the style of John Ashbery,” “Explain the Big Bang in language an 8-year-old will understand.” The first few times I fed GPT-3 prompts of this ilk, I felt a real chill run up my spine. It seemed almost impossible that a machine could generate such lucid, responsive text entirely based on elementary next-word prediction training.
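For readers curious what sending such an instruct-style prompt looks like in practice, the sketch below uses the pre-1.0 `openai` Python package’s Completion interface. The model name, key and sampling settings are placeholders chosen for illustration, not OpenAI’s recommended configuration; because the temperature is above zero, the model samples from its ranked next-word probabilities rather than always taking the single most likely word, which is why identical prompts yield different essays.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

PROMPT = "Write an essay on the role of metafiction in the work of Italo Calvino."

# Request three completions of the same prompt. With temperature > 0,
# each completion is sampled differently, so the three essays will differ.
response = openai.Completion.create(
    model="text-davinci-002",  # placeholder instruct-style model name
    prompt=PROMPT,
    max_tokens=300,
    temperature=0.7,
    n=3,
)

for i, choice in enumerate(response.choices, start=1):
    print(f"--- Completion {i} ---")
    print(choice.text.strip())
```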

But AI has a long history of creating the illusion of intelligence or understanding without actually delivering the goods. In a much-discussed paper published last year, University of Washington linguistics professor Emily M. Bender, ex-Google researcher Timnit Gebru and a group of co-authors argued that large language models are merely “stochastic parrots”: that is, the software uses randomization simply to remix sentences written by humans. “What has changed is not the crossing of some threshold toward ‘AI,’” Bender told me recently via email. Rather, she says, what has changed is “the hardware, software and economic innovations that enable the accumulation and processing of huge datasets” – as well as a technological culture in which “the people who build and sell such things can get away with building them on non-curated databases.”