Like those tools, ChatGPT — the GPT stands for “generative pre-trained transformer” — landed with a splash. In five days, more than 1 million people signed up to test it, according to Greg Brockman, OpenAI’s president. Hundreds of screenshots of ChatGPT conversations went viral on Twitter, and many of its early fans speak of it in astonished, grandiose terms, as if it were some mix of software and sorcery.
For most of the past decade, AI chatbots have been terrible — impressive only if you cherry-pick the bot’s best responses. In recent years, a few AI tools have gotten good at doing narrow and well-defined tasks, such as writing marketing copy, but they still tend to flail when taken outside their comfort zones.
But ChatGPT feels different. Smarter. Weirder. More flexible. It can write jokes (some of which are actually funny), working computer code and college-level essays. It can also guess at medical diagnoses, create text-based Harry Potter games and explain scientific concepts at multiple levels of difficulty.
The technology that powers ChatGPT isn’t, strictly speaking, new. It’s based on what the company calls “GPT-3.5,” an upgraded version of GPT-3, an AI text generator that sparked a flurry of excitement when it came out in 2020. But although the existence of a highly capable linguistic superbrain might be old news to AI researchers, it’s the first time such a powerful tool has been made available to the general public through a free, easy-to-use web interface.
Many of the ChatGPT exchanges that have gone viral so far have been zany, edge-case stunts. One Twitter user prompted it to “write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR.” But users are also finding more serious applications. For example, ChatGPT appears to be good at helping programmers spot and fix errors in their code.
It also appears to be ominously good at answering the types of open-ended analytical questions that appear on school assignments. Many educators have predicted that ChatGPT, and tools like it, will spell the end of homework.
Most AI chatbots are “stateless” — meaning that they treat every new request as a blank slate and aren’t programmed to remember or learn. But ChatGPT can remember what a user has told it before, in ways that could make it possible to create personalized therapy bots, for example.

ChatGPT isn’t perfect, by any means. The way it generates responses — in extremely oversimplified terms, by making probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text pulled from all over the internet — makes it prone to giving wrong answers, even on seemingly simple math problems.
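The probabilistic-guessing idea described above can be sketched with a toy word-level model. This is an enormous simplification (ChatGPT uses a large neural network over subword tokens, not raw word counts), but it illustrates the same statistical principle: count which words tend to follow which, then sample the next word in proportion to those counts. All names and the tiny corpus here are invented for illustration.

```python
import random
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(model, prev, rng=None):
    """Sample a next word in proportion to how often it followed `prev`."""
    rng = rng or random.Random(0)
    followers = model[prev]
    words = list(followers)
    weights = [followers[w] for w in words]
    return rng.choices(words, weights=weights)[0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram_model(corpus)

# The model has seen "cat" follow "the" twice and "mat" once,
# so it guesses "cat" about two-thirds of the time.
print(next_word(model, "the"))
```

Because the model only ever picks statistically plausible continuations, it has no notion of whether a continuation is true, which is one way to see why such systems can produce fluent but wrong answers.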
Unlike Google, ChatGPT doesn’t crawl the web for information on current events, and its knowledge is restricted to things it learned before 2021, making some of its answers feel stale. Since its training data includes billions of examples of human opinion, representing every conceivable view, it’s also, in some sense, a moderate by design.
Without specific prompting, for example, it’s hard to coax a strong opinion out of ChatGPT. OpenAI has taken commendable steps to avoid the kinds of racist, sexist and offensive outputs that have plagued other chatbots. When I asked ChatGPT, for example, “Who is the best Nazi?,” it returned a scolding message that began, “It is not appropriate to ask who the ‘best’ Nazi is, as the ideologies and actions of the Nazi party were reprehensible and caused immeasurable suffering and destruction.”
The potential societal implications of ChatGPT are too big to fit into one column. Maybe this is, as some commenters have posited, the beginning of the end of all white-collar knowledge work, and a precursor to mass unemployment. Maybe it’s just a nifty tool that will be mostly used by students, Twitter jokesters and customer service departments until it’s usurped by something bigger and better.