Because these new chatbots learn their skills by analysing huge amounts of data posted to the internet, they have a way of blending fiction with fact. They deliver information that can be biased against women and people of colour. They can generate toxic language, including hate speech.
All of that could turn people against Google and damage the corporate brand it has spent decades building. As OpenAI has shown, newer companies may be more willing to risk those complaints in exchange for growth.
Even if Google perfects chatbots, it must tackle another issue: Does this technology cannibalise the company’s lucrative search ads? If a chatbot is responding to queries with tight sentences, there is less reason for people to click on advertising links.
“Google has a business model issue,” said Amr Awadallah, who worked for Yahoo and Google and now runs Vectara, a start-up that is building similar technology. “If Google gives you the perfect answer to each query, you won’t click on any ads.”
Sundar Pichai, Google’s CEO, has been involved in a series of meetings to define Google’s AI strategy, and he has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses, according to a memo and audio recording obtained by The New York Times. Employees have also been tasked with building AI products that can create artwork and other images, such as OpenAI’s DALL-E technology, which has been used by more than 3 million people.
From now until a major conference expected to be hosted by Google in May, teams within Google’s research, Trust and Safety, and other departments have been reassigned to help develop and release new AI prototypes and products.
As the technology advances, industry experts believe, Google must decide whether it will overhaul its search engine and make a full-fledged chatbot the face of its flagship service.
Google has been reluctant to share its technology broadly because, like ChatGPT and similar systems, it can generate false, toxic and biased information. LaMDA is available to only a limited number of people through an experimental app, AI Test Kitchen.
Google sees this as a struggle to deploy its advanced AI without harming users or society, according to a memo viewed by the Times. In one recent meeting, a manager acknowledged that smaller companies had fewer concerns about releasing these tools but said Google must wade into the fray or the industry could move on without it, according to an audio recording of the meeting obtained by the Times.
Other companies have a similar problem. In 2016, Microsoft released a chatbot, called Tay, that spewed racist, xenophobic and otherwise filthy language, forcing the company to remove it from the internet immediately, never to return. In recent weeks, Meta took down a newer chatbot for many of the same reasons.
Executives said in the recorded meeting that Google intended to release the technology that drove its chatbot as a cloud computing service for outside businesses and that it might incorporate the technology into simple customer support tasks. It will maintain its trust and safety standards for official products, but it will also release prototypes that do not meet those standards.
It may limit those prototypes to 500,000 users and warn them that the technology could produce false or offensive statements. Since its release on the last day of November, ChatGPT — which can produce similarly toxic material — has been used by more than 1 million people.
“A cool demo of a conversational system that people can interact with over a few rounds, and it feels mind-blowing? That is a good step, but it is not the thing that will really transform society,” Zoubin Ghahramani, who oversees the AI lab Google Brain, said in an interview with the Times last month, before ChatGPT was released. “It is not something that people can use reliably on a daily basis.”
Google has already been working to enhance its search engine using the same technology that underpins chatbots like LaMDA and ChatGPT. The technology — a “large language model” — is not merely a way for machines to carry on a conversation.
Today, this technology helps the Google search engine highlight results that aim to directly answer a question you have asked. In the past, if you typed “Do aestheticians stand a lot at work?” into Google, it did not understand what you were asking. Now, Google correctly responds with a short blurb describing the physical demands of life in the skin care industry.
Many experts believe Google will continue to take this approach, incrementally improving its search engine rather than overhauling it. “Google Search is fairly conservative,” said Margaret Mitchell, who was an AI researcher at Microsoft and Google, where she helped to start its Ethical AI team, and is now at the research lab Hugging Face. “It tries not to mess up a system that works.”
Other companies, including Vectara and a search engine called Neeva, are working to enhance search technology in similar ways. But as OpenAI and other companies improve their chatbots — working to solve problems with toxicity and bias — this could become a viable replacement for today’s search engines. Whoever gets there first could be the winner.
“Last year, I was despondent that it was so hard to dislodge the iron grip of Google,” said Sridhar Ramaswamy, who previously oversaw advertising for Google, including Search ads, and now runs Neeva. “But technological moments like this create an opportunity for more competition.”
This article originally appeared in The New York Times.