Sundar Pichai has been trying to start an A.I. revolution for a very long time.
In 2016, shortly after being named Google’s chief executive, Mr. Pichai declared that Google was an “A.I.-first” company. He spent lavishly to assemble an all-star team of A.I. researchers, whose breakthroughs powered changes to products like Google Translate and Google Photos. He even predicted that A.I.’s impact would be bigger than “electricity or fire.”
So it had to sting when A.I.’s big moment finally arrived, and Google wasn’t involved.
Instead, OpenAI — a scrappy A.I. start-up backed by Microsoft — stole the spotlight in November by releasing ChatGPT, a poem-writing, code-generating, homework-finishing marvel. ChatGPT became an overnight sensation, attracting millions of users and kicking off a Silicon Valley frenzy. It made Google look sluggish and vulnerable for the first time in years. (It didn’t help when Microsoft relaunched its Bing search engine with OpenAI’s technology inside, instantly ending Bing’s decade-long run as a punchline.)
In an interview with The Times’s “Hard Fork” podcast on Thursday, his first extended interview since ChatGPT’s launch, Mr. Pichai said he was glad that A.I. was having a moment, even if Google wasn’t the driving force.
“It’s an exciting moment, regardless of whether we had done it,” Mr. Pichai said. “Obviously, you always wish you had done it.”
It’s been a wild few months at Google. In December, shortly after ChatGPT’s release, someone in management — Mr. Pichai swears it wasn’t him — declared a “code red,” instructing employees to shift time and resources toward A.I. projects. The company also established a fast-track review process to get A.I. projects out more quickly. And Larry Page and Sergey Brin, Google’s co-founders, who took a hands-off approach for years, rolled up their sleeves to help. The company plans to release a raft of new A.I. products this year and plug the technology into many of its existing ones. (This week, it began testing a new Gmail feature that allows users to compose A.I.-generated emails.)
On Thursday, Mr. Pichai expressed both optimism and worry about the state of the A.I. race.
He gave a blunt assessment of Bard, the ChatGPT competitor that Google released last week to tepid reviews: “I feel like we took a souped-up Civic and kind of put it in a race with more powerful cars.” (He also broke some news: Bard, which currently runs on a version of an A.I. language model called LaMDA, will soon be upgraded to a more powerful model, known as PaLM.)
He reacted to a recent open letter, signed by nearly 2,000 technology leaders and researchers, that urged companies to pause development of powerful A.I. systems for at least six months to prevent “profound risks to society.” Mr. Pichai doesn’t agree with all of the letter’s details — and he wouldn’t commit to slowing down Google’s A.I. efforts — but he said that the letter’s cautionary message was “worth being out there.”
And he talked about the “whiplash” he often feels when it comes to A.I. these days, as some people urge companies like Google to move faster on A.I., release more products and take bigger risks, while others urge them to slow down and be more cautious.
“You will see us be bold and ship things,” he said, “but we are going to be very responsible in how we do it.”
Here are some other highlights of Mr. Pichai’s remarks:
On the initial, lukewarm reception for Google’s Bard chatbot:
We knew when we were putting Bard out, we wanted to be careful … So it’s not surprising to me that’s the reaction. But in some ways, I feel like we took a souped-up Civic and kind of put it in a race with more powerful cars. And what surprised me is how well it does on many, many, many classes of queries. But we are going to be iterating fast. We clearly have more capable models. Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities, be it in reasoning, coding, it can answer math questions better. So you will see progress over the course of next week.
On whether ChatGPT’s success came as a surprise:
With OpenAI, we had a lot of context. There are some incredibly good people, some of whom had been at Google before, and so we knew the caliber of the team. So I think OpenAI’s progress didn’t surprise us. I think ChatGPT … you know, credit to them for finding something with product-market fit. The reception from users, I think, was a pleasant surprise, maybe even for them, and for a lot of us.
On his worries about tech companies racing toward A.I. advancements:
Sometimes I get concerned when people use the word “race” and “being first.” I’ve thought about A.I. for a long time, and we are definitely working with technology which is going to be incredibly beneficial, but clearly has the potential to cause harm in a deep way. And so I think it’s very important that we are all responsible in how we approach it.
On the return of Larry Page and Sergey Brin:
I’ve had a few meetings with them. Sergey has been hanging out with our engineers for a while now. He’s a deep mathematician and a computer scientist. So to him, the underlying technology, I think if I were to use his words, he would say it’s the most exciting thing he has seen in his lifetime. So it’s all that excitement. And I’m glad. They’ve always said, “Call us whenever you need to.” And I call them.
On the open letter, signed by nearly 2,000 A.I. researchers and tech luminaries including Elon Musk, that urged companies to pause development of powerful A.I. systems for at least six months:
In this area, I think it’s important to hear concerns. There are many thoughtful people behind it, including people who have thought about A.I. for a long time. I remember talking to Elon eight years ago, and he was deeply concerned about A.I. safety then. I think he has been consistently concerned. And I think there is merit to be concerned about it. While I may not agree with everything that’s there and the details of how you would go about it, I think the spirit of [the letter] is worth being out there.
On whether he’s worried about the danger of creating artificial general intelligence, or A.G.I., an A.I. that surpasses human intelligence:
All those are good questions. But to me, it almost doesn’t matter because it is so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you reached A.G.I. or not; you’re going to have systems which are capable of delivering benefits at a scale we’ve never seen before, and potentially causing real harm. Can we have an A.I. system which can cause disinformation at scale? Yes. Is it A.G.I.? It really doesn’t matter.
On why climate change activism makes him hopeful about A.I.:
One of the things that gives me hope about A.I., like climate change, is it affects everyone. Over time, we live on one planet, and so these are both issues that have similar characteristics in the sense that you can’t unilaterally get safety in A.I. By definition, it affects everyone. So that tells me the collective will will come over time to tackle all of this responsibly.