Google CEO Sundar Pichai on the future of search, AI agents, and selling Chrome

Today, I’m talking with Alphabet and Google CEO Sundar Pichai. We recorded this conversation in person after the Google I/O developer conference last week in what’s becoming a bit of a Decoder tradition. This is the third year that we’ve done Decoder after I/O, and this one felt really different. Google is in a very confident place right now, and you can really feel that in this conversation.

If you caught any of the news from I/O, you know that it was all about AI — particularly a huge new set of AI products, not just models and capabilities. Sundar told me that these products represent a new phase of the AI platform shift, and we talked about that at length: how that shift is playing out, what the markers of these phases are, and whether any of these products can actually deliver a return on the huge investments Google has made in AI over the years.


This year’s I/O also marked the beginning of what appears to be a new era for search and the web. Google’s new vision for search goes well beyond links to webpages to something that feels a lot more like custom app development. When you search for something, Google’s new AI Mode will build you a custom search results page, including interactive charts and potentially other kinds of apps, in real time.

You can see that vision in the new AI Mode, which is now available to all US users. Google’s plan is to “graduate” features from AI Mode into the main search experience over time. I wanted to know how Sundar was thinking about that graduation process and how he thinks that will affect the web itself, which is shaped more than anything by the incentives of Google Search and SEO.

You’ll hear Sundar say in several different ways that the web is still getting bigger and Google is sending more traffic to more websites than ever before, but the specifics of that are hotly contested. Just before we talked, the News Media Alliance — the trade group that represents publishers like Condé Nast, The New York Times, and The Verge’s parent company Vox Media — issued what can only be described as a furious statement, calling AI Mode “theft.” So we talked about that, too, and about what happens to the web when AI tools and eventually agents do most of the browsing for us.

What does it mean for the web when AI tools see it as a series of databases instead of as websites for people to use? Why would companies like Uber, DoorDash, or Airbnb allow their businesses to get commoditized in that way? If you’ve been listening to the show, you know I’ve been talking about this idea a lot, so Sundar and I spent some time on it; it was a real Decoder conversation.

Of course, we also talked about the smart glasses that Google announced at I/O and when the next era of AI hardware might arrive — including what Sundar thinks of the big OpenAI and Jony Ive deal that was announced just before we spoke. And I couldn’t let this go without asking about the major antitrust trials that Google is involved in, including the government’s demand that Google sell Chrome, and what the negotiations with the Justice Department have involved. President Donald Trump has long complained about his search results being too negative, but Sundar told me that he will not change search rankings in response to political pressure, calling search “sacrosanct.”

There’s a lot in this one — I’m eager to hear what you all think of it.

Okay: Alphabet and Google CEO Sundar Pichai. Here we go.

This transcript has been lightly edited for length and clarity. 

Sundar Pichai, you’re the CEO of Alphabet and Google. Welcome back to Decoder.

Nilay, good to be back. Feels like a nice tradition post-I/O to be talking to you. So good to be back.

I think this is the third year we’ve done this after I/O. I’m excited. Thank you for keeping the tradition alive. Lots to talk about. You announced a lot of things yesterday during the keynote. There’s AI Mode rolling out for US users, and big updates to Gemini. There’s Veo 3 and Imagen, the video and image generators, and you even solved Pokémon, which is very exciting.

My takeaway from yesterday was that Google feels very confident now. There’s a real confidence about the technology coming to life and the products. A lot of things are shipping imminently. What’s the one piece that gave you that confidence? Is it just the volume of things that are shipping? Is it one technology that clicked into being ready for consumers? Where is it coming from?

I think it comes from the depth and breadth of the AI frontier we are pushing in a more fundamental and foundational way. We spoke a lot about this theme called research becomes reality, but it is…  We’ve always felt we are a deep computer science company, and we’ve been AI-first for a while. So putting all that together and bringing it to our products at the depth and breadth is what I think is really pleasing to see. For example, people may not have noticed it much. It was so quick. We spoke about text diffusion models in the middle of the whole thing, but we are pushing the frontier on all dimensions, right? And Demis [Hassabis] spoke about world models. So I think that’s the exciting part, like how deep we are pushing this frontier and then bringing it to users, and maybe that’s what makes it feel that way.

You mentioned research into reality several times. Obviously, a lot of these projects have been cooking in the labs for a long time. Over the many years that you and I have talked, you’ve said many times that you think AI will be as profound as electricity.

But you said something yesterday that I think adds to that, which is that we are in a new phase of the platform shift. People have talked about AI being a platform shift for quite a while, but that always has meant to me that there’s a user interface platform shift coming, right? We’re going to interact with computers in natural language in more natural ways, they’ll interact with us back in that same way, and everything will change. Is that the platform shift?

Yeah, you are right. Each of these platform shifts changes many things on the I/O front. Nothing to do with Google I/O, just I/O in the traditional computer science sense. You could feel it. Yesterday, when I watched the Android XR… I’ve used them and played around with them, but watching it, with two people talking in different languages, you can envision the future one day where it’ll actually be seamless. In a way, you couldn’t have done it with phones, you couldn’t have done it without AI because there’s nothing in your way. You’re looking at the other person and talking, right? And that is an element of platform shift, but there are many more elements.

This is the only platform where I think the actual platform is, over time, capable of creating, self-improving, and so on. In a way, we could have never talked about any other platform before, so that’s why I think it’s much more profound than the other platform shifts. It’ll allow people to create new things because, at each layer of the stack, there’s going to be profound improvements. And so I think that virtuous cycle you get in terms of how you can unleash this creative power to all of society, be it software engineers, be it creators — I think that is going to happen in a much more multiplicative way. So when I say it’s a next phase, I’m talking about that part too.

Let me just make that more concrete for people. I think the last platform shift we all understand is the shift to mobile.

That’s right.

And that was: we’re going to have multi-touch, we’re going to have faster cell connections, we’re going to have more processing power that can go with you everywhere. And then there was a layer of applications that was enabled by all of those things. You can push a button, and a Toyota Camry will show up wherever you are in the world. It’s a very powerful thing that requires all of those ideas. How would you describe the phase we’re in now compared to that? The first phase of AI was that the transformers work and the models work, and we can all see this capability. The second phase, what is it to you?

Just imagine when the internet came, blogging became a thing. Pre-internet, very few people had a means by which they could put their thoughts out to the world. With the internet came a new medium, which allowed people to create and express themselves in a new way. With mobile came cameras, and you could shoot and you could create videos. Look at what’s happened with YouTube. For me, a similar part of this is that we are all talking about things like vibe coding. Yesterday, you saw Veo 3, so we are now in that phase. I think people are going to be able to create AI applications, you can call it, vibe coding, there are many names for it. But that power is yet to be unleashed. We are barely scratching the surface, and these models aren’t quite there. You can kind of do one-shot coding, but you really need to be a programmer to iterate and create something with polish, right? But that frontier is evolving pretty rapidly.

So I think you’re going to see a new wave of things, just like mobile did. Just like the internet did. I came to Google at the time when AJAX was the revolution, the fact that the web became dynamic. You had things like Google Maps, Flickr, and Gmail, that all suddenly came into existence. But I think AI is going to turbocharge in a way we haven’t seen before.

It feels like what you’re describing is that we’re in the phase where the products are developed, right? The capabilities were the first phase, and now we’re going to make some actual products.

And more people can build products than ever before. That’s the multiplicative part I’m talking about. It’s not just that this platform helps you create more products; the process of creating, developing, etc., is going to be accessible to a much wider swath of humanity than ever before.

I’m wondering, when you look at the landscape of products that exist now, most people experience AI in Gemini or ChatGPT as a chatbot. It’s a general-purpose interface to a bunch of knowledge that will talk to you. What products do you see that will have the same kind of impact as the Web 2.0 products you were talking about, besides the general-purpose chatbot?

Well, obviously, you’ve seen a wave with coding IDEs, like that entire landscape is… I can’t even keep track of how many new companies have come into it, and people are using a lot of it. And yesterday we showed a bunch of partners with whom we are working. So that’s an area where coding… where AI is making the most progress. You’re seeing the application layer, at least in terms of code editors, really come into vogue. We’ve had success with NotebookLM. We launched Flow yesterday. Flow is a new product that allows you to create and imagine.

So those are all the applications we are doing. I think others are beginning to do it. People are working on legal assistance, and there are all kinds of startups. I was recently in a doctor’s office, and they have an AI that transcribes the whole thing, puts it all in reports, and so on. That’s an enterprise application layer. It kind of works completely differently from when I went on a visit two years ago. So all that change is happening across the board, but I think we are just in the early stages. You will see it play out over the next three to five years in a big way.

Did you ask your doctor what model their transcription software is running on?

No, I didn’t. No, I didn’t.

One of the reasons I’m asking this, and I’m pushing on this, is that the huge investment in the capability from Google and others has to pay off in some products that return on that investment. NotebookLM is great. I don’t think it’s going to fully return on Google’s data center investment, let alone the investment in pure AI research. Do you see a product that can return on that investment at scale?

If you had looked at Gmail in 2004, when it was a 20% project that people were using internally as an email service, how would you have been able to think about it as what would lead us to do Workspace, or get into the enterprise? I made a big bet on Google Cloud, which is tens of billions of dollars in revenue today. And so my point is that things build out over time. Think about the journey we have been on with Waymo. I think one of the mistakes people often make in a period of rapid innovation is thinking about the next big business versus looking at the underlying innovation and saying, “Can you build something and put out something which people love and use?” And out of that, you do the next thing and create value from it.

So when I look at it, AI is such a horizontal piece of technology across our entire business. It’s why it impacts not just Google search, but YouTube, Cloud, and all of Android. You saw XR, etc., Google Play, things like Waymo, and Isomorphic Labs, which is based on AlphaFold. So I’ve never seen one piece of technology that can impact and help us create so many businesses. AI is going to be so useful as an assistant. I think that people will be willing to pay for it, too. We are introducing subscription plans, and so there’s a lot of headroom ahead, I think. And obviously, that’s why we are investing, because we see that opportunity. Some of it will take time, and it may not always be immediately obvious.

I gave the Waymo example. The sentiment on Waymo was quite negative three years ago. But actually, as a company, we increased our investment in Waymo at that time, right? Because you’re betting on the underlying technology and you’re seeing the progress of where it’s going. But these are good questions. In some ways, if you don’t realize the opportunities, that may constrain the pace of investment in this area, but I’m optimistic we’ll be able to unlock new opportunities.

One reason I wanted to start here as the foundation of the conversation is that you showed off Android XR yesterday. You showed off some prototype glasses, and you have some partners making glasses. A lot of people think augmented reality glasses powered by AI will be the realization of the full platform shift, right? 

You will have an always-on assistant that can look at the world around you. You showed some of those demos yesterday. The form factor will change, and the interface will change. This market will be as big as smartphones were. How close do you think we are to that as a mainstream product?

It was a nice reflective moment, going all the way back to Google Glass. Wearing the product, I think there’s a difference between goggles and glasses, which everyone at The Verge understands as well, but obviously, we are also shipping goggles. We have announced products with Samsung to come later this year.

On the XR side, I’m excited about our partnership with Gentle Monster and Warby Parker. We’ll have products in the hands of developers this year, but I think those products will be pretty close to what people eventually see as final products. I’m excited. I think the pace is actually pretty palpable. I’d be shocked if you and I were sitting next year and I wasn’t wearing one of those when I’m doing this.

Do you think that will be a mainstream iPhone-level replacement product? Because there’s a lot of hardware that needs to be developed along the way to pull that off.

You’re wearing something on your face. I have a prescription. The bar is higher in terms of making the experience seamless enough that you’re willing to wear it on your face and enjoy it. I don’t necessarily think next year [it will be] as mainstream as what you’re talking about, but would millions of people be trying it? I think so, yeah. Both are true. 

I have to ask you… Just before we sat down, OpenAI announced that Jony Ive was selling a company he had started called “io,” and that Ive and his design consultancy LoveFrom would take over design. They didn’t announce a product, but they said it’s the future of computing and it’s coming next year. Do you anticipate more of that competition, that your competitors who don’t have a smartphone operating system will go even harder in this direction?

I’m looking forward to an OpenAI announcement ahead of Google I/O, the night before. First of all, look, stepping back, I mean Jony Ive is one of a kind. You look at his track record over the years, I’ve met him only once or twice, but I’ve admired his work, obviously like so many of us. I think it’s exciting. This is why I feel like there’s so much innovation ahead, and I think people tend to underestimate this moment. In some ways, I always like to point out that when the internet happened, Google didn’t even exist.

I think AI is going to be bigger than the internet. There are going to be companies, products, and categories created that we aren’t aware of today. I think the future looks exciting. I think there’s a lot of opportunity to innovate around hardware form factors at this moment with this platform shift. I’m looking forward to seeing what they do. We are going to be doing a lot as well. I think it’s an exciting time to be a consumer, it’s an exciting time to be a developer. I’m looking forward to it.

Ive, in that video, described the phone and the laptop as legacy platforms, which is very interesting considering his own history. Are you all the way there that the phone and the laptop are legacy platforms?

If anything, I’ve found through this AI moment that I’m using the web a lot more, because it’s easier to create a Veo 3 video in my browser on a big screen, right? And so I think the way I’ve internalized this, computing will be available, and you don’t have to make these hard choices. Computing will become so essential to you. You’re going to have it in multiple ways around you when you need it, right? I use a phone, a tablet, a laptop, and I have my workstation. And so I have the breadth of it, but over time… It makes sense to me that at some point in the future, consuming content by pulling out this black glass display rectangle in front of you and looking at it is not the most intuitive way to do it, but I think it’s going to take some time.

I feel like we could do a full hour just on Android tablets and where they could go. We’re going to come back for that. A big part of what you’re describing implicates search in really big ways, right? We’re going to be surrounded by information. Search, Gemini, or some future Google product will organize that information, take action for you across the web in some way, and you will have a companion. Maybe you only pull out your tablet to watch a video or something. A lot of what’s going on with search has downstream effects on the web, and downstream effects on information providers broadly. Last year, we spent a lot of time talking about those effects. Are you seeing that play out the way that you thought it would?

It depends. I think people are consuming a lot more information, and the web is one specific format. We should talk about the web, but zooming back out, there are new platforms like YouTube and others. I think people are just consuming a lot more information, right? It feels like an expansionary moment.

I think there are more creators, and people are putting out more content. And so people are generally doing a lot more. Maybe people have a little extra time on their hands, and so it’s a combination of all that. On the web, look, things that have been interesting and… We’ve had these conversations for a while. Obviously, in 2015, there was this famous meme, “The web is dead.” I always have it somewhere around, and I look at it once in a while. Predictions like that have existed for a while. I think the web is evolving pretty profoundly. I think that is true. When we crawl and look at the number of web pages available to us, that number has gone up by 45% in the last two years alone, right? That’s a staggering thing to think of.

Can you detect if that volume increase is because more pages are generated by AI or not? This is the thing I may be worried about the most, right?

It’s a good question. We generally have many techniques in search to try and understand the quality of a page, including whether it was machine-generated, etc. That doesn’t explain the trend we are seeing.

Generally, there are more web pages, right? At an underlying level, I think that’s an interesting phenomenon. I think everybody who’s a creator, like you are at The Verge, has to do everything in a cross-platform, cross-format way. I look at the quality of video content you put out; it’s very sophisticated and very different from how The Verge used to be, maybe five to 10 years ago, right? It has profoundly changed. I think things are becoming more dynamic, and cross-format. I think another thing people are underestimating with AI is that AI will make it zero friction to move from one format to another, right? Because our models are natively multimodal. We tease people’s imagination with audio overviews in NotebookLM, right? The fact that you can throw a bunch of documents at it, and you have a podcast that you can join and learn from.

I think this notion of producing content statically, by format, is going to change… I think machines can help translate it. It’s almost like different languages, and they can go seamlessly between them. I think it’s one of the incredible opportunities to be unlocked right ahead. But maybe I didn’t want to drift from the question we were discussing. Look, I think people are producing a lot of content, and I see consumers consuming a lot of content. And we see it in our products; others are seeing it too. That’s how I would probably answer at the highest level.

The way I see it currently is that the web is at an all-time high as an application platform, right? The fact that Figma exists, is as successful as it is, and has a web app as its primary interface is, I think, remarkable. A lot of the products you are talking about are expressed as web apps. Even some of the most interesting search results you showed yesterday are how Google would generate a custom web app for you and display it in a search result to do some data visualization. I think that’s all looking incredible. I think the web as a transaction platform is reaching new highs, especially with rulings that mean smartphone makers have to let people push transactions to the web. There’s something very interesting happening there. As a media platform, it feels like it’s at an all-time low, right? 

Do you mean the web as a media platform?

The web as a media platform, as an information platform. If I were starting The Verge today with 11 of my co-founders and friends, we would start a TikTok channel, and we might start a YouTube channel. We would definitely not start a website with the dependencies we have as a website today. And that’s the dynamic that it feels like AI is pushing on even harder.

I’m not fully sure I agree, right? I think if you were to go and restart The Verge again, I bet you would have an extraordinary web presence.

No, I’ve thought about this a lot. I think at best our web presence would look like a Substack or a Ghost or something, right?

Maybe. I’m not fully sold on that, but you know the space. I acknowledge that you know that space better than I do; I don’t have the intuition here that you do. But look, you say the web as an application platform is at an all-time high, and I’ve looked at it. I was vibe coding with Replit a few weeks ago. The power of what you’re going to be able to create on the web — we haven’t given that power to developers in 25 years.

That is going to come ahead. It’s not exactly clear to me. Maybe today you’re looking at it and say, I wouldn’t put all the investment in because it looks like a lot of investment to do that. But that may not be true two years out, right? If you feel like you would create a TikTok channel, then maybe with 2% extra effort you could have a robust web presence. Why wouldn’t you, right? And so I’m not fully sold on it, but I think it’s a good question to ask. But you have to somehow reconcile that with the fact that overall, web traffic seems to be growing. We see more web pages. Somewhere, we have to explain all of that, too.

The publishers, as they often do, responded to Google I/O announcements. So, after AI Mode was announced yesterday, the News Media Alliance is, I would say, very upset. Here’s a statement from the president of the News Media Alliance: “Links were the last redeeming quality of search that gave publishers traffic and revenue. Now, Google takes content by force and uses it with no return, no economic return. That’s the definition of theft.” And they go on to say the DOJ and lawsuits must address it. That’s pretty furious. That’s not a negotiation, right? That’s a “We just want this to stop.” How do you respond to that very loud set of people who say, “Yeah, okay, maybe it’s growing somewhere, but for us, it’s crushing our businesses.”

First of all, through all the products, AI Mode is going to have sources. And we are very committed as a direction, as a product direction to make… I think part of why people come to Google is to experience that breadth of the web and go in the direction they want to, right? So I view us as giving more context. Yes, there are certain questions that may get answers, but overall… And that’s the pattern we see today, right? And, if anything, over the last year, it’s clear to us that the breadth of area we are sending people to is increasing. And so I expect that to be true with AI Mode as well.

But if it was increasing, wouldn’t they be less angry with you?

You’re always going to have areas where people are robustly debating value exchanges, etc., like app developers and platforms, and that’s not even on the web. When you’re running a platform, there are always going to be these debates. But I would challenge that; I think more than any other company, we prioritize sending traffic to the web. No one sends traffic to the web in the way we do. I look at other companies, newer emerging companies, where they openly talk about it as something they’re not going to do. We are the only ones who make it a high priority, agonize over it, and so on. We’ll engage, and we’ve always engaged. There are going to be debates through it all, but we are committed to, I’ve said this before, everything we do across all… You will see us five years from now sending a lot of traffic out to the web. I think that’s the product direction we are committed to. I think it’s what users look for when they come to Google, and the nature of it will evolve. But I am confident that that’s the direction we’ll continue taking.

Is there public data that shows that AI overviews and AI mode actually send more traffic out than the previous search engine results page?

The way we look at it is… I mean, obviously, we take a lot of… We are definitely sending traffic to a wider range of sources and publishers. And because just like we’ve done over 25 years, we’ve been through the same with featured snippets, the quality of… It’s higher-quality referral traffic, too. And we can observe it because the time that people spend is one metric. And there are other ways by which we measure the quality of our outbound traffic, and it’s also increasing. And overall, through this transition, I think AI is also growing, and the growth compounds over time. So whenever we have worked through these transitions, it ends up positive. That’s how Google has worked for 25 years, and we end up sending more traffic over time. So that’s how I would expect all this to play out.

So why do you think that there is so much general economic turmoil on that side of the house? If you’re sending more traffic and the goal over time is to make sure that that works… We’re a year into it, and it doesn’t seem to have gotten better over there.

No, look, we are sending traffic to a broader source of people. People may be surfacing more content, looking at more content, so someone may individually see less. There are all kinds of… At the end of the day, we are reflecting what users want. If you do the wrong thing, users won’t use our product and go somewhere else. And so you have all these dynamics underway, and I think we have genuinely… We took a long time designing AI overviews, and we are constantly iterating in a way that we prioritize this, sources, and sending traffic to the web.

I mean, my criticism of this industry, just to be clear, is that everyone’s addicted to Google, and it would be better if they weren’t. But they’re addicted to Google, right? And they’re feeling it. And then on top of that, you see… You’ve mentioned several times that overall queries are increasing on Google surfaces, but they’re changing. They’re getting longer, they’re getting more complicated. AI Mode might walk you through several steps. Maybe some people are searching on TikTok now. Eddy Cue, on the stand in the trial the other day, said that searches in Safari dropped last month for the first time in 22 years. That’s a big stat. Obviously, your stock price was affected by it. There was a statement. Is that trend bearing out that the standard Google search is dropping from devices, and different kinds of searches are increasing?

No. We’ve been very clear. We’re seeing overall query growth in search. 

But have you actually seen the drop in Safari?

We have a comprehensive view of how we look at data across the board. There can be a lot of noise in search data, but everything we see tells us that we are seeing query growth, including across Apple’s devices and platforms. Specifically, I think we quantified the query growth from AI Overviews. And what’s healthy is that that growth is continuing over time.

So to step back, and this is what I’ve said before, it feels very far from a zero-sum game to me. I said this last year. I think people are… It’s interesting we spoke about TikTok, right? Think about how profound of a new product TikTok was. How has YouTube done since TikTok has come, right? You could ask all these questions there. Why is it that TikTok arrives and YouTube has grown? And so I think what we always underestimate in these moments is that people are engaging more, doing more with it. We are improving our products. And so that’s how I would think about these moments.

Let me just broaden that out to agents. I watched Demis Hassabis yesterday. He was on stage with Sergey Brin and Alex Kantrowitz asked him, “What does a web look like in 10 years?” And Demis said, “I don’t know that an agent-first web looks anything like the web that we have today. I don’t know that we have to render web pages for agents the way that we have to see them.” That kind of implies that the web will turn into a series of databases for various agents to go and ask questions to, and then return those answers. And I’ve been thinking about this in the context of services like Uber, DoorDash, and Airbnb. Why would they want to participate in that and be abstracted away for agents to use the services they’ve spent a lot of time and money building?

Two things, though. First, there’s a lot to unpack, a fascinating topic. The web is a series of databases, etc., and we build a UI on top of it for all of us to consume.

This is exactly what I wanted, “the web is a series of databases.”

It is. But I think I listened to the Demis and Sergey conversation yesterday. I enjoyed it. I think he’s saying for an agent-first web, for a web that is interacting with agents, you would think about how to make that process more efficient. Today, you’re running a restaurant, people are coming, dining and eating, and people are ordering takeout and delivery. Obviously, for you to service the takeout, you would think about it differently than all the tables, the tablecloths, and the furniture. But both are important to you.

You could be a restaurant that decides not to participate in the takeout business. I’m only going to focus on the dining experience. You’re going to have some people that are vice versa. I’m going to say, I’m going to go all in on this and run a different experience. So, to your question on agents… I think of agents as a new powerful format. I do think it’ll happen in enterprises faster than in consumer. In the context of an enterprise, you have a CIO who’s able to go and say, “I really don’t know why these two things don’t talk to each other. I am not going to buy more of this unless you interoperate with this.” I think it’s part of why you see, on the enterprise side, a lot more agentic experiences. On the consumer side, I think what you’re saying is a good point. People have to think about and say, “What is the value for me to participate in this world?” And it could come in many ways. It could be because I participated in it, and overall, my business grows.

Some people may feel that it’s disintermediating, and doesn’t make sense. I think all of that can happen, but users may vote with their feet. You may find some people are supporting the agent experience, and your life is better because of it. And so you’re going to have all these dynamics, and I think they’re going to try and find an equilibrium somewhere. That’s how everything evolves. 

I mean, I think the idea that the web is a series of databases and we change the interface… First of all, this is the most Decoder conversation that we’ve ever had. I’m very happy with it. But I had Dara [Khosrowshahi] from Uber on the show. I asked him this question from his perspective, and his answer tracks yours broadly. He said, first, we’ll do it because it’s cool and we’ll see if there’s value there. And if there is, he’s going to charge a big fee for the agent to come and use Uber.

Because losing the customer for him, or losing the ability to upsell or sell a subscription, none of that is great. The same is true for Airbnb. I keep calling it the DoorDash problem. DoorDash should not be a dumb pipe for sandwiches. They’re actually trying to run a business, and they want the customer relationship. And so if the agents are going across the web and they’re looking at all these databases and saying, okay, this is where I get food from, and this is where I get cars from, and this is where I book… I think the demo was booking a vacation home in Spanish, and I’m going to connect you to that travel agent. Is it just going to be tolls that everyone pays to let the agents work? The CIO gets to just spend money to solve the problem. He says, “I want this capability from you. I’m just going to pay you to do it.” The market, the consumer market, doesn’t have that capability, right?

Well, look, all kinds of models may emerge, right? I can literally envision 20 different ways this could work. Consumers could pay a subscription for agents, and the agents could revenue share back. So that is the CIO-like use case you are talking about, that’s possible. We can’t rule that out. I don’t think we should underestimate… People may actually see more value in participating in it. I think this is… It’s tough to predict, but I do think that over time, if you’re removing friction and improving user experience, it’s tough to bet against those in the long run. And so I think if you are lowering friction for it and then people are enjoying using it, somebody’s going to want to participate in it and grow their business. And would brands want to be in retailers? Why don’t they sell directly today? Why won’t they do that?

Because retailers provide value in the middle. And why do merchants take credit cards? Why… I’m just saying. So there are many parts, and you find an equilibrium: merchants take credit cards because they see more business from taking credit cards than not, which justifies the increased cost of taking them. It may not be the perfect analogy, but I think there are all these kinds of effects going around, and so… But what you’re saying is true. Some of this will slow progress in agents, just because we’re all excited about Agent2Agent (A2A) and Model Context Protocol (MCP)… No, some of it will slow progress, but I think it’ll be very dynamic. Yeah.

Yeah. There are other pressures on Google. There are antitrust pressures. The government would like you to sell Chrome. Can you do all the things you want to do if you’re made to sell Chrome?

I don’t want to comment on… We are in a legal process. I look at having been directly involved in building Chrome. I look at… I think there are very few companies that would’ve… We not only improved our product, but we also improved the state of the web by building Chrome. We open-sourced it. We provided Chromium. Everyone else has access to the browser. So I think the amount of R&D, the amount of innovation we put into it, the investments in security, etc., we do, so I think we-

But if you’re made to sell it, can you do all the things that you want to do?

I don’t think that’s the scenario we’re looking at, but stepping back… Look, I think as a company, we think of ourselves as a deeply foundational technology company, which then translates into products that touch billions of people. So we do it across many, many things. And so, of course, I think, look, as a company, we’re going to continue investing and doing our best to innovate and build a successful business in all scenarios. So this is how I would answer it.

The Trump administration is extremely transactional, I would say. The tech industry has a new relationship with Trump in his second term. You were at the inauguration. Have you had conversations about what a settlement might look like and what the Trump administration might demand to make these problems go away?

We’ve engaged with the DOJ, like we did over the years, in the context of all the cases we have. So that’s how we normally do these conversations.

Trump has very publicly said he doesn’t like his search ranking, and he wants it changed in some way. Would you ever adjust the search ranking for Donald Trump?

No. Look, we have a… I can’t… Today, the way Google Search works is that I cannot… No person at Google can influence the ranking algorithm.

AI mode is different, right? We’ve seen system prompts adjusted in very chaotic ways from some of your competitors. Is that something that you would be open to in a world where you’re serving the full answer? Would you adjust the AI mode responses in response to political pressure?

No.

Because we’ve certainly seen in Grok and others, the system prompts change the answers in dramatic ways.

The way we do ranking is sacrosanct to us. We’ve done it for over 25 years. We make a lot of… There are a lot of ranking signals we take into account and stuff. And if there’s broad feedback from people that something isn’t working, we will look at it systematically and try and make changes, but we don’t look at individual cases and change the rankings.

When you think about those sources of information, one of the things that I have been thinking about a lot is, I don’t know, how the CDC web pages have changed a lot recently. Diversity, equity, and inclusion language has been removed from pages across the government. Those used to be very high-ranking sources in Google Search. We just implicitly trusted the CDC’s web pages in some ways. Are you re-evaluating that? How there might be misinformation on some of these pages that then gets synthesized into AI results?

Oh, it’s a misunderstanding of how search works. We don’t individually evaluate the authoritativeness of a page right then. It’s what our signals do. PageRank, it’s… Obviously, our signals are multiple orders of magnitude more complicated than PageRank today. But to use PageRank as an example, we weren’t the ones determining how authoritative a page is. It’s how other pages were linking to it, like an academic citation, etc. So we are not making those decisions today. And so I don’t see that changing.
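For readers who want the intuition behind what Pichai is describing: the original PageRank idea scores a page by how other pages link to it, like academic citations, rather than by anyone's editorial judgment. Below is a minimal, illustrative power-iteration sketch of that idea; it is a toy, not Google's production ranking (which, as he notes, uses far more complex signals), and the page names in the example graph are hypothetical.

```python
# Toy PageRank via power iteration: a page's authority comes from the pages
# that link to it, weighted by their own authority, not from an editor's
# call. Illustrative only; Google's real signals are far more complex.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical graph: page "a" is linked to by everyone else, so it ends up
# ranked highest, regardless of what it says about itself.
graph = {"a": ["b"], "b": ["a"], "c": ["a", "b"], "d": ["a"]}
print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1]))
```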

As you synthesize more of the answers, do you think you’re going to have to take more responsibility for the results?

We are giving context around it, but we’re still anchoring it in the sources we find. But we’ve always felt a high bar at Google. I mean, last year when we launched AI Overviews, I think people were adversarially querying to find errors, and the error rate was one in 7 million for adversarial queries, and so… But that’s the bar we’ve always operated at as a company. And so I think to me, nothing has changed. Google operates at a very high bar. That’s the bar we strive to meet, and our search page results are there for everyone to see. With that comes natural accountability, and we have to constantly earn people’s trust. So that’s how I would approach it.

What do you think the marker is for the next phase of the platform shift after this one? We opened by talking about how we’re in a second phase. What’s the marker for the final phase, or the third phase?

Of the platform shift, do you mean?

Yeah.

Of the AI platform?

What are you looking for as the next marker?

I think the real thing about AI, which I think is why I’ve always called it more profound, is self-improving technology. Having watched AlphaGo start from scratch, not knowing how to play Go, and within a couple of hours or four hours be better than top-level human players, and in eight hours reach a level no human can ever aspire to play against. That’s the essence of the technology, obviously in a deep way. 

I think there’s so much ahead on the opportunity side. I’m blown away by the ability to discover new drugs, completely change how we treat diseases like cancer over time, etc. The opportunity is there. The creative power, which I talked about, which we’re putting in everyone’s hands, the next generation of kids, everyone can program and will… If you think of something, you can create it. I don’t think we have comprehended what that means, but that’s going to be true. The part where the next phase of the shift is going to be really meaningful is when this translates into the physical world through robotics. 

So that aha moment of robotics, I think, when it happens, that’s going to be the next big thing we will all grapple with. Today they’re all online and you’re doing things with it, but on one hand… Today, I think of Waymo as a robot. So we are running around driving a robot, but I’m talking about a more general-purpose robot. And when AI creates that magical moment with robotics, I think that’ll be a big platform shift as well.

Yeah. I’m looking forward to it. Next year we’re going to do this with glasses and robots. It’s going to be great.

We’ll give it a shot.

Thank you so much, Sundar.

All right. Thanks, Nilay. I appreciate it.

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!
