The biggest mystery surrounding Google over the past year has concerned its core product, its original and still primary source of revenue: Will search engines be replaced by AI chatbots? In May, the company offered some clarity: “In the next era of search, AI will do the work so you don’t have to,” according to a video announcing that AI Overviews, Google’s new name for AI-generated answers, would soon be showing up at the top of users’ results pages. It’s a half-step into a future in which the internet, when given a query, doesn’t provide links and clues — it simply answers.
Any revision of Google’s search engine is consequential. The search box is one of the main interfaces through which people interact with the internet, their computers, and their phones. This half-step has been treated as a watershed event in the press, since Google’s role in the web, as both a distributor and monetizer of attention, is massive, contentious, and maybe about to change.
In nearly a year of testing, however, Google’s AI-search experiment has felt, at least to me, less like a total overhaul than one more dubious entry on an increasingly jumbled results page. Lately, I skim the contents of the AI answer just closely enough to notice that it is sometimes glaringly wrong. Maybe it’ll get better, as Google claims; maybe quality is beside the point if users like it anyway. Either way, the question of whether Google is in the midst of resetting the entire economy of the web, and whether synthesized summaries will deal a final, fatal blow to publishers and other Google-dependent platforms, won’t be open for long.
It’s clear enough what Google wants from AI when it comes to search: to fend off competition from the likes of OpenAI and maintain its place at the top. But search was just one of dozens of products and features it showed off in May at Google I/O, the company’s developer summit. The updates to search served the secondary purpose of letting the world know that the company is all in on AI — a bet that AI offers an opportunity to profoundly reset norms around privacy, again, in favor of companies like Google.
Google is rolling out, or teasing, new image-, audio-, and video-generation tools. There will be a new voice assistant that can answer questions about what it sees on your device’s camera or screen. There will be upgrades to assistants that can answer questions about your documents, or a meeting that just finished, or the contents of your inbox. There will be a program that can scan phone calls, in real time, for language associated with scams.
Some of these features exist in the live-demo stage, while a good number remain just over the horizon, as suggestions or maybe just marketing. “Whatever our competitors are doing with AI,” Google seemed to be saying, “we’re doing it too, and, in fact, we were doing it first.”
But a different story is coming into focus, one that positions AI not so much as a pure technology with which Google is trying to sort out its relationship (creator? victim? both?) but as a continuation of one of the company’s defining traits, a policy a former CEO once described as “get right up to the creepy line and not cross it.” Many of these tools make explicit claims about what they can do for you. What you provide in exchange is fuller access to every aspect of your digital life. Whatever else it may be, the rush to roll out AI is also a bid for more access and more data, and it rests on the industry’s assumption that users will give both up.
Such moments are rare but not unprecedented. In 2004, shortly after the launch of Gmail, Google faced a backlash for putting contextual ads in users’ inboxes, a practice some saw at the time as a bold, presumptuous violation. “Scanning personal communications in the way Google is proposing is letting the proverbial genie out of the bottle,” wrote a group of privacy advocates in an open letter to Google executives. The complaint would soon sound both quaint and prescient. In hindsight, users were clearly happy to make the exchange, to whatever extent they understood or thought about it, and this really was how everything was going to work across the internet.
In 2017, Google, which by then offered maps, workplace software, a smartphone OS, and dozens of other products that depend on collecting data from users, discontinued the practice of scanning email to target contextual ads. It was a gesture toward privacy that felt sort of absurd: by then, Google was living in our pockets, its software on billions of phones through which users conducted more and more of their lives.
Since then, normative shifts around privacy have tended to arrive quietly, their implications subtler. One day, a smartphone user opens their phone and notices that their photo library has been scanned for faces and organized into albums of people. Huh. Elsewhere, during a tense meeting with a coworker, a Zoom user notices that the conversation is being automatically transcribed into a searchable format. Hmmm.
In AI assistants — novel tools that can seem magical and that companies like Google are keen to market as such — tech companies have an opportunity to shift things further. These tools depend on access to data that in many cases has already been granted by users; it’s not exactly a scandal for a Google assistant to ask for access to documents hosted on Google Docs, for example, but it’s not entirely inconsequential, either, and hints at the extent to which the concept of an omniscient helper can shape expectations about digital autonomy.
In the past, Google has resorted to fairly unconvincing arguments to make its case for collecting data about its users — that it’s necessary to “help show you more relevant and interesting ads,” for example. Mostly, it has made its arguments in the form of software, and its users have responded with adoption or rejection. In contrast, AI assistants make a more direct case for the necessity and payoff of user surveillance. The assistants obviously work better if they can see what the user can see or at least what’s on their screens. The fact that they’re not quite here yet minimizes the tension that users might feel, if and when they actually arrive, between an ever-smarter and more humanlike assistant and its demands for access to ever-more intimate material.
In the same way that years of data collected from the web through Google Search gave Google the ability to start simply generating plausible results itself, AI assistants purport to be able to help you, personally: to operationalize the vast trove of data that Google has long been collecting, with your technical consent, for purposes beyond marketing. In a narrow sense, this sounds like a better deal, in which at least some of the value of that immense personal corpus accrues back to the user in the form of a helpful chatbot. More broadly, though, the illusion of choice should feel familiar. (There are signs that Google is aware of and attentive to privacy concerns: It emphasized that its call-screening feature, for example, would rely on on-device AI rather than sending data to the cloud.)
The popular notion that the AI boom represents a disruptive threat to the internet giants deserves more skepticism than it has gotten so far: the needs of the tech industry, past, present, and future, are neatly and logically aligned. These are companies whose existing businesses were built on the acquisition, production, and monetization of large amounts of highly personal data about their users; meanwhile, the secret to unlocking the full, glorious potential of large-language-model-based AI, whether at the level of the personal assistant or in service of achieving machine intelligence, is, according to the people building these systems, simply more access to more data.
It’s not so much a conspiracy, or even a deliberate plan, as an aspirational vision of a world in which traditional notions of what belongs to us have been redefined beyond recognition. AI firms have asserted that they need to ingest enormous amounts of public and private data in order to fulfill their promise. Google is making a more personal version of the same argument: Soon, it’ll be able to help with anything. All it needs from you in return is everything.