Bard is going to destroy online search

Sure, Google's answer to ChatGPT will save you time. But it'll also lie to you.



This week Sundar Pichai, the CEO of Google, announced that his company's internet search engine — the way the vast majority of humans interact with a near-total corpus of human knowledge — is about to change. Enter a query, and you'll get more than pages and pages of links and a few suggested answers. Now you'll get an assist from artificial intelligence.

"Soon," a Google blog post under Pichai's byline declared, "you'll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web." A chatbot named Bard will deliver search results in complete sentences, as a human might.

A day later Satya Nadella, the CEO of Microsoft, announced that his company's competing search engine, Bing, will do the same, using the tech behind the popular AI chatbot ChatGPT. No search engine has ever really challenged Google's hold on the world's questions; Microsoft sees AI as its chance to come at the king.

These new chatbots aren't actually intelligent. The tech behind the scenes is called a large language model, a hunk of software trained on a vast corpus of text to learn which words tend to go together, which lets it produce sophisticated writing from minimal prompting. But when it comes to the acquisition, classification, and retrieval of knowledge, this approach is the subject of an old fight. It's been brewing since at least the early 2000s — and maybe since the third century BC, at the Library of Alexandria. Fundamentally, it's a debate about the best way to know stuff. Do we engage with the complexity of competing information? Or do we let an authority reduce everything to a simple answer?

Bard has a simple answer for that age-old question. From now on, instead of showing you a dozen webpages with instructions for opening a can of beans, machine-learning droids will just tell you how to open one. And if you believe that effective search is what made the internet the most important technology of the 20th and 21st centuries, then that seemingly simple change should give you the shakes. The collateral damage in this war of the machines could be nothing less than the obliteration of useful online information forever.


A hallucination of answers

Sometimes a simple answer is fine. In what the trade calls a "known-item search," we just want a factual response to a specific question. What's the most popular dog breed? How old is Madonna? Google is great at that stuff.

The other kind of search — "exploratory search" — is the hard one. That's where you don't know what you don't know. What's the right phone for me? What's the deal with the Thirty Years' War? Getting a satisfactory answer is more iterative. You throw a bunch of keywords into the search box, you scroll through the links, you try new terms. It's not perfect, and it's skewed by the profit motives of advertisers and the implicit judgments that Google makes behind the scenes about which pages count as authoritative. But it's what made it possible for us to find a needle in an online haystack.

Then came ChatGPT. As Google's vice president of search told me a year ago, when I wrote an article about why online search sucks, the company was already using artificial intelligence to make its search bar better at understanding what we seekers of knowledge really meant. But the seemingly overnight success of ChatGPT left Google scrambling to bring online a bot of its own that could answer back.

Google has been dreaming of this particular electric sheep for a long time. At a conference in 2011, its chairman at the time, Eric Schmidt, declared that search's endgame was to use AI to "literally compute the right answer" to queries rather than identify relevant pages. A 2021 paper from Google Research lays out that aspiration in much more detail. "The original vision of question answering," the authors write, "was to provide human-quality responses (i.e., ask a question using natural language and get an answer in natural language). Question answering systems have only delivered on the question part." Language-model chatbots might be able to provide more humanlike answers than regular old search, they added, but there was one problem: "Such models are dilettantes." Meaning they don't have "a true understanding of the world," and they're "incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over."

To make an AI chatbot effective at search, the paper concludes, you'd have to build in more authority and transparency. You'd have to somehow remove bias from its training database, and you'd have to teach it to incorporate diverse perspectives. Pull off that hat trick inside a backflip, and you'd transform the bot from a dilettante to a reasonable facsimile of a "domain expert."

I talked to a bunch of non-Google computer scientists about the state of internet search for my story last year, and all of them said the same thing about this idea: Don't do it.


For one thing, chatbots lie. Not on purpose! It's just that they don't understand what they're saying. They're just recapitulating things they've absorbed elsewhere. And sometimes that stuff is wrong. Researchers describe this as a tendency to "hallucinate" — "producing highly pathological translations that are completely untethered from the source material." Chatbots, they warn, are inordinately vulnerable to regurgitating racism, misogyny, conspiracy theories, and lies with as much confidence as the truth.

That's why we, the searchers, are a crucial component of the search process. Over years of exploring the digital world, we've all gotten better at spotting misinformation and disinformation. You know what I mean. When you're scrolling through the links in a Google search, looking for "esoteric shit," as one search expert calls it, you see some pages that just look dodgy, maybe in ways you can't even totally articulate. You skim past those and open the legit-looking ones in new tabs.

Conversational answers generated automatically by chatbots will pretty much eliminate that human element of bullshit detection. Look at it this way: If you're the kind of person who reads this kind of article, you're trained to think that a halfway decent bit of writing signifies a modicum of competence and expertise. Links to sources or quotes from experts indicate viable research and confirmed facts. But search chatbots can fake all that. They'll hide the sources they're drawing on, and the biases baked into their training data, behind the trappings of acceptable, almost-but-not-quite-human-sounding prose. However wrong they are, they'll sound right. We won't be able to tell if they're hallucinating.

An early example of what we're in for: A wag on Mastodon who has been challenging chatbots asked a demo of a Microsoft model trained on bioscience literature whether the antiparasitic drug ivermectin is effective in the treatment of COVID-19. It simply answered "yes." (Ivermectin is not effective against COVID-19.) And that was a known-item search! The wag was looking for a simple fact. The chatbot gave him a nonfact and served it up as the truth.

Sure, an early demo of Bing's new search bot provides traditional links-'n'-boxes results along with the AI's response. And it's possible that Google and Microsoft will eventually figure out how to make their bots better at separating fact from fiction, so you won't feel the need to check their work. But if algorithms were any good at spotting misinformation, then QAnon and vaccine deniers and maybe even Donald Trump wouldn't be a thing — or, at least, not as much of a thing. When it comes to search, AI isn't going to be a lie detector. It's going to be a very authoritative and friendly-sounding bullshit spreader.

Knowing where we've been

In his blog post, Pichai says conversational responses to complex queries are easier to understand than a long list of links. They're certainly faster to read — no more of that pesky scrolling and clicking. But even though a chatbot will presumably be drawing on the same sources as a traditional search engine, its answers are more likely to be oversimplifications. The risk is that search results will from now on be tales programmed by idiots, full of sound and vocabulary, signifying nothing. That's not a result. It's spam.

But the really dangerous part is that the chatbot's conversational answers will obliterate a core element of human understanding. Citations — a bibliography, a record of your footsteps through an intellectual forest — are the connective tissue of inquiry. They're not just about the establishment of provenance. They're a map of replicable pathways for ideas, the ligaments that turn information into knowledge. There's a reason it's called a train of thought; insights come from attaching ideas to each other and taking them out for a spin. That's what an exploratory search is all about: figuring out what you need to know as you learn it. Hide those pathways, and there's no way to know how a chatbot knows what it knows, which means there's no way to assess its answer.

"In many situations there is no one answer. There is no easy answer. You have to let people discover their own answers," Chirag Shah, an information scientist at the University of Washington, told me last year. "Now we have the technical abilities to build a large language model that can capture basically all of human knowledge. Let's say we could do that. The question is, would you then use it to answer all the questions? Even the questions that are not factual? It's one thing to ask when Labor Day is, or the next full solar eclipse. It's another to ask, should Russia have invaded Ukraine?"


Complex subjects and ideas with multiple facets and arguments don't lend themselves to one-and-done answers. What you want is to click on the links, to follow your nose. That's how people turn existing information and art into something new, through innovation and synthesis. And that's exactly what chatbot search will not favor. Worst case, you won't be able to know anything outside what an opaque algorithm thinks is most relevant — factual or not.

Microsoft's bot already shows its work. Presumably Google is also working on that. But honestly, it might not be much of a priority. "They want to keep things as simple and easy as possible for their end users," Shah observes. "That allows them to intertwine more ads in the same display and to optimize on whatever metrics they want in terms of ranking. But we already know that these things are not purely ranked on relevance. They're ranked on engagement. People don't just click and share things that are factually or authoritatively correct."

Google and Bing, after all, are businesses. The chatbots answering our search terms can't be honest information brokers, not just because they're dumbasses, but because an honest information broker won't sell as many ads or amp up engagement. Google's search pages already aren't fully trustworthy — they overindex YouTube video results, for example, because YouTube is a subsidiary of Google. If the best instructional video for how to paint tabletop-game minifigures is on Vimeo? Tough.

So imagine the kind of hallucinations a large language model like Bard will have if, in addition to misreading its own sources, it's programmed to favor engagement. It'll push the stuff that keeps us meatbags clicking. And as the past few years of social media have shown, that's rarely the truth. If a search engine offers only easy answers, no one will be able to ask hard questions.

