Closeup of a search button and mouse pointer on a computer screen. Getty Images/iStockphoto

Why Google is reinventing internet search

Generative AI is here. Let’s hope we’re ready.

Sara Morrison is a senior Vox reporter who has covered data privacy, antitrust, and Big Tech’s power over us all for the site since 2019.

If you feel like you’ve been hearing a lot about generative AI, you’re not wrong. After a generative AI tool called ChatGPT went viral a few months ago, it seems everyone in Silicon Valley is trying to find a use for this new technology. Microsoft and Google are chief among them, and they’re racing to reinvent how we use computers. But first, they’re reinventing how we search the internet.

Generative AI is essentially a more advanced and useful version of the conventional artificial intelligence that already helps power everything from autocomplete to Siri. The big difference is that generative AI can create new content, such as images, text, audio, video, and even code — usually from a prompt or command. It can write news articles, movie scripts, and poetry. It can create images from some very specific prompts. And if you listen to some experts and developers, generative AI will eventually be able to make almost anything, including entire apps, from scratch. For now, the killer app for generative AI appears to be search.

One of the first major generative AI products for the consumer market is Microsoft’s new AI-infused Bing, which debuted in February to great fanfare. The new Bing uses generative AI in its web search function to return results that appear as longer, written answers culled from various internet sources instead of a list of links to relevant websites. There’s also a new accompanying chat feature that lets users have human-seeming conversations with an AI chatbot.

Google, the undisputed king of search for decades now, appeared to answer Microsoft’s AI challenge at its annual developers conference on May 10. The company announced that its days of behind-the-scenes, years-long, carefully considered generative AI development were over. Soon, AI should be a powerful feature in virtually every major Google product, from Google Docs to Gmail. Among many other tricks, the new generative AI technology can write emails and even create entire presentations — complete with images — from a few text prompts. But the biggest changes are coming to Google’s bread and butter: search.

In other words, the AI wars are now underway. And the battles may not just be over search engines. Generative AI is already starting to find its way into mainstream applications for everything from food shopping to social media.

Microsoft and Google are the biggest companies with public-facing generative AI products, but they aren’t the only ones working on it. Apple, Meta, and Amazon have their own AI initiatives, and there are plenty of startups and smaller companies developing generative AI or working it into their existing products. TikTok has a generative AI text-to-image system. Design platform Canva has one, too. An app called Lensa creates stylized selfies and portraits (sometimes with ample bosoms). And the open-source model Stable Diffusion can generate detailed and specific images in all kinds of styles from text prompts.

Generative AI has the potential to be a revolutionary technology, and it’s certainly being hyped as such. Venture capitalists, who are always looking for the next big tech thing, believe that generative AI can replace or automate a lot of creative processes, freeing up humans to do more complex tasks and making people more productive overall. But it’s not just creative work that generative AI can produce. It can help developers make software. It could improve education. It may be able to discover new drugs or become your therapist. It just might make our lives easier and better.

Or it could make things a lot worse. There are reasons to be concerned about the damage generative AI can do if it’s released to a society that isn’t ready for it — or if we ask the AI program to do something it isn’t ready for. How ethical or responsible generative AI technologies are is largely in the hands of the companies developing them, as there are few if any regulations or laws in place governing AI. This powerful technology could put millions of people out of work if it’s able to automate entire industries. It could spawn a destructive new era of misinformation. There are also concerns of bias due to a lack of diversity in the material and data that generative AI is trained on, or the people who are overseeing that training.

Nevertheless, powerful generative AI tools are making their way to the masses. If 2022 was the “year of generative AI,” 2023 is shaping up to be the year that generative AI is actually put to use, ready or not.

The slow, then sudden, rise of generative AI

Conventional artificial intelligence is already integrated into a ton of products we use all the time, like autocomplete, voice assistants like Amazon’s Alexa, and even the recommendations for music or movies we might enjoy on streaming services. But generative AI is more sophisticated. It uses deep learning, or algorithms that create artificial neural networks meant to mimic how human brains process information and learn. Those models are then fed enormous amounts of data to train on. Large language models, like the one that powers ChatGPT, train on text collected from around the internet until they learn to generate and mimic those kinds of texts and conversations upon request. Image models have been fed tons of images and captions that describe them in order to learn how to create new content based on prompts.
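To make that idea a little more concrete, here is a deliberately tiny, hypothetical sketch in Python. It “trains” on a short sample of text by counting which word tends to follow which, then generates new text one word at a time from a prompt. Real large language models do this with neural networks and billions of learned parameters rather than a lookup table, but the generate-the-next-word loop is the same idea writ very small; the sample text and function names below are invented for illustration and don’t come from any actual AI product.

```python
import random
from collections import defaultdict, Counter

# Toy illustration only: a next-word "model" built from raw counts.
# Real large language models learn billions of parameters, but the
# generation loop is conceptually similar: predict the next token,
# append it, repeat.

def train(text: str) -> dict:
    """Count which word tends to follow which in the training text."""
    words = text.split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def generate(followers: dict, prompt: str, length: int = 10) -> str:
    """Generate text one word at a time, starting from a prompt."""
    word = prompt
    output = [word]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        # Sample the next word in proportion to how often it was seen.
        candidates, counts = zip(*options.items())
        word = random.choices(candidates, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

sample = "the cat sat on the mat and the cat slept on the rug"
model = train(sample)
print(generate(model, "the"))  # e.g. "the cat slept on the mat and the cat sat ..."
```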

After years of development, most of it outside of public view, generative AI hit the mainstream in 2022 with the widespread releases of art and text models. Models like Stable Diffusion and OpenAI’s DALL-E were the first to go viral, letting anyone create new images from text prompts. Then came OpenAI’s ChatGPT (GPT stands for “generative pre-trained transformer”), which got everyone’s attention. This tool could create large, entirely new chunks of text from simple prompts. For the most part, ChatGPT worked really well, too — better than anything the world had seen before.

Though it’s one of many AI startups out there, OpenAI seems to have the most advanced or powerful products right now. Or at least, it’s the startup that has given the general public access to its services, thereby providing the most evidence of its progress in the generative AI field. This is a demonstration of its abilities as well as a source of even more data for OpenAI’s models to learn from.

OpenAI is also backed by some of the biggest names in Silicon Valley. It was founded in 2015 as a nonprofit research lab with $1 billion in support from the likes of Elon Musk, Reid Hoffman, Peter Thiel, Amazon, and former Y Combinator president Sam Altman, who is now the company’s CEO. OpenAI has since changed its structure to become a for-profit company but has yet to make a profit or even much in the way of revenue. That’s not a problem yet, as OpenAI has gotten a considerable amount of funding from Microsoft, which began investing in OpenAI in 2019. And OpenAI is seizing on the wave of excitement for ChatGPT to promote its API services, which are not free. Neither is the company’s ChatGPT Plus subscription service.

OpenAI CEO Sam Altman attends the Allen & Company Sun Valley Conference in July 2022.
Kevin Dietsch/Getty Images

Other big tech companies have for years been working on their own generative AI initiatives. There’s Apple’s Gaudi, Meta’s LLaMA and Make-a-Scene, Amazon’s collaboration with Hugging Face, and Google’s LaMDA (which is good enough that one Google engineer thought it was sentient). But thanks to its early investment in OpenAI, Microsoft had access to the AI project everyone knew about and was trying out.

In January 2023, Microsoft announced it was giving $10 billion to OpenAI, bringing its total investment in the company to $13 billion. From that partnership, Microsoft has gotten what it hopes will be a real challenge to Google’s longtime dominance in web search: a new Bing powered by generative AI. We’ll soon see how well Google’s AI-powered search engine can compete.

AI search will give us the first glimpse of how generative AI can be used in our everyday lives ... if it works

Tech companies and investors are willing to pour resources into generative AI because they hope that, eventually, it will be able to create or generate just about any kind of content humans ask for. Some of those aspirations may be a long way from becoming reality, but right now, it’s possible that generative AI will power the next evolution of the humble internet search.

After months of rumors that both Microsoft and Google were working on generative AI versions of their web search engines, Microsoft debuted its AI-integrated Bing in February in a splashy media event that showed off all the cool things it could do, thanks to the custom-built OpenAI technology that powered it. Instead of entering a prompt for Bing to look up and return a list of relevant links, you could ask Bing a question and get a “complete answer” composed by Bing’s generative AI and culled from various sources on the web that you didn’t have to take the time to visit yourself. You could also use Bing’s chatbot to ask follow-up questions to better refine your search results. The results may not always be accurate, and you might even get insulted, as a few people who pushed past Bing AI’s supposed guardrails found, but Microsoft was going full steam ahead anyway. In the ensuing months, it added AI to a bunch of its products, from the Windows 11 operating system to Office.

This posed a major threat to Google, which has had the search market sewn up for decades and makes most of its revenue from the ads placed alongside its search results. Though Google has been working on its own generative AI models for years, the company says it kept them away from the public until it was sure the technology was safe to deploy. As soon as Microsoft emerged as a major competitive threat, Google decided it was safe enough.

After the underwhelming limited release of its Bard chatbot, Google began to roll out its real generative AI offerings at its I/O developers conference in May. Like Microsoft, Google was incorporating the AI features into as many things as possible. If you opt into the new Search Generative Experience, you can ask Google questions and it will return conversational answers, courtesy of its newest large language model, PaLM 2 (short for Pathways Language Model). Google’s Workspace apps will also soon have something called Duet AI to help you write emails and documents, generate images, and more.

So although Microsoft was the first off the starting line, we’re about to see if Google can catch up. We’re also about to see how the rest of the world responds to having powerful AI tools at their fingertips. Hopefully, they’re as safe as their developers claim they are.

Again, Microsoft and Google aren’t the only companies working on generative AI, but their public releases have put more pressure on others to roll out their offerings as soon as possible, too. Meta is working to get its generative AI into as many of its own products as possible and just released a large language model of its own, called Large Language Model Meta AI, or LLaMA. And it seems like everyone is flocking to OpenAI to work its ChatGPT and Whisper services into their businesses. Snapchat now has a chatbot called “My AI,” though reviews have been mixed, as has its success at keeping the bot from discussing inappropriate topics with Snapchat’s younger users. Instacart will use ChatGPT in a feature called “Ask Instacart” that can answer customers’ questions about food. Shopify’s Shop app has a ChatGPT-powered assistant to make personalized recommendations from the brands and stores that use the platform. Expedia says its ChatGPT integration helps users plan vacations, though it also stressed that this was still in a beta-testing phase and highlighted some of the ways Expedia already uses less-sophisticated forms of AI and machine learning on its app and website.
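For a rough sense of what these integrations involve on the technical side, here is a minimal, hypothetical sketch of the kind of call a business might make to OpenAI’s chat service, using the company’s Python library as it worked in early 2023. The assistant’s instructions, the example question, and the function name are invented for illustration and aren’t taken from Instacart, Shopify, or any other company mentioned above.

```python
# A minimal sketch of a ChatGPT-style integration, using the openai
# Python library's v0.x interface (circa early 2023). The system
# prompt and example question below are hypothetical.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_shopping_assistant(question: str) -> str:
    """Send a customer question to the chat model and return its answer."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a helpful grocery-shopping assistant."},
            {"role": "user", "content": question},
        ],
        temperature=0.7,  # some randomness, so answers vary run to run
    )
    return response["choices"][0]["message"]["content"]

print(ask_shopping_assistant("What do I need to make a vegetarian chili?"))
```

In practice, companies wrap calls like this in their own guardrails, product data, and user interfaces, which is where much of the real engineering work lies.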

Generative AI is here to stay, but we don’t yet know if that’s for the best

Bing AI’s problems were just a glimpse of how generative AI can go wrong and have potentially disastrous consequences. That’s why pretty much every company in the field of AI goes out of its way to reassure the public that it’s being very responsible with its products and taking great care before unleashing them on the world. Yet for all of their stated commitment to “building AI systems and products that are trustworthy and safe,” Microsoft and OpenAI either didn’t or couldn’t ensure the Bing chatbot would live up to those principles, and they released it anyway. Google and Meta, by contrast, were very conservative about releasing their products — until Microsoft and OpenAI gave them a push.

Error-prone generative AI is being put out there by many other companies that have promised to be careful. Some text-to-image models are infamous for producing images with missing or extra limbs. There are chatbots that confidently declare the winner of a Super Bowl that has yet to be played. These mistakes are funny as isolated incidents, but we’ve already seen one publication rely on generative AI to write authoritative articles with significant factual errors. And a law professor discovered that ChatGPT was saying he was accused of sexual harassment, basing that assertion on a Washington Post article that didn’t exist. Bing’s chatbot then repeated that false claim, citing the professor’s own op-ed about it.

Google CEO Sundar Pichai announced the new Google search experience at the company’s I/O conference in May 2023.
David Paul Morris/Bloomberg via Getty Images

These screw-ups have been happening for years. Microsoft had one high-profile AI chatbot flop with its 2016 release of Tay, which Twitter users almost immediately trained to say some really offensive things. Microsoft quickly took it offline. Meta’s Blenderbot is based on a large language model and was released in August 2022. It didn’t go well. The bot seemed to hate Facebook, got racist and antisemitic, and wasn’t very accurate. It’s still available to try out, but after seeing what ChatGPT can do, it feels like a clunky, slow, and weird step backward.

There are even more serious concerns. Generative AI threatens to put a lot of people out of work if it’s good enough to replace them. It could have a profound impact on education. There are also legal questions about the material AI developers use to train their models, which is typically scraped from millions of sources that the developers don’t have the rights to. And there are questions of bias both in the material that AI models are training on and in the people who are training them.

It’s also a possibility that generative AI will be used to deliberately spread disinformation. An AI-generated image of the pope wearing a stylish coat, made using Midjourney, fooled a lot of people and demonstrated how close we may be to a world where it’s nearly impossible to tell what’s real and what isn’t.

On the other side, some conservative bomb-throwers have accused generative AI developers of moderating their platforms’ outputs too much and making them “woke” and biased against the right wing. To that end, Musk, the self-proclaimed free-speech absolutist and OpenAI critic as well as an early investor, is reportedly considering developing a ChatGPT rival that won’t have content restrictions or be trained on supposedly “woke” material.

And then there’s the fear not of generative AI but of the technology it could lead to: artificial general intelligence. AGI would be able to learn and think and solve problems like a human, if not better. This has given rise to science fiction-based fears that AGI will lead to an army of super-robots that quickly realize they have no need for humans and either turn us into slaves or wipe us out entirely.

There are plenty of reasons to be optimistic about generative AI’s future, too. It’s a powerful technology with a ton of potential, and we’ve still seen relatively little of what it can do and who it can help. Silicon Valley clearly sees this potential, and venture capitalists like Andreessen Horowitz and Sequoia seem to be all-in. OpenAI is valued at nearly $30 billion, despite not having yet proved itself as a revenue generator.

Generative AI has the power to upend a lot of things, but that doesn’t necessarily mean it’ll make them worse. Its ability to automate tasks may give humans more time to focus on the stuff that can’t be done by increasingly sophisticated machines, as has been true for technological advances before it. And in the near future — once the bugs are worked out — it could make searching the web better. In the years and decades to come, it might even make everything else better, too.

Oh, and in case you were wondering: No, generative AI did not write this explainer.

Update, May 11, 5 pm ET: This story was originally published on March 4 and has been updated with information about ChatGPT’s expansion and Google’s AI integrations.

Correction, May 12, 10:15 am ET: A photo caption in an earlier version of this story misidentified Google CEO Sundar Pichai.
