
From ChatGPT to Gemini: how AI is rewriting the internet

Big players, including Microsoft with Copilot, Google with Gemini, and OpenAI with ChatGPT, are making AI chatbot technology previously restricted to test labs more accessible to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
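
That autocomplete framing can be made concrete with a toy example. The sketch below is nothing like a real LLM, which runs a transformer network over tokens rather than counting word pairs, but it shows the core idea both quotes describe: tally which words follow which in training text, then predict the statistically likeliest continuation.

```python
# Toy next-word predictor: counts word bigrams in a tiny corpus, then "autocompletes"
# a prompt by repeatedly picking the most frequent follower. Real LLMs do vastly more,
# but the core idea -- predict the next word from statistics of prior text -- is the same.
from collections import Counter, defaultdict

corpus = "a cat sat on the mat and a dog sat on the mat".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def autocomplete(prompt: str, n_words: int = 4) -> str:
    words = prompt.split()
    for _ in range(n_words):
        options = followers.get(words[-1])
        if not options:          # no statistics for this word; stop generating
            break
        words.append(options.most_common(1)[0][0])  # most likely next word
    return " ".join(words)

print(autocomplete("a cat"))  # -> "a cat sat on the mat"
```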

There are many more pieces of the AI landscape coming into play (and so many name changes — remember when we were talking about Bing and Bard last year?), and you can be sure to see it all unfold here on The Verge.

  • Alex Heath

    TODAY, Two hours ago

    OpenAI keeps vaguely teasing GPT-5.

    COO Brad Lightcap is speaking at Bloomberg’s Tech conference and was just asked when the next model is arriving. His answer hints that ChatGPT will evolve to act like an agent on your behalf or, at the very least, take on more of a persona.

    “Will there be such a thing as a prompt engineer in 2026?” he says. “You don’t prompt engineer your friend.”


  • Kylie Robison

    TODAY, 3:44 PM UTC

    Leaked OpenAI slide deck reveals how it's wooing publishers.

    According to Adweek, OpenAI’s incentives for publishers include financial compensation as well as:

    ...priority placement and “richer brand expression” in chat conversations, and their content benefits from more prominent link treatments.

    In exchange, OpenAI gets training data and a license to display info with attribution and links. OpenAI has struck deals with publishers like Axel Springer, The Financial Times, and most recently, People magazine publisher Dotdash Meredith. In a comment, OpenAI said Adweek’s report “contains a number of mischaracterizations and outdated information.”


  • Umar Shakir

    TODAY, 1:31 PM UTC

    TikTok is adding an ‘AI-generated’ label to watermarked third-party content

    Vector art of the TikTok logo.
    Image: The Verge

    TikTok already automatically applies an “AI-generated” label to content made with its own AI tools, and that same label will now apply to content created on other platforms. TikTok will detect when uploaded images or videos contain metadata tags indicating AI-generated content, and it says it’s the first social media platform to support the new Content Credentials.

    Support for the Adobe-developed tagging system (which has been added to tools like Photoshop and Firefly) comes as TikTok partners with Adobe’s Content Authenticity Initiative (CAI) as well as the Coalition for Content Provenance and Authenticity (C2PA).
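
    Content Credentials work by embedding a cryptographically signed C2PA manifest in a file's metadata, which platforms can read on upload. As a rough illustration (and not TikTok's actual detection pipeline), the sketch below simply scans a file's bytes for the C2PA manifest-store label; proper verification needs a C2PA-aware library or tool that checks the manifest's signatures.

    ```python
    # Crude heuristic check for an embedded C2PA (Content Credentials) manifest.
    # C2PA manifests live in JUMBF boxes labeled "c2pa", so the label shows up in the
    # raw bytes of a signed file. This does NOT verify signatures or parse the manifest;
    # real verification needs C2PA-aware tooling (e.g. the open-source c2patool).
    import sys
    from pathlib import Path

    def looks_like_c2pa(path: str) -> bool:
        data = Path(path).read_bytes()
        return b"c2pa" in data  # label of the C2PA manifest store box

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            verdict = "possible Content Credentials" if looks_like_c2pa(name) else "no C2PA marker found"
            print(f"{name}: {verdict}")
    ```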

    Read Article >
  • OpenAI’s Model Spec outlines some basic rules for AI

    A rendition of OpenAI’s logo, which looks like a stylized whirlpool.
    Illustration: The Verge

    AI tools behaving badly — like Microsoft’s Bing AI losing track of which year it is — has become a subgenre of reporting on AI. But very often, it’s hard to tell the difference between a bug and poor construction of the underlying AI model that analyzes incoming data and predicts what an acceptable response will be, like Google’s Gemini image generator drawing diverse Nazis due to a filter setting.

    Now, OpenAI is releasing the first draft of a proposed framework, called Model Spec, that would shape how AI tools like its own GPT-4 model respond in the future. The OpenAI approach proposes three general principles — that AI models should assist the developer and end-user with helpful responses that follow instructions, benefit humanity with consideration of potential benefits and harms, and reflect well on OpenAI with respect to social norms and laws.

    Read Article >
  • Stack Overflow is feeding programmers’ answers to AI, whether they like it or not

    Photo illustration of the shape of a brain on a circuit board.
    Illustration: Cath Virginia / The Verge | Photos: Getty Images

    Stack Overflow’s new deal giving OpenAI access to its API as a source of data has rankled users who posted their questions and answers about coding problems in conversations with other humans. Users say that when they attempt to alter their posts in protest, the site is retaliating by reversing the alterations and suspending the users who carried them out.

    A programmer named Ben posted a screenshot yesterday of the change history for a post seeking programming advice, which they’d updated to say that they had removed the question to protest the OpenAI deal. “The move steals the labour of everyone who contributed to Stack Overflow with no way to opt-out,” read the updated post.

    Read Article >
  • OpenAI is entering the search game.

    OpenAI is developing a search engine for ChatGPT that would let users search the web for answers to their questions, Bloomberg reports. Sources also tell The Verge that OpenAI has been aggressively trying to poach Google employees for a team that is working hard to ship the product soon.


  • We’re desi, so I guess we wear turbans?

    Meta AI is generating turbans for an overwhelming share of prompts involving Indian men, TechCrunch has found. It’s a stereotype that Hollywood has long applied to South Asians and even Arabs, leading some folks in the Western world to make assumptions about any turban wearer’s background and religion.


    three AI images of Indian men wearing turbans
    Meta AI: If Indian man, then turban.
    Image: TechCrunch
  • Randy Travis gets his voice back in a new Warner AI music experiment

    Randy Travis singing at Cheyenne Frontier Days
    Randy Travis in 1987.
    Photo: Mark Junge / Getty Images

    For the first time since a 2013 stroke left country singer Randy Travis unable to speak or sing properly, he has released a new song. He didn’t sing it, though; instead, the vocals were created with AI software and a surrogate singer.

    The song, called “Where That Came From,” is every bit the kind of folksy, sentimental tune I came to love as a kid when Travis was at the height of his fame. The producers created it by training an unnamed AI model on 42 isolated vocal recordings of Travis. Then, under the supervision of Travis and his career-long producer Kyle Lehning, fellow country singer James DuPre laid down the vocals that the AI transformed into Travis’ voice.

    Read Article >
  • YouTube tests out using AI to skip to the good part.

    Some YouTube Premium subscribers can now jump to the most-watched part of a video in the YouTube app by double-tapping the right side of the screen (which normally skips ahead 10 seconds) and then tapping the “Jump ahead” button that appears, according to 9to5Google.

    To see if you have the feature and enable it, go to Settings > Try experimental new features.


  • “You’re holding a taco!”

    If you’ve already read our review of the Rabbit R1 but haven’t gotten around to watching the video version of it, what better time than now?


  • Microsoft needs some time to ‘refine’ updates for Copilot AI in Windows

    Vector illustration of the Microsoft Copilot logo.
    Illustration: The Verge

    Microsoft’s latest Windows Insider blog posts say that when it comes to testing new Copilot features in Windows 11, “We have decided to pause the rollouts of these experiences to further refine them based on user feedback.” For people who already have the feature, “Copilot in Windows will continue to work as expected while we continue to evolve new ideas with Windows Insiders.”

    Microsoft is holding an AI event on May 20th, which would be a good time to show more of what’s next. After setting up 2024 as “the year of the AI PC” with a new Copilot key on Windows keyboards, there’s a lot to live up to.

    Read Article >
  • Wes Davis

    Apr 30

    OpenAI makes ChatGPT’s chat history feature available to everyone — no strings attached.

    OpenAI says free and Plus subscribers can now use the feature without letting their chats be used to train its models.

    With chat history on, users can pick up previous chats where they left off, and the chatbot will reply as though they never stopped. The company also says users can start one-off chats that aren’t saved in the history.


  • ChatGPT’s AI ‘memory’ can remember the preferences of paying customers

    A rendition of OpenAI’s logo, which looks like a stylized whirlpool.
    Illustration: The Verge

    In February, OpenAI announced Memory, a feature that allows ChatGPT to store queries, prompts, and other customizations more permanently. At the time, it was only available to a “small portion” of users, but now it’s available to paying ChatGPT Plus subscribers outside of Europe and Korea.

    ChatGPT’s Memory works in two ways to make the chatbot’s responses more personalized. The first is letting you explicitly tell ChatGPT to remember certain details; the second is having it learn from your conversations over time, much like the personalization algorithms in other apps. Memory brings ChatGPT closer to being a better AI assistant: once it remembers your preferences, it can apply them without needing a reminder.
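
    OpenAI hasn't detailed how Memory is built, but the general pattern of injecting remembered facts into the model's context is easy to illustrate. The sketch below is a toy built on the official openai Python SDK, not OpenAI's implementation: it keeps user-stated facts in a plain list and prepends them to the system prompt of each request. The model name, helper functions, and example fact are illustrative placeholders.

    ```python
    # Toy illustration of a "memory" layer for a chatbot -- NOT OpenAI's implementation.
    # Remembered facts are plain strings prepended to the system prompt on every request.
    from openai import OpenAI

    client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
    memory: list[str] = []     # e.g. "User prefers metric units"

    def remember(fact: str) -> None:
        """Store a detail the user asked the assistant to remember."""
        memory.append(fact)

    def chat(user_message: str) -> str:
        system = "You are a helpful assistant."
        if memory:
            system += " Known facts about the user: " + "; ".join(memory)
        response = client.chat.completions.create(
            model="gpt-4o",    # any chat-capable model works here
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    remember("User prefers recipes written in metric units")
    print(chat("Give me a quick pancake recipe."))
    ```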

    Read Article >
  • Wes Davis

    Apr 28

    The OLED iPad Pro could launch with an M4 chip

    A 2022 Apple iPad Pro in a Magic Keyboard case on a wooden desk.
    Image: Dan Seifert / The Verge

    Apple is preparing for its big AI coming-out party at this year’s Worldwide Developers Conference; that much you can count on. But apparently, the company is going to start that party a little early with the OLED iPad Pro it’s expected to unveil on May 7th. According to Bloomberg’s Mark Gurman, there’s “a strong possibility” the tablet will launch with an M4 chip and its accompanying neural engine, making it Apple’s “first truly AI-powered device.”

    Writing in his Power On newsletter today, Gurman said the company could use its May event to explain “its AI chip strategy without distraction,” freeing it to focus on exactly how the iPad Pro and its other M4 devices will use the company’s AI offerings in iPadOS 18. Those could include on-device Apple-developed features and deeply integrated chatbots from one or more other companies, like Google or OpenAI.

    Read Article >
  • Wes Davis

    Apr 27

    What will Instagram’s chatbot creator look like?

    At the moment, it seems Meta’s “AI studio” will let people make private and public bots, tuned for duties like personal shopping, trip-planning, meme generation, and helping users “never miss a romantic connection.” (I assume that last one is designed to trawl Craigslist Missed Connections for you.)

    Alessandro Paluzzi posted these screenshots in a thread where he’s been tracking the feature since January.


    Two screenshots from the forthcoming AI creator for Instagram.
    Meta’s chatbot maker can apparently help you “nerd out on your passions.”
    Image: Alessandro Paluzzi
  • Zuckerberg says it will take Meta years to make money from generative AI

    An image of the Meta logo.
    Illustration by Alex Castro / The Verge

    The generative AI gold rush is underway — just don’t expect it to create profits anytime soon.

    That was the message from Meta CEO Mark Zuckerberg to investors during Wednesday’s call for the company’s first-quarter earnings report. Meta has just put its ChatGPT competitor in a bunch of places across Instagram, Facebook, and WhatsApp, and much of the call focused on exactly how generative AI will become a money-making endeavor for the company.

    Read Article >
  • Microsoft launches Phi-3, its smallest AI model yet

    Illustration of the Microsoft wordmark on a green background
    Illustration: The Verge

    Microsoft launched Phi-3 Mini, the next version of its lightweight AI model and the first of three small models the company plans to release.

    Phi-3 Mini has 3.8 billion parameters and is trained on a smaller data set than large language models like GPT-4. It is now available on Azure, Hugging Face, and Ollama. Microsoft plans to follow it with Phi-3 Small (7 billion parameters) and Phi-3 Medium (14 billion parameters). Parameters are the learned values inside a model; roughly speaking, more parameters let a model capture more complex patterns and follow more complex instructions.
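
    To give a sense of how accessible a 3.8-billion-parameter model is, here is a minimal sketch of running Phi-3 Mini locally with Hugging Face transformers. It assumes the launch-time model ID microsoft/Phi-3-mini-4k-instruct and the trust_remote_code flag the initial release required; check the current model card before relying on either. The same weights can also be pulled through Ollama with `ollama run phi3`.

    ```python
    # Minimal sketch: run Phi-3 Mini locally with Hugging Face transformers.
    # Model ID and trust_remote_code reflect the launch-time release; verify against
    # the model card before use.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="microsoft/Phi-3-mini-4k-instruct",
        trust_remote_code=True,
    )

    prompt = "Explain in one sentence what a language model parameter is."
    print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
    ```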

    Read Article >
  • No one’s going to misuse this, right?

    Microsoft’s new AI model, VASA-1, transforms a single still image and an audio clip into an animated video, which is impressive, if a little creepy.

    The benefits – such as enhancing educational equity, improving accessibility for individuals with communication challenges, offering companionship or therapeutic support to those in need, among many others – underscore the importance of our research and other related explorations.

    Microsoft says it won’t release a demo, API, or product with VASA-1 “until we are certain that the technology will be used responsibly.”


  • Emma Roth

    Apr 18

    Meta is adding real-time AI image generation to WhatsApp

    An image showing the WhatsApp logo in black
    Illustration: The Verge

    Meta is rolling out real-time AI image generation in beta for WhatsApp users in the US. As soon as you start typing a text-to-image prompt in a chat with Meta AI, you’ll see how the image changes as you add more detail about what you want to create.

    In the example shared by Meta, a user types in the prompt, “Imagine a soccer game on mars.” The generated image quickly changes from a typical soccer player to an entire soccer field on a Martian landscape. If you have access to the beta, you can try the feature for yourself by opening a chat with Meta AI and starting a prompt with the word “Imagine.”

    Read Article >
  • Meta’s battle with ChatGPT begins now

    Mark Zuckerberg onstage at Meta Connect 2023.
    Mark Zuckerberg announcing Meta’s AI assistant at Connect 2023.
    Image: Meta

    ChatGPT kicked off the AI chatbot race. Meta is determined to win it.

    To that end: the Meta AI assistant, introduced last September, is now being integrated into the search box of Instagram, Facebook, WhatsApp, and Messenger. It’s also going to start appearing directly in the main Facebook feed. You can still chat with it in the messaging inboxes of Meta’s apps. And for the first time, it’s now accessible via a standalone website at Meta.ai.

    Read Article >
  • OpenAI will give you a 50 percent discount for off-peak GPT use.

    OpenAI’s Batch API now lets users upload a file of bulk queries for the AI model, like categorizing data or tagging images, with the understanding that they won’t need immediate attention. Promising results within 24 hours lets those jobs run when there is unused compute power and keeps those pricey GPUs humming around the clock.
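
    The flow, as documented at the Batch API's launch, is: write one request per line to a JSONL file, upload it with purpose "batch", then create a batch job with a 24-hour completion window. The sketch below follows that documented flow using the official openai Python SDK; the ticket texts and model choice are illustrative.

    ```python
    # Sketch of the OpenAI Batch API flow: write requests to a JSONL file, upload it,
    # then queue a batch with a 24-hour completion window.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # 1. One JSON object per request, each with a custom_id for matching results later.
    requests = [
        {
            "custom_id": f"item-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-3.5-turbo",
                "messages": [{"role": "user", "content": f"Tag this support ticket: {text}"}],
            },
        }
        for i, text in enumerate(["My order never arrived", "How do I reset my password?"])
    ]
    with open("batch_input.jsonl", "w") as f:
        f.write("\n".join(json.dumps(r) for r in requests))

    # 2. Upload the file and create the batch; results land in an output file within 24 hours.
    batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
    batch = client.batches.create(
        input_file_id=batch_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
    print(batch.id, batch.status)  # poll client.batches.retrieve(batch.id) until "completed"
    ```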


  • I’ll see your Shrimp Jesus and raise you Spaghetti Jesus on a Lambo.

    A bunch of outlets covered AI-generated images of an unholy Jesus/shrimp hybrid going viral on Facebook earlier this year, but the attention didn’t prompt Zuck to take any action to slow the situation down. Here’s JC Noods lying back on a Lambo, in a post that has 36,000 likes on Facebook right now. The AI internet is going great, y’all.


    An AI generated image of Jesus made of spaghetti sitting on a Lambo made of green spaghetti. This description is accurate.
    This image represents decades of innovation and you will respect it.
  • OpenAI opens up an office in Japan.

    OpenAI chose Tokyo for its first office in Asia as it expands its footprint outside of the US. The company is also releasing a version of GPT-4 in Japanese.


  • Wes Davis

    Apr 14

    Someone is working on an AI fleshlight with RGB lights.

    Orifice.ai is where someone is tracking their progress in creating an apparently LLM-connected sex toy, complete with “generative moaning” and computer vision. (Spotted on Hacker News.)

    NSFW, by the way.


    [orifice.ai]

  • Wes Davis

    Apr 14

    “I am the voice of the Knight Industries 2000 microprocessor...”

    The Knight Rider Historians channel used ChatGPT, Amazon Polly, and a Raspberry Pi to make the world’s most annoying version of KITT, the AI that helps David Hasselhoff from its home inside the fictional Pontiac Firebird Trans Am in the Knight Rider series.

    What do we think? 3 out of 10? 4?