
Bing, Bard, and ChatGPT: How AI is rewriting the internet

Big players, including Microsoft with Bing AI (and Copilot), Google with Bard, and OpenAI with ChatGPT and GPT-4, are making AI chatbot technology that was previously restricted to test labs more accessible to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
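To make the "vast autocomplete" framing concrete, here is a toy sketch of statistical next-word prediction using bigram counts. Real LLMs use neural networks over subword tokens and billions of parameters, so this only illustrates the principle; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration of the "autocomplete" idea: count which word follows
# which in a tiny corpus, then predict the most common next word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common follower of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("on"))  # "the" — it follows "on" every time in the corpus
```

Note the model has no database of facts, just counts: it emits whatever is statistically plausible, which is exactly why such systems can state falsehoods fluently.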

But many more pieces of the AI landscape are coming into play, and there will be problems along the way. You can watch it all unfold here on The Verge.

  • Google launches Gemini, the AI model it hopes will take down GPT-4

    Gemini was designed from the beginning to be about much more than just text. Image: Google

    It’s the beginning of a new era of AI at Google, says CEO Sundar Pichai: the Gemini era. Gemini is Google’s latest large language model, which Pichai first teased at the I/O developer conference in June and is now launching to the public. To hear Pichai and Google DeepMind CEO Demis Hassabis describe it, it’s a huge leap forward in an AI model that will ultimately affect practically all of Google’s products. “One of the powerful things about this moment,” Pichai says, “is you can work on one underlying technology and make it better and it immediately flows across our products.” 

    Gemini is more than a single AI model. There’s a lighter version called Gemini Nano that is meant to be run natively and offline on Android devices. There’s a beefier version called Gemini Pro that will soon power lots of Google AI services and is the backbone of Bard starting today. And there’s an even more capable model called Gemini Ultra that is the most powerful LLM Google has yet created and seems to be mostly designed for data centers and enterprise applications. 

    Read Article >
  • Google’s Bard chatbot is getting way better thanks to Gemini

    Bard has access to Google’s apps and now its best large language model, too. Image: Google

    While OpenAI’s ChatGPT has become a worldwide phenomenon and one of the fastest-growing consumer products ever, Google’s Bard has been something of an afterthought. The chatbot has steadily gained new features, including access to your data across other Google products, but its answers and information have rarely seemed to rival what you get from ChatGPT and other bots using GPT-3 and GPT-4. 

    The case for Bard may have just gotten more compelling, though: as of today, for English-speaking users in 170 countries, Bard is now powered by Google’s new Gemini model, which it says matches and even exceeds OpenAI’s tech in a number of ways. (Google says Gemini is coming to more languages and countries “in the near future.”)

    Read Article >
  • OpenAI execs dubbed ChatGPT a “low-key research preview.”

    The phrase became an internal joke after ChatGPT’s popularity exploded right out of the gate, according to the NYT’s recap of its launch a year ago and the reaction among Big Tech companies.

    Google and Meta scrambled AI teams to launch competing products like Bard and LLaMA, even if that meant removing some guardrails. And Microsoft’s rush to beat Google had Satya Nadella telling Nvidia’s Jensen Huang, “We have a big order coming to you, a really big order coming to you,” as he ordered $2 billion in chips.

  • Bing’s GPT-4-powered Deep Search takes its time with AI questions


    Microsoft is readying a new Bing feature that should take the hassle out of coming up with your own AI prompt. The GPT-4-powered capability, called Deep Search, takes your Bing query and expands on it, allowing the search engine to find answers about several topics related to your question on the web.

    As an example, Microsoft shows how Bing turns a vague search for “how do points systems work in Japan” into a detailed prompt that asks Bing to:

    Read Article >
  • Microsoft Copilot is now generally available.

    Copilot, the AI chatbot formerly known as Bing Chat, is out of preview. That means Copilot is now available in 105 languages and 169 countries “on all modern browsers for mobile and web,” according to Caitlin Roulston, the director of communications at Microsoft.

    Even though the preview label is going away today, Roulston says Microsoft will continue to “launch new features in preview while we iterate, listen to feedback, and improve the experience for our users.”

  • ChatGPT is winning the future — but what future is that?

    Illustration by Haein Jeong / The Verge

    There have been a handful of before-and-after moments in the modern technology era. Everything was one way, and then just like that, it was suddenly obvious it would never be like that again. Netscape showed the world the internet; Facebook made that internet personal; the iPhone made plain how the mobile era would take over. There are others — there’s a dating-app moment in there somewhere, and Netflix starting to stream movies might qualify, too — but not many.

    ChatGPT, which OpenAI launched a year ago today, might have been the lowest-key game-changer ever. Nobody took a stage and announced that they’d invented the future, and nobody thought they were launching the thing that would make them rich. If we’ve learned one thing in the last 12 months, it’s that no one — not OpenAI’s competitors, not the tech-using public, not even the platform’s creators — thought ChatGPT would become the fastest-growing consumer technology in history. And in retrospect, the fact that nobody saw ChatGPT coming might be exactly why it has seemingly changed everything.

    Read Article >
  • Microsoft joins OpenAI’s board with Sam Altman officially back as CEO

    Sam Altman at OpenAI’s developer conference.

    Sam Altman is officially OpenAI’s CEO again.

    Just before Thanksgiving, the company said it had reached a deal in principle for him to return, and now it’s done. Microsoft is getting a non-voting observer seat on the nonprofit board that controls OpenAI as well, the company announced on Wednesday.

    Read Article >
  • A troll worthy of Clippy themself.

    This Wall Street Journal article about the recent drama at OpenAI contains an amazing anecdote. Apparently an employee at AI rival Anthropic thought it’d be funny to send “thousands of paper clips in the shape of OpenAI’s logo” as a prank, in reference to the infamous paperclip maximizer thought experiment.

    Weirdly, I think OpenAI’s logo makes for a great paperclip design. Should we be worried?

  • Sony finished its second round of tests of its in-camera authenticity tech.

    Wes Davis, Nov 22

    The company tested baking a cryptographic “digital signature” into photos taken by its cameras to set them apart from AI-generated or otherwise faked images. Sony says the feature will come to cameras like the Alpha 9 III via a firmware update in Spring 2024.

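Conceptually, a capture-time signature works like the minimal sketch below. It uses a shared-secret HMAC purely as a compact stand-in: real in-camera systems (including C2PA-style schemes) use asymmetric signatures with a private key held in secure hardware, so anyone can verify a photo without being able to forge one. The key and image bytes here are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical camera key, standing in for a private key in secure hardware.
CAMERA_KEY = b"example-secret-key"

def sign_photo(image_bytes: bytes) -> str:
    """'Bake' a signature over the raw image bytes at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, signature: str) -> bool:
    """Recompute the signature; any change to the pixels invalidates it."""
    return hmac.compare_digest(sign_photo(image_bytes), signature)

photo = b"\x89RAW...sensor data..."
sig = sign_photo(photo)
print(verify_photo(photo, sig))            # True: untouched image
print(verify_photo(photo + b"edit", sig))  # False: tampered image
```

An AI-generated or edited file simply lacks a valid camera signature, which is the property Sony is testing for.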
  • OpenAI drops a big new ChatGPT feature with a joke about its CEO drama

    Emma Roth, Nov 21


    ChatGPT’s voice feature is now available to all users for free. In a post on X (formerly Twitter), OpenAI announced users can now tap the headphones icon to use their voice to talk with ChatGPT in the mobile app, as well as get an audible response.

    OpenAI first rolled out the ability to prompt ChatGPT with your voice and images in September, but it only made the feature available to paying users.

    Read Article >
  • OpenAI rival Anthropic makes its Claude chatbot even more useful

    Wes Davis, Nov 21

    Anthropic gives Claude more abilities. Image: Anthropic

    While OpenAI is in the middle of an existential crisis, there’s a new chatbot update from Anthropic, the Google-backed AI startup founded by former OpenAI engineers who left over the company’s increasingly commercial direction under its Microsoft partnership.

    Anthropic has announced that the latest update of its chatbot, Claude 2.1, can digest up to 200,000 tokens at once for Pro tier users, which it says equals over 500 pages of material.

    Read Article >
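Anthropic's "200,000 tokens is over 500 pages" claim checks out as back-of-the-envelope arithmetic, assuming the common rule of thumb that a token is roughly three-quarters of an English word (an assumption, not Anthropic's stated figure):

```python
# Sanity check on "200K tokens ≈ 500 pages of material".
context_tokens = 200_000
claimed_pages = 500

tokens_per_page = context_tokens / claimed_pages  # 400 tokens per page
words_per_page = tokens_per_page * 0.75           # ~300 words per page

print(tokens_per_page, words_per_page)  # 400.0 300.0
```

About 300 words per page is in line with a typical printed book page, so the claim is plausible.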
  • Meta disbanded its Responsible AI team

    Wes Davis, Nov 18

    Illustration by Nick Barclay / The Verge

    Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence. The Information broke the news today, citing an internal post it had seen.

    According to the report, most RAI members will move to the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company regularly says it wants to develop AI responsibly and even has a page devoted to the promise, where the company lists its “pillars of responsible AI,” including accountability, transparency, safety, privacy, and more.

    Read Article >
  • Some of Bing’s search results now have AI-generated descriptions.

    Emma Roth, Nov 17

    Microsoft says it’s using GPT-4 to garner the “most pertinent insights” from webpages and write summaries beneath Bing search results. You won’t see these summaries beneath every search result, but you can check which ones are AI-generated by clicking the little arrow next to the result’s URL. If the description is written by AI, it’ll say “AI-Generated Caption.”

    Screenshot by Emma Roth / The Verge
  • Google’s next-generation ‘Gemini’ AI model is reportedly delayed.

    Earlier this year, Google combined two AI teams into one group, which is working on a new model to compete with OpenAI’s GPT-4. Its leader, Demis Hassabis, discussed the merger on Decoder:

    And we’re already feeling, even a couple of months in, the benefits and the strengths of that with projects like Gemini that you may have heard of, which is our next-generation multimodal large models — very, very exciting work going on there, combining all the best ideas from across both world-class research groups. It’s pretty impressive to see.

    Now, The Information cites two sources saying the launch is expected in Q1 2024, not this month as they had previously been told. It also reports that Google co-founder Sergey Brin has been spending “four to five days a week” with the developers.

  • YouTube previews AI tool that clones famous singers — with their permission

    Google is testing new generative AI features for YouTube that’ll let people create music tracks using just a text prompt or a simple hummed tune. The first, Dream Track, already seeded to a few creators on the platform, is designed to auto-generate short 30-second music tracks in the style of famous artists. The feature can imitate nine different artists, who’ve chosen to collaborate with YouTube on its development. YouTube is also showing off new tools that can generate music tracks from a hum.

    The announcement comes as YouTube attempts to navigate the emerging norms and copyright rules around AI-generated music while also protecting its relationship with major music labels. The issue was brought into sharp relief when an AI-generated “Drake” song went viral earlier this year, and YouTube subsequently announced a deal to work with Universal Music as it establishes rules around AI-generated music on its platform.

    Read Article >
  • Microsoft’s Copilot AI gets more personalized in its first update since launch

    Wes Davis, Nov 15


    Microsoft announced a plethora of changes for its Microsoft Copilot AI during Microsoft Ignite today that make the chatbot more interactive and participatory, particularly in Teams meetings. The updates expand Copilot’s role as an enterprise helper in Office apps like Teams, PowerPoint, and Outlook.

    Microsoft has added some flexibility to the chatbot’s output, so users can tweak it with instructions to make its formatting and tone more to their personal liking. Word and PowerPoint will get the new personalization features to start, but the company says other Microsoft 365 apps will gain support in time.

    Read Article >
  • Airbnb just bought its very own AI company, led by a Siri co-founder.

    Wes Davis, Nov 14

    CNBC reports Airbnb bought the “stealth mode” Gameplanner.AI for almost $200 million, and notes Airbnb’s plans to use generative AI to help with trip planning.

    The article says Gameplanner.AI was founded in 2020 by former Siri co-founder Adam Cheyer, whose later work included co-founding Viv Labs, which Samsung bought to make Bixby.

  • Barack Obama on AI, free speech, and the future of the internet.

    In a sitdown with Verge EIC Nilay Patel on Decoder, the 44th president discussed Joe Biden’s recently signed executive order on AI, why Obama disagrees with the idea that social networks are a “common carrier,” and which iPhone apps he uses the most now that he’s no longer president and can use an iPhone.

  • OpenAI wants to be the App Store of AI

    OpenAI CEO Sam Altman. Photo by Andrew Caballero-Reynolds

    Moments after OpenAI’s big keynote wrapped in San Francisco on Monday, the reporters in attendance made our way down to a private room to chat with CEO Sam Altman and CTO Mira Murati. During the Q&A, they elaborated on the big news that had just been shared onstage: OpenAI is launching a platform for creating and discovering custom versions of ChatGPT.

    There are natural parallels to draw between OpenAI’s GPT Store, which is set to go live in a few weeks, and the debut of the iPhone’s App Store in 2008. Like Apple way back then, OpenAI is inviting developers who are excited about this new wave of technology to hopefully help create a new, enduring platform. 

    Read Article >
  • Google is bringing generative AI to advertisers

    Very clickable good dogs. Image: Google

    Google is rolling out new generative AI tools for creating ads, from writing the headlines and descriptions that appear along with searches to creating and editing the accompanying images. It’s pitching the tool to both advertising agencies and businesses without in-house creative staff. Using text prompts, advertisers can iterate on the text and images they generate until they find something they like.

    Google also promises that it will never generate the same image twice, which avoids the awkward possibility that two competing businesses end up with the exact same photo elements.

    Read Article >
  • OpenAI turbocharges GPT-4 and makes it cheaper


    OpenAI announced more improvements to its large language models, GPT-4 and GPT-3.5, including updated knowledge bases and a much longer context window. The company says it will also follow Google and Microsoft’s lead and begin protecting customers against copyright lawsuits.

    GPT-4 Turbo, currently available via an API preview, has been trained with information dating to April 2023, the company announced Monday at its first-ever developer conference. The earlier version of GPT-4 released in March only learned from data dated up to September 2021. OpenAI plans to release a production-ready Turbo model in the next few weeks but did not give an exact date.

    Read Article >
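For developers, the preview model is reached through the same Chat Completions endpoint as earlier GPT-4 releases, just under a new model identifier. The sketch below only constructs the request body to show its shape, with no network call; "gpt-4-1106-preview" was the identifier announced for the Turbo preview, and the example question is invented:

```python
import json

# Sketch of a Chat Completions request body targeting the GPT-4 Turbo
# preview. Nothing is sent anywhere; this just shows the payload shape.
payload = {
    "model": "gpt-4-1106-preview",  # preview identifier for GPT-4 Turbo
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the news from April 2023."},
    ],
}

body = json.dumps(payload)  # serialized JSON body for the POST request
```

Swapping the `model` string is the only change needed to move existing GPT-4 integrations onto the Turbo preview, which is what makes the longer context and newer knowledge cutoff a drop-in upgrade.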
  • OpenAI is letting anyone create their own version of ChatGPT


    With the release of ChatGPT one year ago, OpenAI introduced the world to the idea of an AI chatbot that can seemingly do anything. Now, the company is releasing a platform for making custom versions of ChatGPT for specific use cases — no coding required.

    In the coming weeks, these AI agents, which OpenAI is calling GPTs, will be accessible through the GPT Store. Details about how the store will look and work are scarce for now, though OpenAI is promising to eventually pay creators an unspecified amount based on how much their GPTs are used. GPTs will be available to paying ChatGPT Plus subscribers and OpenAI enterprise customers, who can make internal-only GPTs for their employees.

    Read Article >
  • ChatGPT continues to be one of the fastest-growing services ever


    One hundred million people are using ChatGPT on a weekly basis, OpenAI CEO Sam Altman announced at its first-ever developer conference on Monday. Since releasing its ChatGPT and Whisper models via API in March, the company also now boasts over two million developers, including over 92 percent of Fortune 500 companies.

    OpenAI announced the figures as it detailed a range of new features, including a platform for building custom versions of ChatGPT to help with specific tasks and GPT-4 Turbo, a new model that has knowledge of world events up to April 2023 and which can fit the equivalent of over 300 pages of text in a single prompt.

    Read Article >
  • ChatGPT subscribers may get a ‘GPT builder’ option soon


    Update November 6th, 10AM ET: We have all the confirmed details about the future of ChatGPT right here in our coverage of OpenAI’s developer event. The original article continues below.

    Just as OpenAI is preparing for its first-ever developer conference, a significant ChatGPT update has leaked.

    Read Article >
  • A look at why the Leica M11-P’s Content Credentials matter.

    Over on MKBHD’s The Studio, David Imel talked about the Leica M11-P. Or, more accurately, he used it to talk about Content Credentials, which the $9,000-plus camera attaches to photos as they’re taken so they can be verified through Adobe’s Content Authenticity Initiative (CAI).

    It’s a good look at CAI and its potential benefits to the media using the first-ever camera to participate in an initiative intended to help onlookers identify real-world images in a sea of AI-generated ones.