Artificial Intelligence


Google’s frighteningly good Veo 3 AI videos to be integrated with YouTube Shorts

Even in the age of TikTok, YouTube viewership continues to climb. While Google’s iconic video streaming platform has traditionally pushed creators to produce longer videos that can accommodate more ads, the site’s Shorts format is growing fast. That growth may explode in the coming months, as YouTube CEO Neal Mohan has announced that the Google Veo 3 AI video generator will be integrated with YouTube Shorts later this summer.

According to Mohan, YouTube Shorts has grown even faster than YouTube as a whole. The streaming platform is now the most-watched source of video in the world, and Shorts specifically have seen a massive 186 percent increase in viewership over the past year. Mohan says Shorts now average 200 billion daily views.

YouTube has already equipped creators with a few AI tools, including Dream Screen, which can produce AI video backgrounds with a text prompt. Veo 3 support will be a significant upgrade, though. At the Cannes festival, Mohan revealed that the streaming site will begin offering integration with Google’s leading video model later this summer. “I believe these tools will open new creative lanes for everyone to explore,” said Mohan.

YouTube heavily promotes Shorts on the homepage. Credit: Google

This move will require a few tweaks to Veo 3 outputs, but it seems like a perfect match. As the name implies, YouTube Shorts is intended for short video content. The format initially launched with a 30-second ceiling, but that has since been increased to 60 seconds. Because of the astronomical cost of generative AI, each generated Veo clip is quite short, a mere eight seconds in the current version of the tool. Slap a few of those together, and you’ve got a YouTube Short.



Google can now generate a fake AI podcast of your search results

NotebookLM is undoubtedly one of Google’s best implementations of generative AI technology, giving you the ability to explore documents and notes with a Gemini AI model. Last year, Google added the ability to generate so-called “audio overviews” of your source material in NotebookLM. Now, Google has brought those fake AI podcasts to search results as a test. Instead of clicking links or reading the AI Overview, you can have two nonexistent people tell you what the results say.

This feature is not currently rolling out widely—it’s available in Search Labs, which means you have to enable it manually. Anyone can opt in to the new Audio Overview search experience, though. If you join the test, you’ll quickly see the embedded player in Google search results. However, it’s not at the top with the usual block of AI-generated text. Instead, you’ll see it after the first few search results, below the “People also ask” knowledge graph section.

Credit: Google

Google isn’t wasting resources to generate the audio automatically, so you have to click the generate button to get started. A few seconds later, you’re given a back-and-forth conversation between two AI voices summarizing the search results. The player includes a list of sources from which the overview is built, as well as the option to speed up or slow down playback.



How to draft a will to avoid becoming an AI ghost—it’s not easy


Why requests for “no AI resurrections” will probably go ignored.

All right! This AI is TOAST! Credit: Aurich Lawson

As artificial intelligence has advanced, AI tools have emerged to make it possible to easily create digital replicas of lost loved ones, which can be generated without the knowledge or consent of the person who died.

Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. Chatting provides what some mourners feel is a close approximation to ongoing interactions with the people they love most. But the tech remains controversial, perhaps complicating the grieving process while threatening to infringe upon the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.

Because of suspected harms and perhaps a general revulsion at the idea, not everybody wants to become an AI ghost.

After a realistic video simulation was recently used to provide a murder victim’s impact statement in court, Futurism summed up social media backlash, noting that the use of AI was “just as unsettling as you think.” And it’s not the first time people have expressed discomfort with the growing trend. Last May, The Wall Street Journal conducted a reader survey seeking opinions on the ethics of so-called AI resurrections. Responding, a California woman, Dorothy McGarrah, suggested there should be a way to prevent AI resurrections in your will.

“Having photos or videos of lost loved ones is a comfort. But the idea of an algorithm, which is as prone to generate nonsense as anything lucid, representing a deceased person’s thoughts or behaviors seems terrifying. It would be like generating digital dementia after your loved ones’ passing,” McGarrah said. “I would very much hope people have the right to preclude their images being used in this fashion after death. Perhaps something else we need to consider in estate planning?”

For experts in estate planning, the question may start to arise as more AI ghosts pop up. But for now, writing “no AI resurrections” into a will remains a complicated process, experts suggest. Such requests may not be honored unless laws change to reinforce a culture of respecting the wishes of people who are uncomfortable with the idea of haunting their favorite people through AI simulations.

Can you draft a will to prevent AI resurrection?

Ars contacted several law associations to find out if estate planners are seriously talking about AI ghosts. Only the National Association of Estate Planners and Councils responded; it connected Ars to Katie Sheehan, an expert in the estate planning field who serves as a managing director and wealth strategist for Crestwood Advisors.

Sheehan told Ars that very few estate planners are prepared to answer questions about AI ghosts. She said not only does the question never come up in her daily work, but it’s also “essentially uncharted territory for estate planners since AI is relatively new to the scene.”

“I have not seen any documents drafted to date taking this into consideration, and I review estate plans for clients every day, so that should be telling,” Sheehan told Ars.

Although Sheehan has yet to see a will attempting to prevent AI resurrection, she told Ars that there could be a path to make it harder for someone to create a digital replica without consent.

“You certainly could draft into a power of attorney (for use during lifetime) and a will (for use post death) preventing the fiduciary (attorney in fact or executor) from lending any of your texts, voice, image, writings, etc. to any AI tools and prevent their use for any purpose during life or after you pass away, and/or lay the ground rules for when they can and cannot be used after you pass away,” Sheehan told Ars.

“This could also invoke issues with contract, property and intellectual property rights, and right of publicity as well if AI replicas (image, voice, text, etc.) are being used without authorization,” Sheehan said.

And there are likely more protections for celebrities than for everyday people, Sheehan suggested.

“As far as I know, there is no law” preventing unauthorized non-commercial digital replicas, Sheehan said.

Widely adopted by states, the Revised Uniform Fiduciary Access to Digital Assets Act—which governs who gets access to online accounts of the deceased, like social media or email accounts—could be helpful but isn’t a perfect remedy.

That law doesn’t directly “cover someone’s AI ghost bot, though it may cover some of the digital material some may seek to use to create a ghost bot,” Sheehan said.

“Absent any law” blocking non-commercial digital replicas, Sheehan expects that people’s requests for “no AI resurrections” will likely “be dealt with in the courts and governed by the terms of one’s estate plan, if it is addressed within the estate plan.”

Those potential fights seemingly could get hairy, as “it may be some time before we get any kind of clarity or uniform law surrounding this,” Sheehan suggested.

In the future, Sheehan said, requests prohibiting digital replicas may eventually become “boilerplate language in almost every will, trust, and power of attorney,” just as instructions on digital assets are now.

As “all things AI become more and more a part of our lives,” Sheehan said, “some aspects of AI and its components may also be woven throughout the estate plan regularly.”

“But we definitely aren’t there yet,” she said. “I have had zero clients ask about this.”

Requests for “no AI resurrections” will likely be ignored

Whether loved ones would—or even should—respect requests blocking digital replicas appears to be debatable. But at least one person who built a grief bot wished he’d done more to get his dad’s permission before moving forward with his own creation.

A computer science professor at the University of Washington Bothell, Muhammad Aurangzeb Ahmad, was one of the earliest AI researchers to create a grief bot, building one more than a decade ago after his father died. After seeing how incredible his dad was as a grandfather, he built the bot to ensure that his future kids would be able to interact with their grandfather.

When Ahmad started his project, there was no ChatGPT or other advanced AI model to serve as the foundation, so he had to train his own model based on his dad’s data. Putting immense thought into the effort, Ahmad decided to close off the system from the rest of the Internet so that only his dad’s memories would inform the model. To prevent unauthorized chats, he kept the bot on a laptop that only his family could access.

Ahmad was so intent on building a digital replica that felt just like his dad that it didn’t occur to him until after his family started using the bot that he never asked his dad if this was what he wanted. Over time, he realized that the bot was biased to his view of his dad, perhaps even feeling off to his siblings who had a slightly different relationship with their father. It’s unclear if his dad would similarly view the bot as preserving just one side of him.

Ultimately, Ahmad didn’t regret building the bot, and he told Ars he thinks his father “would have been fine with it.”

But he did regret not getting his father’s consent.

For people creating bots today, seeking consent may be appropriate if there’s any chance the bot may be publicly accessed, Ahmad suggested. He told Ars that he would never have been comfortable with the idea of his dad’s digital replica being publicly available because the question of an “accurate representation” would come even more into play, as malicious actors could potentially access it and sully his dad’s memory.

Today, anybody can use ChatGPT’s model to freely create a similar bot with their own loved one’s data. And a wide range of grief tech services have popped up online, including HereAfter AI, SeanceAI, and StoryFile, Axios noted in an October report detailing the latest ways “AI could be used to ‘resurrect’ loved ones.” As this trend continues “evolving very fast,” Ahmad told Ars that estate planning is probably the best way to communicate one’s AI ghost preferences.

But in a recently published article on “The Law of Digital Resurrection,” law professor Victoria Haneman warned that “there is no legal or regulatory landscape against which to estate plan to protect those who would avoid digital resurrection, and few privacy rights for the deceased. This is an intersection of death, technology, and privacy law that has remained relatively ignored until recently.”

Haneman agreed with Sheehan that “existing protections are likely sufficient to protect against unauthorized commercial resurrections”—like when actors or musicians are resurrected for posthumous performances. However, she thinks that for personal uses, digital resurrections may best be blocked not through estate planning but by passing a “right to deletion” that would focus on granting the living or next of kin the rights to delete the data that could be used to create the AI ghost rather than regulating the output.

A “right to deletion” could help people fight inappropriate uses of their loved ones’ data, whether AI is involved or not. After her article was published, a lawyer reached out to Haneman about a client’s deceased grandmother whose likeness was used to create a meme of her dancing in a church. The grandmother wasn’t a public figure, and the client had no idea “why or how somebody decided to resurrect her deceased grandmother,” Haneman told Ars.

Although Haneman sympathized with the client, “if it’s not being used for a commercial purpose, she really has no control over this use,” Haneman said. “And she’s deeply troubled by this.”

Haneman’s article offers a rare deep dive into the legal topic. It sensitively maps out the vague territory of digital rights of the dead and explains how those laws—or the lack thereof—interact with various laws dealing with death, from human remains to property rights.

In it, Haneman also points out that, on balance, the rights of the living typically outweigh the rights of the dead, and even specific instructions on how to handle human remains aren’t generally considered binding. Some requests, like organ donation that can benefit the living, are considered critical, Haneman noted. But there are mixed results on how courts enforce other interests of the dead—like a famous writer’s request to destroy all unpublished work or a pet lover’s insistence that their cat or dog be destroyed at death.

She told Ars that right now, “a lot of people are like, ‘Why do I care if somebody resurrects me after I’m dead?’ You know, ‘They can do what they want.’ And they think that, until they find a family member who’s been resurrected by a creepy ex-boyfriend or their dead grandmother’s resurrected, and then it becomes a different story.”

Existing law may protect “the privacy interests of the loved ones of the deceased from outrageous or harmful digital resurrections of the deceased,” Haneman noted, but in the case of the dancing grandma, her meme may not be deemed harmful, no matter how much it troubles the grandchild to see her grandma’s memory warped.

Limited legal protections may not matter so much if, culturally, communities end up developing a distaste for digital replicas, particularly if it becomes widely viewed as disrespectful to the dead, Haneman suggested. Right now, however, society is more fixated on solving other problems with deepfakes rather than clarifying the digital rights of the dead. That could be because few people have been impacted so far, or it could also reflect a broader cultural tendency to ignore death, Haneman told Ars.

“We don’t want to think about our own death, so we really kind of brush aside whether or not we care about somebody else being digitally resurrected until it’s in our face,” Haneman said.

Over time, attitudes may change, especially if the so-called “digital afterlife industry” takes off. And there is some precedent that the law could be changed to reinforce any culture shift.

“The throughline revealed by the law of the dead is that a sacred trust exists between the living and the deceased, with an emphasis upon protecting common humanity, such that data afforded no legal status (or personal data of the deceased) may nonetheless be treated with dignity and receive some basic protections,” Haneman wrote.

An alternative path to prevent AI resurrection

Preventing yourself from becoming an AI ghost seemingly now falls in a legal gray zone that policymakers may need to address.

Haneman calls for a solution that doesn’t depend on estate planning, which she warned “is a structurally inequitable and anachronistic approach that maximizes social welfare only for those who do estate planning.” More than 60 percent of Americans die without a will, often including “those without wealth,” as well as women and racial minorities who “are less likely to die with a valid estate plan in effect,” Haneman reported.

“We can do better in a technology-based world,” Haneman wrote. “Any modern framework should recognize a lack of accessibility as an obstacle to fairness and protect the rights of the most vulnerable through approaches that do not depend upon hiring an attorney and executing an estate plan.”

Rather than twist the law to “recognize postmortem privacy rights,” Haneman advocates for a path for people resistant to digital replicas that focuses on a right to delete the data that would be used to create the AI ghost.

“Put simply, the deceased may exert control over digital legacy through the right to deletion of data but may not exert broader rights over non-commercial digital resurrection through estate planning,” Haneman recommended.

Sheehan told Ars that a right to deletion would likely involve estate planners, too.

“If this is not addressed in an estate planning document and not specifically addressed in the statute (or deemed under the authority of the executor via statute), then the only way to address this would be to go to court,” Sheehan said. “Even with a right of deletion, the deceased would need to delete said data before death or authorize his executor to do so post death, which would require an estate planning document, statutory authority, or court authority.”

Haneman agreed that for many people, estate planners would still be involved, recommending that “the right to deletion would ideally, from the perspective of estate administration, provide for a term of deletion within 12 months.” That “allows the living to manage grief and open administration of the estate before having to address data management issues,” Haneman wrote, and perhaps adequately balances “the interests of society against the rights of the deceased.”

To Haneman, it’s also the better solution for the people left behind because “creating a right beyond data deletion to curtail unauthorized non-commercial digital resurrection creates unnecessary complexity that overreaches, as well as placing the interests of the deceased over those of the living.”

Future generations may be raised with AI ghosts

If the dystopia that some experts paint comes true, Big Tech companies may one day profit by targeting grieving individuals to seize the data of the dead, which could be more easily abused since it’s granted fewer rights than data of the living.

Perhaps in that future, critics suggest, people will be tempted into free trials in moments when they’re missing their loved ones most, then forced to either pay a subscription to continue accessing the bot or else perhaps be subjected to ad-based models where their chats with AI ghosts may even feature ads in the voices of the deceased.

Today, even in a world where AI ghosts aren’t yet compelling ad clicks, AI ethicists have warned that interacting with AI ghosts could cause mental health harms, New Scientist reported, especially if the digital afterlife industry isn’t carefully designed. Some people may end up stuck maintaining an AI ghost left behind as a gift, and ethicists suggest the emotional weight of that could eventually take a negative toll. While saying goodbye is hard, letting go is considered a critical part of healing during the mourning process, and AI ghosts may make that harder.

But the bots can be a helpful tool to manage grief, some experts suggest, provided that their use is limited to allow for a typical mourning process or combined with therapy from a trained professional, Al Jazeera reported. Ahmad told Ars that working on his bot has not only kept his father close to him but also helped him think more deeply about relationships and memory.

Haneman noted that people have many ways of honoring the dead. Some erect statues, and others listen to saved voicemails or watch old home movies. For some, just “smelling an old sweater” is a comfort. And creating digital replicas, as creepy as some people might find them, is not that far off from these traditions, Haneman said.

“Feeding text messages and emails into existing AI platforms such as ChatGPT and asking the AI to respond in the voice of the deceased is simply a change in degree, not in kind,” Haneman said.

For Ahmad, the decision to create a digital replica of his dad was a learning experience, and perhaps his experience shows why any family or loved one weighing the option should carefully consider it before starting the process.

In particular, he warns families to be careful introducing young kids to grief bots, as they may not be able to grasp that the bot is not a real person. When he initially saw his young kids growing confused with whether their grandfather was alive or not—the introduction of the bot was complicated by the early stages of the pandemic, a time when they met many relatives virtually—he decided to restrict access to the bot until they were older. For a time, the bot only came out for special events like birthdays.

He also realized that introducing the bot forced him to have conversations about life and death with his kids at ages younger than he remembered fully understanding those concepts in his own childhood.

Now, Ahmad’s kids are among the first to be raised among AI ghosts, and Ahmad continually updates his father’s digital replica to enhance the family’s experience. He is currently most excited about recent audio advancements that make it easier to add a voice element. He hopes that within the next year, he might be able to use AI to finally nail down his South Asian father’s accent, which up to now has always sounded “just off.” For others working in this space, the next frontier is realistic video or even augmented reality tools, Ahmad told Ars.

To this day, the bot retains sentimental value for Ahmad, but, as Haneman suggested, the bot was not the only way he memorialized his dad. He also created a mosaic, and while his father never saw it, either, Ahmad thinks his dad would have approved.

“He would have been very happy,” Ahmad said.

There’s no way to predict how future generations may view grief tech. But while Ahmad said he’s not sure he’d be interested in an augmented reality interaction with his dad’s digital replica, kids raised seeing AI ghosts as a natural part of their lives may not be as hesitant to embrace or even build new features. Talking to Ars, Ahmad fondly remembered how his young daughter, seeing that he was feeling sad, once came up with her own AI idea to help her dad feel better.

“It would be really nice if you can just take this program and we build a robot that looks like your dad, and then add it to the robot, and then you can go and hug the robot,” she said, according to her father’s memory.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



AI Overviews hallucinates that Airbus, not Boeing, involved in fatal Air India crash

When major events occur, most people rush to Google to find information. Increasingly, the first thing they see is an AI Overview, a feature that already has a reputation for making glaring mistakes. In the wake of a tragic plane crash in India, Google’s AI search results are spreading misinformation claiming the incident involved an Airbus plane—it was actually a Boeing 787.

Travelers are more attuned to the airliner models these days after a spate of crashes involving Boeing’s 737 lineup several years ago. Searches for airline disasters are sure to skyrocket in the coming days, with reports that more than 200 passengers and crew lost their lives in the Air India Flight 171 crash. The way generative AI operates means some people searching for details may get the wrong impression from Google’s results page.

Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We’ve run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup of both Airbus and Boeing. It’s a mess.

In this search, Google’s AI says the crash involved an Airbus A330 instead of a Boeing 787. Credit: /u/stuckintrraffic

But why is Google bringing up the Air India crash at all in the context of Airbus? Unfortunately, it’s impossible to predict whether you’ll get an AI Overview that blames Boeing or Airbus—generative AI is non-deterministic, meaning the output can differ from run to run, even for identical inputs. Our best guess for the underlying cause is that numerous articles on the Air India crash mention Airbus as Boeing’s main competitor. AI Overviews is essentially summarizing these results, and the AI goes down the wrong path because it lacks the ability to understand what is true.
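That run-to-run variability comes from how language models pick each next word: they sample from a probability distribution over tokens rather than always choosing the most likely one. A minimal sketch, with made-up tokens and probabilities (this is an illustration of temperature sampling in general, not Google's actual system):

```python
import random

def sample_next_token(probs, temperature=1.0, rng=random):
    """Sample one token from a {token: probability} dict.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output).
    """
    # Rescale each probability by the temperature.
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    # Pick a point along the cumulative distribution.
    r = rng.random() * total
    cumulative = 0.0
    for tok, weight in scaled.items():
        cumulative += weight
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding at the boundary

# Hypothetical next-token distribution after "the crash involved..."
probs = {"Boeing": 0.6, "Airbus": 0.4}

# Repeated calls with the identical input can return different tokens.
samples = {sample_next_token(probs) for _ in range(1000)}
```

With enough samples, both tokens appear, which is exactly why two identical searches can yield one overview blaming Boeing and another blaming Airbus.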



“Yuck”: Wikipedia pauses AI summaries after editor revolt

Generative AI is permeating the Internet, with chatbots and AI summaries popping up faster than we can keep track. Even Wikipedia, the vast repository of knowledge famously maintained by an army of volunteer human editors, is looking to add robots to the mix. The site began testing AI summaries in some articles over the past week, but the project has been frozen after editors voiced their opinions. And that opinion is: “yuck.”

The seeds of this project were planted at Wikimedia’s 2024 conference, where foundation representatives and editors discussed how AI could advance Wikipedia’s mission. The wiki on the so-called “Simple Article Summaries” notes that the editors who participated in the discussion believed the summaries could improve learning on Wikipedia.

According to 404 Media, Wikipedia announced the opt-in AI pilot on June 2, which was set to run for two weeks on the mobile version of the site. The summaries appeared at the top of select articles in a collapsed form. Users had to tap to expand and read the full summary. The AI text also included a highlighted “Unverified” badge.

Feedback from the larger community of editors was immediate and harsh. Some of the first comments were simply “yuck,” with others calling the addition of AI a “ghastly idea” and “PR hype stunt.”

Others expounded on the issues with adding AI to Wikipedia, citing a potential loss of trust in the site. Editors work together to ensure articles are accurate, featuring verifiable information and a neutral point of view. However, nothing is certain when you put generative AI in the driver’s seat. “I feel like people seriously underestimate the brand risk this sort of thing has,” said one editor. “Wikipedia’s brand is reliability, traceability of changes, and ‘anyone can fix it.’ AI is the opposite of these things.”



Anthropic releases custom AI chatbot for classified spy work

On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released “Claude Gov” models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

The Claude Gov models differ from Anthropic’s consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, “refuse less” when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls “enhanced proficiency” in languages and dialects critical to national security operations.

Anthropic says the new models underwent the same “safety testing” as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.

Anthropic is not the first company to offer specialized chatbot services for intelligence agencies. In 2024, Microsoft launched an isolated version of OpenAI’s GPT-4 for the US intelligence community after 18 months of work. That system, which operated on a special government-only network without Internet access, became available to about 10,000 individuals in the intelligence community for testing and answering questions.



Google’s nightmare: How a search spinoff could remake the web


Google has shaped the Internet as we know it, and unleashing its index could change everything.

Google may be forced to license its search technology when the final antitrust ruling comes down. Credit: Aurich Lawson

Google wasn’t around for the advent of the World Wide Web, but it successfully remade the web on its own terms. Today, any website that wants to be findable has to play by Google’s rules, and after years of search dominance, the company has lost a major antitrust case that could reshape both it and the web.

The closing arguments in the case just wrapped up last week, and Google could be facing serious consequences when the ruling comes down in August. Losing Chrome would certainly change things for Google, but the Department of Justice is pursuing other remedies that could have even more lasting impacts. During his testimony, Google CEO Sundar Pichai seemed genuinely alarmed at the prospect of being forced to license Google’s search index and algorithm, the so-called data remedies in the case. He claimed this would be no better than a spinoff of Google Search. The company’s statements have sometimes derisively referred to this process as “white labeling” Google Search.

But does a white label Google Search sound so bad? Google has built an unrivaled index of the web, but the way it shows results has become increasingly frustrating. A handful of smaller players in search have tried to offer alternatives to Google’s search tools. They all have different approaches to retrieving information for you, but they agree that spinning off Google Search could change the web again. Whether or not those changes are positive depends on who you ask.

The Internet is big and noisy

As Google’s search results have changed over the years, more people have been open to other options. Some have simply moved to AI chatbots to answer their questions, hallucinations be damned. But for most people, it’s still about the 10 blue links (for now).

Because of the scale of the Internet, there are only three general web search indexes: Google, Bing, and Brave. Every search product (including AI tools) relies on one or more of these indexes to probe the web. But what does that mean?

“Generally, a search index is a service that, when given a query, is able to find relevant documents published on the Internet,” said Brave’s search head Josep Pujol.

A search index is essentially a big database, and that’s not the same as search results. According to JP Schmetz, Brave’s chief of ads, it’s entirely possible to have the best and most complete search index in the world and still show poor results for a given query. Sound like anyone you know?
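To make the distinction concrete, here is a toy sketch of the core data structure behind any search index: an inverted index mapping each term to the documents that contain it. This is an illustration only, not how Google, Bing, or Brave are actually implemented; the documents and query are invented for the example.

```python
from collections import defaultdict

# Toy corpus: document ID -> text.
docs = {
    1: "google antitrust ruling could remake the web",
    2: "brave search index spans billions of pages",
    3: "kagi pulls results from multiple search index providers",
}

# Inverted index: term -> set of document IDs containing that term.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    """Return IDs of documents containing every query term."""
    terms = query.split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

print(search("search index"))  # {2, 3}
```

Note that this only retrieves matching documents; deciding which of them to show first is the separate ranking problem Schmetz is describing, and that is where a great index can still produce poor results.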

Google’s technological lead has allowed it to crawl more websites than anyone else. It has all the important parts of the web, plus niche sites, abandoned blogs, sketchy copies of legitimate websites, copies of those copies, and AI-rephrased copies of the copied copies—basically everything. And the result of this Herculean digital inventory is a search experience that feels increasingly discombobulated.

“Google is running large-scale experiments in ways that no rival can because we’re effectively blinded,” said Kamyl Bazbaz, head of public affairs at DuckDuckGo, which uses the Bing index. “Google’s scale advantage fuels a powerful feedback loop of different network effects that ensure a perpetual scale and quality deficit for rivals that locks in Google’s advantage.”

The size of the index may not be the only factor that matters, though. Brave, which is perhaps best known for its browser, also has a search engine. Brave Search is the default in its browser, but you can also just go to the URL in your current browser. Unlike most other search engines, Brave doesn’t need to go to anyone else for results. Pujol suggested that Brave doesn’t need the scale of Google’s index to find what you need. And admittedly, Brave’s search results don’t feel meaningfully worse than Google’s—they may even be better when you consider the way that Google tries to keep you from clicking.

Brave’s index spans around 25 billion pages, but it leaves plenty of the web uncrawled. “We could be indexing five to 10 times more pages, but we choose not to because not all the web has signal. Most web pages are basically noise,” said Pujol.

The freemium search engine Kagi isn’t worried about having the most comprehensive index. Kagi is a meta search engine. It pulls in data from multiple indexes, like Bing and Brave, but it has a custom index of what founder and CEO Vladimir Prelovac calls the “non-commercial web.”

When you search with Kagi, some of the results (it tells you the proportion) come from its custom index of personal blogs, hobbyist sites, and other content that is poorly represented on other search engines. It’s reminiscent of the days when huge brands weren’t always clustered at the top of Google—but even these results are being pushed out of reach in favor of AI, ads, Knowledge Graph content, and other Google widgets. That’s a big part of why Kagi exists, according to Prelovac.
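A meta search engine like Kagi has to combine ranked lists from several upstream indexes into one set of results. One simple, hypothetical way to do that is round-robin interleaving with deduplication, sketched below; Kagi's actual blending and weighting logic is not public, so this is purely illustrative.

```python
def merge_results(result_lists):
    """Interleave ranked result lists from several indexes,
    dropping duplicate URLs (first occurrence wins)."""
    seen = set()
    merged = []
    # Round-robin across sources so no single index dominates the top.
    for rank in range(max(len(r) for r in result_lists)):
        for results in result_lists:
            if rank < len(results) and results[rank] not in seen:
                seen.add(results[rank])
                merged.append(results[rank])
    return merged

# Invented example inputs standing in for Bing, Brave, and a custom index.
bing = ["a.com", "b.com", "c.com"]
brave = ["b.com", "d.com"]
custom = ["e.org"]
print(merge_results([bing, brave, custom]))
# ['a.com', 'b.com', 'e.org', 'd.com', 'c.com']
```

The round-robin step is one plausible design choice: it guarantees that a small specialized index (like Kagi's non-commercial web crawl) gets representation near the top rather than being buried under results from the much larger general indexes.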

A Google spinoff could change everything

We’ve all noticed the changes in Google’s approach to search, and most would agree that they have made finding reliable and accurate information harder. Regardless, Google’s incredibly deep and broad index of the Internet is in demand.

Even with Bing and Brave available, companies are going to extremes to syndicate Google Search results. A cottage industry has emerged to scrape Google searches as a stand-in for an official index. These companies are violating Google’s terms, yet they appear in Google Search results themselves. Google could surely do something about this if it wanted to.

The DOJ calls Google’s mountain of data the “essential raw material” for building a general search engine, and it believes forcing the firm to license that material is key to breaking its monopoly. The sketchy syndication firms will evaporate if the DOJ’s data remedies are implemented, which would give competitors an official way to utilize Google’s index. And utilize it they will.

Google CEO Sundar Pichai decried the court’s efforts to force a “de facto divestiture” of Google’s search tech.

Credit: Ryan Whitwam

According to Prelovac, this could lead to an explosion in search choices. “The whole purpose of the Sherman Act is to proliferate a healthy, competitive marketplace. Once you have access to a search index, then you can have thousands of search startups,” said Prelovac.

The Kagi founder suggested that licensing Google Search could allow entities of all sizes to have genuinely useful custom search tools. Cities could use the data to create deep, hyper-local search, and people who love cats could make a cat-specific search engine, in both cases pulling what they want from the most complete database of online content. And, of course, general search products like Kagi would be able to license Google’s tech for a “nominal fee,” as the DOJ puts it.

Prelovac didn’t hesitate when asked if Kagi, which offers a limited number of free searches before asking users to subscribe, would integrate Google’s index. “Yes, that is something we would do,” he said. “And that’s what I believe should happen.”

There may be some drawbacks to unleashing Google’s search services. Judge Amit Mehta has expressed concern that blocking Google’s search placement deals could reduce browser choice, and there is a similar issue with the data remedies. If Google is forced to license search as an API, its few competitors in web indexing could struggle to remain afloat. In a roundabout way, giving away Google’s search tech could actually increase its influence.

The Brave team worries about how open access to Google’s search technology could impact diversity on the web. “If implemented naively, it’s a big problem,” said Brave’s ad chief JP Schmetz. “If the court forces Google to provide search at a marginal cost, it will not be possible for Bing or Brave to survive until the remedy ends.”

The landscape of AI-based search could also change. We know from testimony given during the remedy trial by OpenAI’s Nick Turley that the ChatGPT maker tried and failed to get access to Google Search to ground its AI models—it currently uses Bing. If Google were suddenly an option, you can be sure OpenAI and others would rush to connect Google’s web data to their large language models (LLMs).

The attempt to reduce Google’s power could actually grant it new monopolies in AI, according to Brave Chief Business Officer Brian Brown. “All of a sudden, you would have a single monolithic voice of truth across all the LLMs, across all the web,” Brown said.

What if you weren’t the product?

If white labeling Google does expand choice, even at the expense of other indexes, it will give more kinds of search products a chance in the market—maybe even some that shun Google’s focus on advertising. You don’t see much of that right now.

For most people, web search is and always has been a free service supported by ads. Google, Brave, DuckDuckGo, and Bing offer all the search queries you want for free because they want eyeballs. It’s been said often, but it’s true: If you’re not paying for it, you’re the product. This is an arrangement that bothers Kagi’s founder.

“For something as important as information consumption, there should not be an intermediary between me and the information, especially one that is trying to sell me something,” said Prelovac.

Kagi search results acknowledge the negative impact of today’s advertising regime. Kagi users see a warning next to results with a high number of ads and trackers. According to Prelovac, that is by far the strongest indication that a result is of low quality. That icon also lets you adjust the prevalence of such sites in your personal results. You can demote a site or completely hide it, which is a valuable option in the age of clickbait.
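The demote/hide mechanic can be pictured as a per-domain score adjustment applied before results are sorted. The sketch below is an assumption about how such a feature could work, with invented domains and scores; it is not Kagi's actual ranking code.

```python
def rerank(results, prefs):
    """Re-rank results using per-domain user preferences.

    prefs maps domain -> adjustment: float("-inf") hides a site,
    a negative value demotes it, a positive value promotes it,
    and an absent entry leaves it neutral."""
    visible = [r for r in results if prefs.get(r["domain"]) != float("-inf")]
    return sorted(visible,
                  key=lambda r: r["score"] + prefs.get(r["domain"], 0),
                  reverse=True)

# Hypothetical results and preferences.
results = [
    {"domain": "adheavy.example", "score": 0.9},
    {"domain": "blog.example", "score": 0.8},
    {"domain": "spam.example", "score": 0.7},
]
prefs = {"adheavy.example": -0.5, "spam.example": float("-inf")}
print([r["domain"] for r in rerank(results, prefs)])
# ['blog.example', 'adheavy.example']
```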

Kagi search gives you a lot of control.

Credit: Ryan Whitwam

Kagi’s paid approach to search changes its relationship with your data. “We literally don’t need user data,” Prelovac said. “But it’s not only that we don’t need it. It’s a liability.”

Prelovac admitted that getting people to pay for search is “really hard.” Nevertheless, he believes ad-supported search is a dead end. So Kagi is planning for a future in five or 10 years when more people have realized they’re still “paying” for ad-based search with lost productivity time and personal data, he said.

We know how Google handles user data (it collects a lot of it), but what does that mean for smaller search engines like Brave and DuckDuckGo that rely on ads?

“I’m sure they mean well,” said Prelovac.

Brave said that it shields user data from advertisers, relying on first-party tracking to attribute clicks to Brave without touching the user. “They cannot retarget people later; none of that is happening,” said Brave’s JP Schmetz.

DuckDuckGo is a bit of an odd duck—it relies on Bing’s general search index, but it adds a layer of privacy tools on top. It’s free and ad-supported like Google and Brave, but the company says it takes user privacy seriously.

“Viewing ads is privacy protected by DuckDuckGo, and most ad clicks are managed by Microsoft’s ad network,” DuckDuckGo’s Kamyl Bazbaz said. He explained that DuckDuckGo has worked with Microsoft to ensure its network does not track users or create any profiles based on clicks. He added that the company has a similar privacy arrangement with TripAdvisor for travel-related ads.

It’s AI all the way down

We can’t talk about the future of search without acknowledging the artificially intelligent elephant in the room. As Google continues its shift to AI-based search, it’s tempting to think of the potential search spin-off as a way to escape that trend. However, you may find few refuges in the coming years. There’s a real possibility that search is evolving beyond the 10 blue links and toward an AI assistant model.

All non-Google search engines have AI integrations, with the most prominent being Microsoft Bing, which has a partnership with OpenAI. But smaller players have AI search features, too. The folks working on these products agree with Microsoft and Google on one important point: They see AI as inevitable.

Today’s Google alternatives all have their own take on AI Overviews, which generates responses to queries based on search results. They’re generally not as in-your-face as Google AI, though. While Google and Microsoft are intensely focused on increasing the usage of AI search, other search operators aren’t pushing for that future. They are along for the ride, though.


AI Overviews are integrated with Google’s search results, and most other players have their own version.

Credit: Google

“We’re finding that some people prefer to start in chat mode and then jump into more traditional search results when needed, while others prefer the opposite,” Bazbaz said. “So we thought the best thing to do was offer both. We made it easy to move between them, and we included an off switch for those who’d like to avoid AI altogether.”

The team at Brave views AI as a core means of accessing search and one that will continue to grow. Brave generates AI answers for many searches and prominently cites sources. You can also disable Brave’s AI if you prefer. But according to search chief Josep Pujol, the move to AI search is inevitable for a pretty simple reason: It’s convenient, and people will always choose convenience. So AI is changing the web as we know it, for better or worse, because LLMs can save a smidge of time, especially for more detailed “long-tail” queries. These AI features may give you false information while they do it, but that’s not always apparent.

This is very similar to the language Google uses when discussing agentic search, although it expresses it in a more nuanced way. By understanding the task behind a query, Google hopes to provide AI answers that save people time, even if the model needs a few ticks to fan out and run multiple searches to generate a more comprehensive report on a topic. That’s probably still faster than running multiple searches and manually reviewing the results, and it could leave traditional search as an increasingly niche service, even in a world with more choices.

“Will the 10 blue links continue to exist in 10 years?” Pujol asked. “Actually, one question would be, does it even exist now? In 10 years, [search] will have evolved into more of an AI conversation behavior or even agentic. That is probably the case. What, for sure, will continue to exist is the need to search. Search is a verb, an action that you do, and whether you will do it directly or whether it will be done through an agent, it’s a search engine.”

Kagi’s Prelovac sees AI becoming the default way we access information in the long term, but his search engine doesn’t force you to use it. On Kagi, you can expand the AI box for your searches and ask follow-ups, and the AI will open automatically if you use a question mark in your search. But that’s just the start.

“You watch Star Trek, nobody’s clicking on links there—I do believe in that vision in science fiction movies,” Prelovac said. “I don’t think my daughter will be clicking links in 10 years. The only question is if the current technology will be the one that gets us there. LLMs have inherent flaws. I would even tend to say it’s likely not going to get us to Star Trek.”

If we think of AI mainly as a way to search for information, the future becomes murky. With generative AI in the driver’s seat, questions of authority and accuracy may be left to language models that often behave in unpredictable and difficult-to-understand ways. Whether we’re headed for an AI boom or bust—for continued Google dominance or a new era of choice—we’re facing fundamental changes to how we access information.

Maybe if we get those thousands of search startups, there will be a few that specialize in 10 blue links. We can only hope.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google’s nightmare: How a search spinoff could remake the web Read More »

reddit-sues-anthropic-over-ai-scraping-that-retained-users’-deleted-posts

Reddit sues Anthropic over AI scraping that retained users’ deleted posts

Of particular note, Reddit pointed out that Anthropic’s Claude models will help power Amazon’s revamped Alexa, following about $8 billion in Amazon investments in the AI company since 2023.

“By commercially licensing Claude for use in several of Amazon’s commercial offerings, Anthropic reaps significant profit from a technology borne of Reddit content,” Reddit alleged, and “at the expense of Reddit.” Anthropic’s unauthorized scraping also burdens Reddit’s servers, threatening to degrade the user experience and costing Reddit additional damages, Reddit alleged.

To rectify alleged harms, Reddit is hoping a jury will award not just damages covering Reddit’s alleged losses but also punitive damages due to Anthropic’s alleged conduct that is “willful, malicious, and undertaken with conscious disregard for Reddit’s contractual obligations to its users and the privacy rights of those users.”

Without an injunction, Reddit users allegedly have “no way of knowing” if Anthropic scraped their data, Reddit alleged. They also are “left to wonder whether any content they deleted after Claude began training on Reddit data nevertheless remains available to Anthropic and the likely tens of millions (and possibly growing) of Claude users,” Reddit said.

In a statement provided to Ars, Anthropic’s spokesperson confirmed that the AI company plans to fight Reddit’s claims.

“We disagree with Reddit’s claims and will defend ourselves vigorously,” Anthropic’s spokesperson said.

Amazon declined to comment. Reddit did not immediately respond to Ars’ request to comment. But Reddit’s chief legal officer, Ben Lee, told The New York Times that Reddit “will not tolerate profit-seeking entities like Anthropic commercially exploiting Reddit content for billions of dollars without any return for redditors or respect for their privacy.”

“AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data,” Lee said. “Licensing agreements enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content.”

Reddit sues Anthropic over AI scraping that retained users’ deleted posts Read More »

samsung-teams-up-with-glance-to-use-your-face-in-ai-generated-lock-screen-ads

Samsung teams up with Glance to use your face in AI-generated lock screen ads

On an average day, you might unlock or look at your phone dozens of times, which makes the lock screen a hot property for advertising. Ad tech company Glance has been taking advantage of that for years with its ad-laden lock screen experiences, but it’s going further in the age of AI. Samsung and Glance have teamed up to deliver a new “AI shopping” experience that uses a selfie to create custom fashion ads. This feature is rolling out to numerous Samsung phones in the next month.

Glance has been around for a while—its non-AI lock screen experience has been bundled on various phones from Samsung, Motorola, and others. Before the AI era, Glance lured people in with promises of pretty pictures and news alerts, which came with a side of ads and tracking. The new Glance AI feature has all that, but it adds an unsettling face-stealing layer to the experience.

The AI-infused Glance will arrive on Samsung phones as both a standalone app and a fully integrated lock screen. Thankfully, this is a fully opt-in experience. If you never open or set up Glance, you can keep using the normal lock screen on your phone.

Credit: Glance

Should you choose to wade into the murky waters of AI shopping, Glance will have you take a selfie and provide some basic body type details. From there, it uses Google Gemini and Imagen to create fashion ads tailored to you—because they are you. Your lock screen will be populated with images of you “in outfits and destinations [you] would never imagine.” Naturally, you will be able to buy the looks chosen for you with a tap, which fills Glance’s coffers.

Samsung teams up with Glance to use your face in AI-generated lock screen ads Read More »

the-gmail-app-will-now-create-ai-summaries-whether-you-want-them-or-not

The Gmail app will now create AI summaries whether you want them or not


This block of AI-generated text will soon appear automatically in some threads.

Credit: Google

Summarizing content is one of the more judicious applications of generative AI technology, dating back to the 2017 paper on the transformer architecture. Generative AI has since been employed to create chatbots that will seemingly answer any question, despite their tendency to make mistakes. Grounding the AI output with a few emails usually yields accurate results, but do you really need a robot to summarize your emails? Unless you’re getting novels in your inbox, you can probably just read a few paragraphs.

If you’re certain you don’t want any part of this, there is a solution. Automatic generation of AI summaries is controlled by Gmail’s “smart features.” You (or an administrator of your managed account) can disable that. Open the app settings, select the account, and uncheck the smart features toggle.

For most people, Gmail’s smart features are enabled out of the box, but they’re off by default in Europe and Japan. When you disable them, you won’t see the automatic AI summaries, but there will still be a button to generate those summaries with Gemini. Be aware that smart features also control high-priority notifications, package tracking, Smart Compose, Smart Reply, and nudges. If you can live without all of those features in the mobile app, you can avoid automatic AI summaries. The app will occasionally pester you to turn smart features back on, though.

The Gmail app will now create AI summaries whether you want them or not Read More »

gemini-in-google-drive-may-finally-be-useful-now-that-it-can-analyze-videos

Gemini in Google Drive may finally be useful now that it can analyze videos

Google’s rapid adoption of AI has seen the Gemini “sparkle” icon become an omnipresent element in almost every Google product. It’s there to summarize your email, add items to your calendar, and more—if you trust it to do those things. Gemini is also integrated with Google Drive, where it’s gaining a new feature that could make it genuinely useful: Google’s AI bot will soon be able to watch videos stored in your Drive so you don’t have to.

Gemini is already accessible in Drive, with the ability to summarize documents or folders, gather and analyze data, and expand on the topics covered in your documents. Google says the next step is plugging videos into Gemini, saving you from wasting time scrubbing through a file just to find something of interest.

Using a chatbot to analyze and manipulate text doesn’t always make sense—after all, it’s not hard to skim an email or short document. It can take longer to interact with a chatbot, which might not add any useful insights. Video is different because watching is a linear process in which you are presented with information at the pace the video creator sets. You can change playback speed or rewind to catch something you missed, but that’s more arduous than reading something at your own pace. So Gemini’s video support in Drive could save you real time.

Suppose you have a recorded meeting in video form uploaded to Drive. You could go back and rewatch it to take notes or refresh your understanding of a particular exchange. Or, Google suggests, you can ask Gemini to summarize the video and tell you what’s important. This could be a great alternative, as grounding AI output with a specific data set or file tends to make it more accurate. Naturally, you should still maintain healthy skepticism of what the AI tells you about the content of your video.

Gemini in Google Drive may finally be useful now that it can analyze videos Read More »

google-photos-turns-10,-celebrates-with-new-ai-infused-photo-editor

Google Photos turns 10, celebrates with new AI-infused photo editor

The current incarnation of Google Photos was not Google’s first image management platform, but it’s been a big success. Ten years on, Google Photos remains one of Google’s most popular products, and it’s getting a couple of new features to celebrate its 10th year in operation. You’ll be able to share albums a bit more easily, and editing tools are getting a boost with, you guessed it, AI.

Google Photos made a splash in 2015 when it broke free of the spiraling Google+ social network, offering people supposedly unlimited free storage for compressed images. Of course, that was too good to last. In 2021, Google began limiting photo uploads to 15GB for free users, sharing the default account-level storage with other services like Gmail and Drive. Today, Google encourages everyone to pay for a Google One subscription to get more space, which is a bit of a bummer. Regardless, people still use Google Photos extensively.

According to the company, Photos has more than 1.5 billion monthly users, and it stores more than 9 trillion photos and videos. When using the Photos app on a phone, you are prompted to automatically upload your camera roll, which makes it easy to keep all your memories backed up (and edge ever closer to the free storage limit). Photos has also long offered almost magical search capabilities, allowing you to find images by searching for what’s in them. That may seem less impressive now, but it was revolutionary a decade ago. Google says users perform over 370 million searches in Photos each month.

An AI anniversary

Google is locked in with AI as it reimagines most of its products and services with Gemini. As it refreshes Photos for its 10th anniversary, the editor is getting a fresh dose of AI. And this may end up being one of Google’s most-used AI features—more than 210 million images are edited in Photos every month.

Google Photos turns 10, celebrates with new AI-infused photo editor Read More »