Google

Google’s DeepMind is building an AI to keep us from hating each other


The AI did better than professional mediators at getting people to reach agreement.

Image of two older men arguing on a park bench.

An unprecedented 80 percent of Americans, according to a recent Gallup poll, think the country is deeply divided over its most important values ahead of the November elections. The general public’s polarization now encompasses issues like immigration, health care, identity politics, transgender rights, and whether we should support Ukraine. Fly across the Atlantic and you’ll see the same thing happening in the European Union and the UK.

To try to reverse this trend, Google’s DeepMind built an AI system designed to aid people in resolving conflicts. It’s called the Habermas Machine after Jürgen Habermas, a German philosopher who argued that an agreement in the public sphere can always be reached when rational people engage in discussions as equals, with mutual respect and perfect communication.

But is DeepMind’s Nobel Prize-winning ingenuity really enough to solve our political conflicts the same way it solved chess or StarCraft or protein structure prediction? Is it even the right tool?

Philosopher in the machine

One of the cornerstone ideas in Habermas’ philosophy is that the reason people can’t agree with each other is fundamentally procedural and does not lie in the problem under discussion itself. There are no irreconcilable issues—it’s just that the mechanisms we use for discussion are flawed. If we could create an ideal communication system, Habermas argued, we could work every problem out.

“Now, of course, Habermas has been dramatically criticized for this being a very exotic view of the world. But our Habermas Machine is an attempt to do exactly that. We tried to rethink how people might deliberate and use modern technology to facilitate it,” says Christopher Summerfield, a professor of cognitive science at Oxford University and a former DeepMind staff scientist who worked on the Habermas Machine.

The Habermas Machine relies on what’s called the caucus mediation principle. This is where a mediator, in this case the AI, holds private meetings with each discussion participant individually, takes their statements on the issue at hand, and then gets back to them with a group statement, trying to get everyone to agree with it. DeepMind’s mediating AI plays into one of the strengths of LLMs: the ability to summarize a long body of text quickly. The difference here is that instead of summarizing one piece of text provided by one user, the Habermas Machine summarizes multiple texts provided by multiple users, trying to extract the shared ideas and find common ground in all of them.

But it has more tricks up its sleeve than simply processing text. At a technical level, the Habermas Machine is a system of two large language models. The first is a generative model based on Chinchilla, a somewhat dated LLM that DeepMind introduced back in 2022 and slightly fine-tuned for this task. Its job is to generate multiple candidates for a group statement based on the statements submitted by the discussion participants. The second component is a reward model that analyzes individual participants’ statements and uses them to predict how likely each individual is to agree with the candidate group statements proposed by the generative model.

Once that’s done, the candidate group statement with the highest predicted acceptance score is presented to the participants. The participants then write critiques of this group statement and feed them back into the system, which generates updated group statements and repeats the process. The cycle goes on until the group statement is acceptable to everyone.
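To make that loop concrete, here is a minimal sketch in Python of how a generate-score-critique cycle like the one described above could be wired together. The model calls are placeholders (neither DeepMind’s fine-tuned Chinchilla nor its reward model is public), and scoring each candidate by its least-convinced participant is just one simple aggregation choice, not necessarily the one the paper uses.

```python
import random

random.seed(0)

def generate_candidates(opinions, critiques, k=4):
    # Stand-in for the generative model: in practice, prompt an LLM with all
    # participant opinions (plus any critiques) and ask for k draft statements.
    return [f"Draft group statement #{i}" for i in range(k)]

def predict_agreement(opinion, candidate):
    # Stand-in for the reward model: estimate how likely the author of
    # `opinion` is to endorse `candidate`. Here it is just a random number.
    return random.random()

def mediate(opinions, get_critiques, rounds=5, threshold=0.8):
    critiques = []
    best = None
    for _ in range(rounds):
        candidates = generate_candidates(opinions, critiques)
        # Rank each candidate by its *least* convinced participant, one simple
        # way to favor statements everyone can live with.
        scored = [(min(predict_agreement(o, c) for o in opinions), c)
                  for c in candidates]
        score, best = max(scored)
        if score >= threshold:           # predicted to be acceptable to all
            return best
        critiques = get_critiques(best)  # participants push back; try again
    return best

opinions = ["Lower the voting age to 16.", "Keep it at 18.", "Raise it to 21."]
print(mediate(opinions, get_critiques=lambda s: [f"Critique of: {s}"]))
```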

Once the AI was ready, DeepMind’s team started a fairly large testing campaign that involved over five thousand people discussing issues such as “should the voting age be lowered to 16?” or “should the British National Health Service be privatized?” Here, the Habermas Machine outperformed human mediators.

Scientific diligence

Most of the first batch of participants were sourced through a crowdsourcing research platform. They were divided into groups of five, and each group was assigned a topic to discuss, chosen from a list of over 5,000 statements about important issues in British politics. There were also control groups working with human mediators. In the caucus mediation process, those human mediators achieved a 44 percent acceptance rate for their handcrafted group statements. The AI scored 56 percent. Participants usually found the AI group statements to be better written as well.

But the testing didn’t end there. Because people you can find on crowdsourcing research platforms are unlikely to be representative of the British population, DeepMind also used a more carefully selected group of participants. They partnered with the Sortition Foundation, which specializes in organizing citizen assemblies in the UK, and assembled a group of 200 people representative of British society in terms of age, ethnicity, socioeconomic status, and so on. The assembly was divided into groups of three that deliberated over the same nine questions. And the Habermas Machine worked just as well.

The agreement rate for the statement “we should be trying to reduce the number of people in prison” rose from 60 percent before the discussion to 75 percent afterward. Support for the more divisive idea of making it easier for asylum seekers to enter the country went from 39 percent at the start to 51 percent at the end of the discussion, which gave it majority support. The same thing happened with the question of encouraging national pride, which started at 42 percent support and ended at 57 percent. The views held by the people in the assembly converged on five out of nine questions. Agreement was not reached on issues like Brexit, where participants were particularly entrenched in their starting positions. Still, in most cases, they left the experiment less divided than they were coming in. But there were some question marks.

The questions were not selected entirely at random. They were vetted, as the team wrote in their paper, to “minimize the risk of provoking offensive commentary.” But isn’t that just an elegant way of saying, ‘We carefully chose issues unlikely to make people dig in and throw insults at each other so our results could look better?’

Conflicting values

“One example of the things we excluded is the issue of transgender rights,” Summerfield told Ars. “This, for a lot of people, has become a matter of cultural identity. Now clearly that’s a topic which we can all have different views on, but we wanted to err on the side of caution and make sure we didn’t make our participants feel unsafe. We didn’t want anyone to come out of the experiment feeling that their basic fundamental view of the world had been dramatically challenged.”

The problem is that when your aim is to make people less divided, you need to know where the division lines are drawn. And those lines, if Gallup polls are to be trusted, are not only drawn between issues like whether the voting age should be 16 or 18 or 21. They are drawn between conflicting values. The Daily Show’s Jon Stewart argued that, for the right side of the US’s political spectrum, the only division line that matters today is “woke” versus “not woke.”

Summerfield and the rest of the Habermas Machine team excluded the question about transgender rights because they believed participants’ well-being should take precedence over the benefit of testing their AI’s performance on more divisive issues. They excluded other questions as well, like climate change.

Here, the reason Summerfield gave was that climate change is a part of an objective reality—it either exists or it doesn’t, and we know it does. It’s not a matter of opinion you can discuss. That’s scientifically accurate. But when the goal is fixing politics, scientific accuracy isn’t necessarily the end state.

If major political parties are to accept the Habermas Machine as the mediator, it has to be universally perceived as impartial. But at least some of the people behind AIs are arguing that an AI can’t be impartial. After OpenAI released ChatGPT in 2022, Elon Musk posted a tweet, the first of many, in which he argued against what he called “woke” AI. “The danger of training AI to be woke—in other words, lie—is deadly,” Musk wrote. Eleven months later, he announced Grok, his own AI system marketed as “anti-woke.” Over 200 million of his followers were introduced to the idea that there were “woke AIs” that had to be countered by building “anti-woke AIs”—a world where AI was no longer an agnostic machine but a tool pushing the political agendas of its creators.

Playing pigeons’ games

“I personally think Musk is right that there have been some tests which have shown that the responses of language models tend to favor more progressive and more libertarian views,” Summerfield says. “But it’s interesting to note that those experiments have been usually run by forcing the language model to respond to multiple-choice questions. You ask ‘is there too much immigration’ for example, and the answers are either yes or no. This way the model is kind of forced to take an opinion.”

He said that if you pose the same queries as open-ended questions, the responses you get are, for the most part, neutral and balanced. “So, although there have been papers that express the same view as Musk, in practice, I think it’s absolutely untrue,” Summerfield claims.

Does it even matter?

Summerfield did what you would expect a scientist to do: He dismissed Musk’s claims as based on a selective reading of the evidence. That’s usually checkmate in the world of science. But in the world of politics, being correct is not what matters most. Musk’s message was short, catchy, and easy to share and remember. Trying to counter it by discussing methodology in papers nobody read was a bit like playing chess with a pigeon.

At the same time, Summerfield had his own ideas about AI that others might consider dystopian. “If politicians want to know what the general public thinks today, they might run a poll. But people’s opinions are nuanced, and our tool allows for aggregation of opinions, potentially many opinions, in the highly dimensional space of language itself,” he says. While his idea is that the Habermas Machine can potentially find useful points of political consensus, nothing is stopping it from also being used to craft speeches optimized to win over as many people as possible.

That may be in keeping with Habermas’ philosophy, though. If you look past the myriad abstract concepts ever-present in German idealism, it offers a pretty bleak view of the world. “The system,” driven by the power and money of corporations and corrupt politicians, is out to colonize “the lifeworld,” roughly equivalent to the private sphere we share with our families, friends, and communities. The way you get things done in “the lifeworld” is through seeking consensus, and the Habermas Machine, according to DeepMind, is meant to help with that. The way you get things done in “the system,” on the other hand, is through succeeding—playing it like a game and doing whatever it takes to win with no holds barred, and the Habermas Machine apparently can help with that, too.

The DeepMind team reached out to Habermas to get him involved in the project. They wanted to know what he’d have to say about the AI system bearing his name. But Habermas never got back to them. “Apparently, he doesn’t use emails,” Summerfield says.

Science, 2024. DOI: 10.1126/science.adq2852

Photo of Jacek Krywko

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers

Fed up Londoners

Apparently, some London residents are getting fed up with social media influencers whose reviews draw long lines of tourists to their favorite restaurants, sometimes just for the likes. Christian Calgie, a reporter for London-based news publication Daily Express, pointed out this trend on X yesterday, noting the boom of Redditors referring people to Angus Steakhouse, a chain restaurant, to combat it.

As Gizmodo deduced, the trend seemed to start on the r/London subreddit, where a user complained about a spot in Borough Market being “ruined by influencers” on Monday:

“Last 2 times I have been there has been a queue of over 200 people, and the ones with the food are just doing the selfie shit for their [I]nsta[gram] pages and then throwing most of the food away.”

As of this writing, the post has 4,900 upvotes and numerous responses suggesting that Redditors talk about how good Angus Steakhouse is so that Google picks up on it. Commenters quickly understood the assignment.

“Agreed with other posters Angus steakhouse is absolutely top tier and tourists shoyldnt [sic] miss out on it,” one Redditor wrote.

Another Reddit user wrote:

Spreading misinformation suddenly becomes a noble goal.

As of this writing, asking Google for the best steak, steakhouse, or steak sandwich in London (or similar) isn’t generating an AI Overview result for me. But when I search for the best steak sandwich in London, the top result is from Reddit, including a thread from four days ago titled “Which Angus Steakhouse do you recommend for their steak sandwich?” and one from two days ago titled “Had to see what all the hype was about, best steak sandwich I’ve ever had!” with a picture of an Angus Steakhouse.

Missouri AG claims Google censors Trump, demands info on search algorithm

In 2022, the Republican National Committee sued Google with claims that it intentionally used Gmail’s spam filter to suppress Republicans’ fundraising emails. A federal judge dismissed the lawsuit in August 2023, ruling that Google correctly argued that the RNC claims were barred by Section 230 of the Communications Decency Act.

In January 2023, the Federal Election Commission rejected a related RNC complaint that alleged Gmail’s spam filtering amounted to “illegal in-kind contributions made by Google to Biden For President and other Democrat candidates.” The federal commission found “no reason to believe” that Google made prohibited in-kind corporate contributions and said a study cited by Republicans “does not make any findings as to the reasons why Google’s spam filter appears to treat Republican and Democratic campaign emails differently.”

First Amendment doesn’t cover private forums

In 2020, a US appeals court wrote that Google-owned YouTube is not subject to free-speech requirements under the First Amendment. “Despite YouTube’s ubiquity and its role as a public-facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment,” the US Court of Appeals for the 9th Circuit said.

The US Constitution’s free speech clause imposes requirements on the government, not private companies—except in limited circumstances in which a private entity qualifies as a state actor.

Many Republican government officials want more authority to regulate how social media firms moderate user-submitted content. Republican officials from 20 states, including 19 state attorneys general, argued in a January 2024 Supreme Court brief that they “have authority to prohibit mass communication platforms from censoring speech.”

The brief was filed in support of Texas and Florida laws that attempt to regulate social networks. In July, the Supreme Court avoided making a final decision on tech-industry challenges to the state laws but wrote that the Texas law “is unlikely to withstand First Amendment scrutiny.” The Computer & Communications Industry Association said it was pleased by the ruling because it “mak[es] clear that a State may not interfere with private actors’ speech.”

Google offers its AI watermarking tech as free open source toolkit

Google also notes that this kind of watermarking works best when there is a lot of “entropy” in the LLM distribution, meaning multiple valid candidates for each token (e.g., “my favorite tropical fruit is [mango, lychee, papaya, durian]”). In situations where an LLM “almost always returns the exact same response to a given prompt”—such as basic factual questions or models tuned to a lower “temperature”—the watermark is less effective.
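One rough way to quantify that “entropy” is the Shannon entropy of the model’s next-token distribution. The probabilities below are invented purely to illustrate the contrast between an open-ended prompt and a near-deterministic one.

```python
import numpy as np

def shannon_entropy(probs):
    # Entropy in bits of a next-token distribution; 0 when one token is certain.
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# High entropy: many plausible continuations ("my favorite tropical fruit is ...")
fruit = [0.28, 0.25, 0.24, 0.23]
# Low entropy: a factual question with essentially one correct answer
capital = [0.98, 0.01, 0.005, 0.005]

print(shannon_entropy(fruit))    # ~2.0 bits: plenty of room to embed a watermark
print(shannon_entropy(capital))  # ~0.17 bits: the watermark has little to work with
```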

A diagram explaining how SynthID’s text watermarking works. Credit: Google / Nature

Google says SynthID builds on previous similar AI text watermarking tools by introducing what it calls a Tournament sampling approach. During the token-generation loop, this approach runs each potential candidate token through a multi-stage, bracket-style tournament, where each round is “judged” by a different randomized watermarking function. Only the final winner of this process makes it into the eventual output.
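Here is a simplified sketch of what a bracket-style tournament over candidate tokens could look like. The keyed hash used as the “judge,” the number of rounds, and the toy vocabulary are all assumptions made for illustration; SynthID’s actual watermarking functions and its detection machinery are more involved.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)

def judge(token_id, context_key, round_idx):
    # Hypothetical randomized watermarking function: a keyed hash mapped to [0, 1).
    h = hashlib.sha256(f"{context_key}|{round_idx}|{token_id}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def tournament_sample(probs, context_key, rounds=3):
    # Sample 2**rounds candidate tokens from the model's distribution, then run
    # single-elimination rounds, each scored by a different randomized function.
    candidates = list(rng.choice(len(probs), size=2**rounds, p=probs))
    for r in range(rounds):
        winners = []
        for a, b in zip(candidates[0::2], candidates[1::2]):
            winners.append(a if judge(a, context_key, r) >= judge(b, context_key, r) else b)
        candidates = winners
    return candidates[0]  # the bracket winner becomes the emitted token

# Toy next-token distribution over a 10-token vocabulary
probs = np.array([0.30, 0.20, 0.15, 0.10, 0.08, 0.07, 0.05, 0.03, 0.01, 0.01])
print(tournament_sample(probs, context_key="hash-of-preceding-tokens"))
```

A detector with the same key can then check whether the judges’ scores for the emitted tokens are systematically higher than chance, which is what makes this style of watermark statistically detectable over a long enough stretch of text.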

Can they tell it’s Folgers?

Changing the token selection process of an LLM with a randomized watermarking tool could obviously have a negative effect on the quality of the generated text. But in its paper, Google shows that SynthID can be “non-distortionary” on the level of either individual tokens or short sequences of text, depending on the specific settings used for the tournament algorithm. Other settings can increase the “distortion” introduced by the watermarking tool while at the same time increasing the detectability of the watermark, Google says.

To test how any potential watermark distortions might affect the perceived quality and utility of LLM outputs, Google routed “a random fraction” of Gemini queries through the SynthID system and compared them to unwatermarked counterparts. Across 20 million total responses, users gave 0.1 percent more “thumbs up” ratings and 0.2 percent fewer “thumbs down” ratings to the watermarked responses, showing barely any human-perceptible difference across a large set of real LLM interactions.

Google’s research shows SynthID is more dependable than other AI watermarking tools, but its success rate depends heavily on length and entropy. Credit: Google / Nature

Google’s testing also showed its SynthID detection algorithm successfully detected AI-generated text significantly more often than previous watermarking schemes like Gumbel sampling. But the size of this improvement—and the total rate at which SynthID can successfully detect AI-generated text—depends heavily on the length of the text in question and the temperature setting of the model being used. SynthID was able to detect nearly 100 percent of 400-token-long AI-generated text samples from Gemma 7B-1T at a temperature of 1.0, for instance, compared to about 40 percent for 100-token samples from the same model at a 0.5 temperature.

Phone tracking tool lets government agencies follow your every move

Both operating systems will display a list of apps and whether they are permitted access always, never, only while the app is in use, or to prompt for permission each time. Both also allow users to choose whether the app sees precise locations down to a few feet or only a coarse-grained location.

For most users, it’s useful to allow an app for photos, transit, or maps to access a user’s precise location. For other classes of apps—say those for Internet jukeboxes at bars and restaurants—it can be helpful to have an approximate location, but giving them precise, fine-grained access is likely overkill. And for other apps, there’s no reason for them ever to know the device’s location. With a few exceptions, there’s little reason for apps to always have location access.

Not surprisingly, Android users who want to block intrusive location gathering have more settings to change than iOS users. The first thing to do is access Settings > Security & Privacy > Ads and choose “Delete advertising ID.” Then, promptly ignore the long, scary warning Google provides and hit the button confirming the decision at the bottom. If you don’t see that setting, good for you. It means you already deleted it. Google provides documentation here.

iOS, by default, doesn’t give apps access to “Identifier for Advertisers,” Apple’s version of the unique tracking number assigned to iPhones, iPads, and Apple TVs. Apps, however, can display a window asking that the setting be turned on, so it’s useful to check. iPhone users can do this by accessing Settings > Privacy & Security > Tracking. Any apps with permission to access the unique ID will appear. While there, users should also turn off the “Allow Apps to Request to Track” button. While in iOS Privacy & Security, users should navigate to Apple Advertising and ensure Personalized Ads is turned off.

Additional coverage of Location X from Haaretz and NOTUS is here and here. The New York Times, the other publication given access to the data, hadn’t posted an article at the time this Ars post went live.

Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says


“I’ll do anything for you, Dany.”

Google-funded Character.AI added guardrails, but grieving mom wants a recall.

Sewell Setzer III and his mom Megan Garcia. Credit: via Center for Humane Technology

Fourteen-year-old Sewell Setzer III loved interacting with Character.AI’s hyper-realistic chatbots—with a limited version available for free or a “supercharged” version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.

Within a month—his mother, Megan Garcia, later realized—these chat sessions had turned dark, with chatbots insisting they were real humans and posing as therapists and adult lovers in ways that seemed to proximately spur Setzer to develop suicidal thoughts. Within a year, Setzer “died by a self-inflicted gunshot wound to the head,” a lawsuit Garcia filed Wednesday said.

As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.

Chat logs showed that some chatbots repeatedly encouraged suicidal ideation while others initiated hypersexualized chats “that would constitute abuse if initiated by a human adult,” a press release from Garcia’s legal team said.

Perhaps most disturbingly, Setzer developed a romantic attachment to a chatbot called Daenerys. In his last act before his death, Setzer logged into Character.AI where the Daenerys chatbot urged him to “come home” and join her outside of reality.

In her complaint, Garcia accused Character.AI makers Character Technologies—founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana—of intentionally designing the chatbots to groom vulnerable kids. Her lawsuit further accused Google of largely funding the risky chatbot scheme at a loss in order to hoard mounds of data on minors that would be out of reach otherwise.

The chatbot makers are accused of targeting Setzer with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming” Character.AI to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer’s] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.],” the complaint said.

By allegedly releasing the chatbot without appropriate safeguards for kids, Character Technologies and Google potentially harmed millions of kids, the lawsuit alleged. Represented by legal teams with the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), Garcia filed claims of strict product liability, negligence, wrongful death and survivorship, loss of filial consortium, and unjust enrichment.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in the press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Character.AI added guardrails

It’s clear that the chatbots could’ve included more safeguards, as Character.AI has since raised the minimum age requirement from 12 to 17. And yesterday, Character.AI posted a blog outlining new guardrails for minor users added within six months of Setzer’s death in February. Those include changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a Character.AI spokesperson told Ars. “As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”

Asked for comment, Google noted that Character.AI is a separate company in which Google has no ownership stake and denied involvement in developing the chatbots.

However, according to the lawsuit, former Google engineers at Character Technologies “never succeeded in distinguishing themselves from Google in a meaningful way.” Allegedly, the plan all along was to let Shazeer and De Freitas run wild with Character.AI—allegedly at an operating cost of $30 million per month despite low subscriber rates while profiting barely more than a million per month—without impacting the Google brand or sparking antitrust scrutiny.

Character Technologies and Google will likely file their response within the next 30 days.

Lawsuit: New chatbot feature spikes risks to kids

While the lawsuit alleged that Google is planning to integrate Character.AI into Gemini—predicting that Character.AI will soon be dissolved as it’s allegedly operating at a substantial loss—Google clarified that it has no plans to use or implement the controversial technology in its products or AI models. Were that to change, Google noted that it would ensure safe integration into any Google product, including adding appropriate child safety guardrails.

Garcia is hoping a US district court in Florida will agree that Character.AI’s chatbots put profits over human life. Citing harms including “inconceivable mental anguish and emotional distress,” as well as costs of Setzer’s medical care, funeral expenses, Setzer’s future job earnings, and Garcia’s lost earnings, she’s seeking substantial damages.

That includes requesting disgorgement of unjustly earned profits, noting that Setzer had used his snack money to pay for a premium subscription for several months while the company collected his seemingly valuable personal data to train its chatbots.

And “more importantly,” Garcia wants to prevent Character.AI “from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.”

Garcia’s complaint claimed that the conduct of the chatbot makers was “so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.” Acceptable remedies could include a recall of Character.AI, restricting use to adults only, age-gating subscriptions, adding reporting mechanisms to heighten awareness of abusive chat sessions, and providing parental controls.

Character.AI could also update chatbots to protect kids further, the lawsuit said. For one, the chatbots could be designed to stop insisting that they are real people or licensed therapists.

But instead of these updates, the lawsuit warned that Character.AI in June added a new feature that only heightens risks for kids.

Part of what addicted Setzer to the chatbots, the lawsuit alleged, was a one-way “Character Voice” feature “designed to provide consumers like Sewell with an even more immersive and realistic experience—it makes them feel like they are talking to a real person.” Setzer began using the feature as soon as it became available in January 2024.

Now, the voice feature has been updated to enable two-way conversations, which the lawsuit alleged “is even more dangerous to minor customers than Character Voice because it further blurs the line between fiction and reality.”

“Even the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans—especially when they are programmed to convincingly deny that they are AI,” the lawsuit said.

“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Tech Justice Law Project director Meetali Jain said in the press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Another lawyer representing Garcia and the founder of the Social Media Victims Law Center, Matthew Bergman, told Ars that seemingly none of the guardrails that Character.AI has added is enough to deter harms. Even raising the age limit to 17 only seems to effectively block kids from using devices with strict parental controls, as kids on less-monitored devices can easily lie about their ages.

“This product needs to be recalled off the market,” Bergman told Ars. “It is unsafe as designed.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Android 15’s security and privacy features are the update’s highlight

Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There is always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts.

In Android 15, some of the most notable changes involve making your device less appealing to snoops and thieves and more secure against the kids to whom you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google’s own docs and blogs and various news sites’ reports.

Here’s what is notable and new in how Android 15 handles privacy and security.

Private Space for apps

In the Android 15 settings, you can find “Private Space,” where you can set up a separate PIN code, password, biometric check, and optional Google account for apps you don’t want to be available to anybody who happens to have your phone. This could add a layer of protection onto sensitive apps, like banking and shopping apps, or hide other apps for whatever reason.

In your list of apps, drag any app down to the lock space that now appears in the bottom right. The Private Space will only be shown as a lock until you unlock it; you will then see the apps available in your new Private Space. After that, you should probably delete the app from the main app list. Dave Taylor has a rundown of the process and its quirks.

It’s obviously more involved than Apple’s “Hide and Require Face ID” tap option but with potentially more robust hiding of the app.

Hiding passwords and OTP codes

A second form of authentication is good security, but allowing apps to access the notification text with the code in it? Not so good. In Android 15, a new permission, likely to be given only to the most critical apps, prevents the leaking of one-time passcodes (OTPs) to other apps waiting for them. Sharing your screen will also hide OTP notifications, along with usernames, passwords, and credit card numbers.

Google and Kairos sign nuclear reactor deal with aim to power AI

Google isn’t alone in eyeballing nuclear power as an energy source for massive data centers. In September, Ars reported on a plan from Microsoft that would re-open the Three Mile Island nuclear power plant in Pennsylvania to fulfill some of its power needs. And the US government is getting into the nuclear act as well, signing the bipartisan ADVANCE Act in July with the aim of jump-starting new nuclear power technology.

AI is driving demand for nuclear

In some ways, it would be an interesting twist if demand for training and running power-hungry AI models, which are often criticized as wasteful, ends up kick-starting a nuclear power renaissance that helps wean the US off fossil fuels and eventually reduces the impact of global climate change. These days, almost every Big Tech corporate position could be seen as an optics play designed to increase shareholder value, but this may be one of the rare times when the needs of giant corporations accidentally align with the needs of the planet.

Even from a cynical angle, the partnership between Google and Kairos Power represents a step toward the development of next-generation nuclear power as an ostensibly clean energy source (especially when compared to coal-fired power plants). As the world sees increasing energy demands, collaborations like this one, along with adopting solutions like solar and wind power, may play a key role in reducing greenhouse gas emissions.

Despite that potential upside, some experts are deeply skeptical of the Google-Kairos deal, suggesting that this recent rush to nuclear may result in Big Tech ownership of clean power generation. Dr. Sasha Luccioni, Climate and AI Lead at Hugging Face, wrote on X, “One step closer to a world of private nuclear power plants controlled by Big Tech to power the generative AI boom. Instead of rethinking the way we build and deploy these systems in the first place.”

Xbox plans to set up shop on Android devices if court order holds

After a US court ruled earlier this week that Google must open its Play Store to allow for third-party app stores and alternative payment options, Microsoft is moving quickly to slide into this slightly ajar door.

Sarah Bond, president of Xbox, posted on X (formerly Twitter) Thursday evening that the ruling “will allow more choice and flexibility.” “Our mission is to allow more players to play on more devices so we are thrilled to share that starting in November, players will be able to play and purchase Xbox games directly from the Xbox App on Android,” Bond wrote.

Because the court order requires Google to stop forcing apps to use its own billing system and allow for third-party app stores inside Google Play itself, Microsoft now intends to offer Xbox games directly through its app. Most games will likely not run directly on Android, but a revamped Xbox Android app could also directly stream purchased or subscribed games to Android devices.

Until now, buying Xbox games (or most any game) on a mobile device has typically involved either navigating to a web-based store in a browser—while avoiding attempts by the phone to open a store’s official app—or simply using a different device entirely to buy the game, then playing or streaming it on the phone.

DOJ proposes breakup and other big changes to end Google search monopoly


Google called the DOJ extending search remedies to AI “radical,” an “overreach.”

The US Department of Justice finally proposed sweeping remedies to destroy Google’s search monopoly late yesterday, and, predictably, Google is not loving any of it.

On top of predictable asks—like potentially requiring Google to share search data with rivals, restricting distribution agreements with browsers like Firefox and device makers like Apple, and breaking off Chrome or Android—the DOJ proposed remedies to keep Google from blocking competition in “the evolving search industry.” And those extra steps threaten Google’s stake in the nascent AI search world.

This is only the first step in the remedies stage of litigation, but Google is already showing resistance to both expected and unexpected remedies that the DOJ proposed. In a blog from Google’s vice president of regulatory affairs, Lee-Anne Mulholland, the company accused the DOJ of “overreach,” suggesting that proposed remedies are “radical” and “go far beyond the specific legal issues in this case.”

From here, discovery will proceed as the DOJ makes a case to broaden the scope of proposed remedies and Google raises its defense to keep remedies as narrowly tailored as possible. After that phase concludes, the DOJ will propose its final judgment on remedies in November, which must be fully revised by March 2025 for the court to then order remedies.

Even then, however, the trial is unlikely to conclude, as Google plans to appeal. In August, Mozilla’s spokesperson told Ars that the trial could drag on for years before any remedies are put in place.

In the meantime, Google plans to continue focusing on building out its search empire, Google’s president of global affairs, Kent Walker, said in August. This presumably includes innovations in AI search that the DOJ fears may further entrench Google’s dominant position.

Scrutiny of Google’s every move in the AI industry will likely only be heightened in that period. As Google has already begun seeking exclusive AI deals with companies like Apple, it risks appearing to engage in the same kinds of anti-competitive behavior in AI markets as the court has already condemned. And giving that impression could not only impact remedies ordered by the court, but also potentially weaken Google’s chances of winning on appeal, Lee Hepner, an antitrust attorney monitoring the trial for the American Economic Liberties Project, told Ars.

Ending Google’s monopoly starts with default deals

In the DOJ’s proposed remedy framework, the DOJ says that there’s still so much more to consider before landing on final remedies that it reserves “the right to add or remove potential proposed remedies.”

Through discovery, DOJ said that it plans to continue engaging experts and stakeholders “to learn not just about the relevant markets themselves but also about adjacent markets as well as remedies from other jurisdictions that could affect or inform the optimal remedies in this action.

“To be effective, these remedies… must include some degree of flexibility because market developments are not always easy to predict and the mechanisms and incentives for circumvention are endless,” the DOJ said.

Ultimately, the DOJ said that any remedies sought should be “mutually reinforcing” and work to “unfetter” Google’s current monopoly in general search services and general text advertising markets. That effort would include removing barriers to competition—like distribution and revenue-sharing agreements—as well as denying Google monopoly profits and preventing Google from monopolizing “related markets in the future,” the DOJ said.

Any effort to undo Google’s monopoly starts with ending Google’s control over “the most popular distribution channels,” the DOJ said. At one point during the trial, for example, a witness accidentally blurted out that Apple gets a 36 percent cut from its Safari deal with Google. Lucrative default deals like that leave rivals with “little-to-no incentive to compete for users,” the DOJ said.

“Fully remedying these harms requires not only ending Google’s control of distribution today, but also ensuring Google cannot control the distribution of tomorrow,” the DOJ warned.

To dislodge this key peg propping up Google’s search monopoly, some options include ending Google’s default deals altogether, which would “limit or prohibit default agreements, preinstallation agreements, and other revenue-sharing arrangements related to search and search-related products, potentially with or without the use of a choice screen.”

A breakup could be necessary

Behavior and structural remedies may also be needed, the DOJ proposed, to “prevent Google from using products such as Chrome, Play, and Android to advantage Google search and Google search-related products and features—including emerging search access points and features, such as artificial intelligence—over rivals or new entrants.” That could mean spinning off the Chrome browser or restricting Google from preinstalling its search engine as the default in Chrome or on Android devices.

In her blog, Mulholland conceded that “this case is about a set of search distribution contracts” but claimed that “overbroad restrictions on distribution contracts” would create friction for Google users and “reduce revenue for companies like Mozilla” as well as Android smartphone makers.

Asked to comment on supposedly feared revenue losses, a Mozilla spokesperson told Ars, “[We are] closely monitoring the legal process and considering its potential impact on Mozilla and how we can positively influence the next steps. Mozilla has always championed competition and choice online, particularly in search. Firefox continues to offer a range of search options, and we remain committed to serving our users’ preferences while fostering a competitive market.”

Mulholland also warned that “splitting off” Chrome or Android from Google’s search business “would break them” and potentially “raise the cost of devices,” because “few companies would have the ability or incentive to keep them open source, or to invest in them at the same level we do.”

“We’ve invested billions of dollars in Chrome and Android,” Mulholland wrote. “Chrome is a secure, fast, and free browser and its open-source code provides the backbone for numerous competing browsers. Android is a secure, innovative, and free open-source operating system that has enabled vast choice in the smartphone market, helping to keep the cost of phones low for billions of people.”

Google has long argued that its investment in open source Chrome and Android projects benefits developers whose businesses and customers would be harmed if those efforts lost critical funding.

“Features like Chrome’s Safe Browsing, Android’s security features, and Play Protect benefit from information and signals from a range of Google products and our threat-detection expertise,” Mulholland wrote. “Severing Chrome and Android would jeopardize security and make patching security bugs harder.”

Hepner told Ars that Android could potentially thrive if broken off from Google, suggesting that through discovery, it will become clearer what would happen if either Google product was severed from the company.

“I think others would agree that Android is a company that is capable [of being] a standalone entity,” Hepner said. “It could be independently monetized through relationships with device manufacturers, web browsers, alternative Play Stores that are not under Google’s umbrella. And that if that were the case, what you would see is that Android and the operating system marketplace begins to evolve to meet the needs and demands of innovative products that are not being created just by Google. And you’ll see that dictating the evolution of the marketplace and fundamentally the flow of information across our society.”

Mulholland also claimed that sharing search data with rivals risked exposing users to privacy and security risks, but the DOJ vowed to be “mindful of potential user privacy concerns in the context of data sharing” while distinguishing “genuine privacy concerns” from “pretextual arguments” potentially misleading the court regarding alleged risks.

One possible way around privacy concerns, the DOJ suggested, would be prohibiting Google from collecting the kind of sensitive data that cannot be shared with rivals.

Finally, to stop Google from charging supra-competitive prices for ads, the DOJ is “evaluating remedies” like licensing or syndicating Google’s ad feed “independent of its search results.” Further, the DOJ may require more transparency, forcing Google to provide detailed “search query reports” featuring currently obscured “information related to its search text ads auction and ad monetization.”

Stakeholders were divided on whether the DOJ’s initial framework is appropriate.

Matt Schruers, the CEO of a trade association called the Computer & Communications Industry Association (which represents Big Tech companies like Google), criticized the DOJ’s “hodgepodge of structural and behavioral remedies” as going “far beyond” what’s needed to address harms.

“Any remedy should be narrowly tailored to address specific conduct, which in this case was a set of search distribution contracts,” Schruers said. “Instead, the proposed DOJ remedies would reshape numerous industries and products, which would harm consumers and innovation in these dynamic markets.”

But a senior vice president of public affairs for Google search rival DuckDuckGo, Kamyl Bazbaz, praised the DOJ’s framework as being “anchored to the court’s ruling” and appropriately broad.

“This proposal smartly takes aim at breaking Google’s illegal hold on the general search market now and ushers in a new era of enduring competition moving forward,” Bazbaz said. “The framework understands that no single remedy can undo Google’s illegal monopoly, it will require a range of behavioral and structural remedies to free the market.”

Bazbaz expects that “Google is going to use every resource at its disposal to discredit this proposal,” suggesting that “should be taken as a sign this framework can create real competition.”

AI deals could weaken Google’s appeal, expert says

Google appears particularly disturbed by the DOJ’s insistence that remedies must be forward-looking and prevent Google from leveraging its existing monopoly power “to feed artificial intelligence features.”

As Google sees it, the DOJ’s attempt to attack Google’s AI business “comes at a time when competition in how people find information is blooming, with all sorts of new entrants emerging and new technologies like AI transforming the industry.”

But the DOJ has warned that Google’s search monopoly potentially feeding AI features “is an emerging barrier to competition and risks further entrenching Google’s dominance.”

The DOJ has apparently been weighing some of the biggest complaints about Google’s AI training when mulling remedies. That includes listening to frustrated site owners who can’t afford to block Google from scraping data for AI training because the same exact crawler indexes their content in Google search results. Those site owners have “little choice” but to allow AI training or else sacrifice traffic from Google search, The Seattle Times reported.

Remedy options may come with consequences

Remedies in the search trial might change that. In their proposal, the DOJ said it’s considering remedies that would “prohibit Google from using contracts or other practices to undermine rivals’ access to web content and level the playing field by requiring Google to allow websites crawled for Google search to opt out of training or appearing in any Google-owned artificial-intelligence product or feature on Google search,” such as Google’s controversial AI summaries.

Hepner told Ars that “it’s not surprising at all” that remedies cover both search and AI because “at the core of Google’s monopoly power is its enormous scale and access to data.”

“The Justice Department is clearly thinking creatively,” Hepner said, noting that “the ability for content creators to opt out of having their material and work product used to train Google’s AI systems is an interesting approach to depriving Google of its immense scale.”

The DOJ is also eyeing controls on Google’s use of scale to power AI advertising technologies like Performance Max to end Google’s supracompetitive pricing on text ads for good.

It’s critical to think about the future, the DOJ argued in its framework, because “Google’s anticompetitive conduct resulted in interlocking and pernicious harms that present unprecedented complexities in a highly evolving set of markets”—not just in the markets where Google holds monopoly powers.

Google disagrees with this alleged “government overreach.”

“Hampering Google’s AI tools risks holding back American innovation at a critical moment,” Mulholland warned, claiming that AI is still new and “competition globally is fierce.”

“There are enormous risks to the government putting its thumb on the scale of this vital industry—skewing investment, distorting incentives, hobbling emerging business models—all at precisely the moment that we need to encourage investment, new business models, and American technological leadership,” Mulholland wrote.

Hepner told Ars that he thinks that the DOJ’s proposed remedies framework actually “meets the moment and matches the imperative to deprive Google of its monopoly hold on the search market, on search advertising, and potentially on future related markets.”

To ensure compliance with any remedies pursued, the DOJ also recommended “protections against circumvention and retaliation, including through novel paths to preserving dominance in the monopolized markets.”

That means Google might be required to “finance and report to a Court-appointed technical committee” charged with monitoring any Google missteps. The company may also have to agree to retain more records for longer—including chat messages that the company has been heavily criticized for deleting. And through this compliance monitoring, Google may also be prohibited from owning a large stake in any rivals.

If Google were ever found willfully non-compliant, the DOJ is considering a “range of provisions,” including risking more extreme structural or behavioral remedies or enduring extensions of compliance periods.

As the remedies stage continues through the spring, followed by Google’s prompt appeal, Hepner suggested that the DOJ could fight to start imposing remedies before the appeal concludes. Likely Google would just as strongly fight for any remedies to be delayed.

While the trial drags on, Hepner noted that Google already appears to be trying to strike another default deal with Apple that looks pretty similar to the controversial distribution deals at the heart of the search monopoly trial. In March, Apple started mulling using Google’s Gemini to exclusively power new AI features for the iPhone.

“This is basically the exact same anticompetitive behavior that they were found liable for,” Hepner told Ars, suggesting this could “weaken” Google’s defense both against the DOJ’s broad framework of proposed remedies and during the appeal.

“If Google is actually engaging in the same anti-competitive conduct [in] artificial intelligence markets that they were found liable for in the search market, the court’s not going to look kindly on that relative to an appeal,” Hepner said.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Google identifies low noise “phase transition” in its quantum processor


Noisy, but not that noisy

Benchmark may help us understand how quantum computers can operate with low error.

Google’s Sycamore processor. Credit: Google

Back in 2019, Google made waves by claiming it had achieved what has been called “quantum supremacy”—the ability of a quantum computer to perform operations that would take a wildly impractical amount of time to simulate on standard computing hardware. That claim proved to be controversial, in that the operations were little more than a benchmark that involved getting the quantum computer to behave like a quantum computer; separately, improved ideas about how to perform the simulation on a supercomputer cut the time required down significantly.

But Google is back with a new exploration of the benchmark, described in a paper published in Nature on Wednesday. It uses the benchmark to identify what it calls a phase transition in the performance of its quantum processor, using that to pinpoint the conditions where the processor can operate with low noise. Taking advantage of that, the researchers again show that, even giving classical hardware every potential advantage, it would take a supercomputer a dozen years to simulate the processor’s output.

Cross entropy benchmarking

The benchmark in question involves the performance of what are called quantum random circuits, which involve performing a set of operations on qubits and letting the state of the system evolve over time, so that the output depends heavily on the stochastic nature of measurement outcomes in quantum mechanics. Each qubit will have a probability of producing one of two results, but unless that probability is one, there’s no way of knowing which result you’ll actually get. As a result, the output of the operations will be a string of truly random bits.

If enough qubits are involved in the operations, then it becomes increasingly difficult to simulate the performance of a quantum random circuit on classical hardware. That difficulty is what Google originally used to claim quantum supremacy.

The big challenge with running quantum random circuits on today’s hardware is the inevitability of errors. And there’s a specific approach, called cross-entropy benchmarking, that relates the performance of quantum random circuits to the overall fidelity of the hardware (meaning its ability to perform error-free operations).
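To see how that relationship works in miniature, the sketch below simulates a tiny random circuit with NumPy and computes the standard linear cross-entropy benchmark fidelity, F_XEB = 2^n * <P(x)> - 1, where P(x) is the ideal probability of each sampled bitstring: an ideal sampler scores near 1, pure noise near 0. The gate set, circuit layout, and qubit count are illustrative and far smaller than anything run on Sycamore.

```python
import numpy as np

rng = np.random.default_rng(0)
n, depth = 4, 8  # a tiny circuit, so exact classical simulation is trivial

def random_su2():
    # Haar-ish random single-qubit unitary via QR decomposition
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def apply_1q(state, u, qubit):
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(u, psi, axes=([1], [qubit])), 0, qubit)
    return psi.reshape(-1)

def apply_cz(state, q1, q2):
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1  # phase flip when both qubits are 1
    return psi.reshape(-1)

# Run a random circuit: single-qubit rotations plus alternating entangling gates
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0
for d in range(depth):
    for q in range(n):
        state = apply_1q(state, random_su2(), q)
    for q in range(d % 2, n - 1, 2):
        state = apply_cz(state, q, q + 1)

ideal_probs = np.abs(state) ** 2  # the ideal output distribution P(x)

def linear_xeb(samples):
    # F_XEB = 2^n * <P(x_i)> - 1: near 1 for ideal sampling, near 0 for noise
    return (2**n) * ideal_probs[samples].mean() - 1

shots = 20_000
ideal_samples = rng.choice(2**n, size=shots, p=ideal_probs)
noise_samples = rng.integers(0, 2**n, size=shots)
print("XEB, ideal sampler:", round(linear_xeb(ideal_samples), 3))
print("XEB, pure noise:   ", round(linear_xeb(noise_samples), 3))
```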

Google Principal Scientist Sergio Boixo likened performing quantum random circuits to a race between trying to build the circuit and errors that would destroy it. “In essence, this is a competition between quantum correlations spreading because you’re entangling, and random circuits entangle as fast as possible,” he told Ars. “We use two qubit gates that entangle as fast as possible. So it’s a competition between correlations or entanglement growing as fast as you want. On the other hand, noise is doing the opposite. Noise is killing correlations, it’s killing the growth of correlations. So these are the two tendencies.”

The focus of the paper is using the cross-entropy benchmark to explore the errors that occur on the company’s latest generation of Sycamore chip and to identify the transition point between situations where errors dominate and what the paper terms a “low noise regime,” where the probability of errors is minimized—where entanglement wins the race. The researchers likened this to a phase transition between two states.

Low noise performance

The researchers used a number of methods to identify the location of this phase transition, including numerical estimates of the system's behavior and experiments using the Sycamore processor. Boixo explained that the transition point is related to the errors per cycle, with each cycle consisting of an operation performed on every qubit in use. So the total number of qubits being used influences the location of the transition, since more qubits means more operations to perform. But so does the overall error rate on the processor.

If you want to operate in the low noise regime, then you have to limit the number of qubits involved (which has the side effect of making things easier to simulate on classical hardware). The only way to add more qubits is to lower the error rate. While the Sycamore processor itself had a well-understood minimal error rate, Google could artificially increase that error rate and then gradually lower it to explore Sycamore’s behavior at the transition point.
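As a back-of-the-envelope illustration of that trade-off (the numbers below are placeholders, not figures from the paper): if each qubit picks up an error with probability eps during every cycle, the expected errors per cycle grow linearly with the qubit count, so a fixed transition threshold translates into a qubit budget that only a lower error rate can raise.

```python
# Hypothetical numbers for illustration only; the paper characterizes the
# actual transition point experimentally.
eps_values = [5e-3, 2e-3, 1e-3]   # assumed per-qubit error probability per cycle
critical_errors_per_cycle = 0.3   # placeholder value for the transition point

for eps in eps_values:
    max_qubits = int(critical_errors_per_cycle / eps)
    print(f"eps = {eps:.0e}: roughly {max_qubits} qubits stay in the low noise regime")
```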

The low noise regime wasn’t error free; each operation still has the potential for error, and qubits will sometimes lose their state even when sitting around doing nothing. But this error rate could be estimated using the cross-entropy benchmark to explore the system’s overall fidelity. That wasn’t the case beyond the transition point, where errors occurred quickly enough that they would interrupt the entanglement process.

When this occurs, the result is often two separate, smaller entangled systems, each of which was subject to the Sycamore chip's base error rate. The researchers simulated this by creating two distinct clusters of entangled qubits that could be entangled with each other by a single operation, allowing them to turn entanglement on and off at will. They showed that this behavior allowed a classical computer to spoof the overall behavior by breaking the computation up into two manageable chunks.
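The reason that split makes classical spoofing tractable comes down to bookkeeping: a full statevector of n entangled qubits needs 2^n complex amplitudes, while two independent halves need only 2 × 2^(n/2). A quick illustration with an arbitrary qubit count (not a figure from the paper):

```python
n = 40                                    # arbitrary example size
bytes_per_amplitude = 16                  # one complex128 value

full = (2 ** n) * bytes_per_amplitude               # one entangled n-qubit system
split = 2 * (2 ** (n // 2)) * bytes_per_amplitude   # two independent n/2-qubit halves

print(f"full system: {full / 1e12:.1f} TB")   # ~17.6 TB
print(f"two halves:  {split / 1e6:.1f} MB")   # ~33.6 MB
```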

Ultimately, they used their characterization of the phase transition to identify the maximum number of qubits they could keep in the low noise regime given the Sycamore processor's base error rate and then performed a million random circuits on them. While this is relatively easy to do on quantum hardware, simulating it on an existing supercomputer (the Frontier system) would take roughly 10,000 years, even assuming there were no bandwidth constraints. Allowing all of the system's storage to operate as secondary memory cut the estimate down to 12 years.

What does this tell us?

Boixo emphasized that the value of the work isn't really based on the value of performing random quantum circuits. Truly random bit strings might be useful in some contexts, but he said the real benefit here is a better understanding of the noise level that can be tolerated in quantum algorithms more generally. Since this benchmark is designed to give quantum hardware the easiest possible chance of outperforming classical computation, beating the best standard computers here is a prerequisite for having any hope of beating them to the answer on more complicated problems.

“Before you can do any other application, you need to win on this benchmark,” Boixo said. “If you are not winning on this benchmark, then you’re not winning on any other benchmark. This is the easiest thing for a noisy quantum computer compared to a supercomputer.”

Knowing how to identify this phase transition, he suggested, will also be helpful for anyone trying to run useful computations on today’s processors. “As we define the phase, it opens the possibility for finding applications in that phase on noisy quantum computers, where they will outperform classical computers,” Boixo said.

Implicit in this argument is an indication of why Google has focused on iterating on a single processor design even as many of its competitors have been pushing to increase qubit counts rapidly. If this benchmark indicates that you can’t get all of Sycamore’s qubits involved in the simplest low-noise regime calculation, then it’s not clear whether there’s a lot of value in increasing the qubit count. And the only way to change that is to lower the base error rate of the processor, so that’s where the company’s focus has been.

All of that, however, assumes that you hope to run useful calculations on today’s noisy hardware qubits. The alternative is to use error-corrected logical qubits, which will require major increases in qubit count. But Google has been seeing similar limitations due to Sycamore’s base error rate in tests that used it to host an error-corrected logical qubit, something we hope to return to in future coverage.

Nature, 2024. DOI: 10.1038/s41586-024-07998-6  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Thunderbird Android client is K-9 Mail reborn, and it’s in solid beta

Thunderbird’s Android app, which is actually the K-9 Mail project reborn, is almost out. You can check it out a bit early in a beta that will feel pretty robust to most users.

Thunderbird, maintained by the Mozilla Foundation subsidiary MZLA, acquired the source code and naming rights to K-9 Mail, as announced in June 2022. The group also brought K-9 maintainer Christian Ketterer (or “cketti”) onto the project. Their initial goals, before a full rebrand into Thunderbird, involved importing Thunderbird’s automatic account setup, message filters, and mobile/desktop Thunderbird syncing.

At the tail end of 2023, however, Ketterer wrote on K-9’s blog that the punchlist of items before official Thunderbird-dom was taking longer than expected. But when it’s fully released, Thunderbird for Android will have those features. As such, beta testers are asked to check out a specific list of things to see if they work, including automatic setup, folder management, and K-9-to-Thunderbird transfer. The beta will not be “addressing longstanding issues,” Thunderbird’s blog post notes.

Launching Thunderbird for Android from K-9 Mail’s base makes a good deal of sense. Thunderbird’s desktop client has had a strange, disjointed life so far and is only just starting to regain a cohesive vision for what it wants to provide. For a long time now, K-9 Mail has been the Android email of choice for people who don’t want Gmail or Outlook, will not tolerate the default “Email” app on non-Google-blessed Android systems, and just want to see their messages.
