

Kids are short-circuiting their school-issued Chromebooks for TikTok clout

Schools across the US are warning parents about an Internet trend that has students purposefully trying to damage their school-issued Chromebooks so that they start smoking or catch fire.

Various school districts, including some in Colorado, New Jersey, North Carolina, and Washington, have sent letters to parents warning about the trend that’s largely taken off on TikTok.

Per reports from school districts and videos that Ars Technica has reviewed online, the so-called Chromebook Challenge includes students sticking things into Chromebook ports to short-circuit the system. Students are using various easily accessible items to do this, including writing utensils, paper clips, gum wrappers, and pushpins.

The Chromebook Challenge has caused chaos for US schools, leading to laptop fires that have forced school evacuations, early dismissals, and the summoning of first responders.

Schools are also warning that damage to school property can result in disciplinary action and, in some states, legal action.

In Plainville, Connecticut, a middle schooler allegedly “intentionally stuck scissors into a laptop, causing smoke to emit from it,” Superintendent Brian Reas told local news station WFSB. The incident reportedly led to one student going to the hospital due to smoke inhalation and is suspected to be connected to the viral trend.

“Although the investigation is ongoing, the student involved will be referred to juvenile court to face criminal charges,” Reas said.



Google hits back after Apple exec says AI is hurting search

The antitrust trial targeting Google’s search business is heading into the home stretch, and the outcome could forever alter Google—and the web itself. The company is scrambling to protect its search empire, but perhaps market forces could pull the rug out from under Google before the government can. Apple SVP of Services Eddy Cue suggested in his testimony on Wednesday that Google’s search traffic might be falling. Not so fast, says Google.

In an unusual move, Google issued a statement late in the day after Cue’s testimony to dispute the implication that it may already be losing its monopoly. During questioning by DOJ attorney Adam Severt, Cue expressed concern about losing the Google search deal, which is a major source of revenue for Apple. This contract, along with a similar one for Firefox, gives Google default search placement in exchange for a boatload of cash. The DOJ contends that is anticompetitive, and its proposed remedies call for banning Google from such deals.

Surprisingly, Cue noted in his testimony that search volume in Safari fell for the first time ever in April. Since Google is the default search provider, that implies fewer Google searches. Apple devices are popular, and a drop in Google searches there could be a bad sign for the company’s future competitiveness. Google’s statement on this comes off as a bit defensive.



DOJ confirms it wants to break up Google’s ad business

In the trial, Google will paint this demand as a severe overreach, claiming that few, if any, companies would have the resources to purchase and run the products. Last year, an ad consultant estimated Google’s ad empire could be worth up to $95 billion, quite possibly too big to sell. However, Google was similarly skeptical about Chrome, and representatives from other companies have said throughout the search remedy trial that they would love to buy Google’s browser.

An uphill battle

After losing three antitrust cases in just a couple of years, Google will have a hard time convincing the judge it is capable of turning over a new leaf with light remedies. A DOJ lawyer told the court Google is a “recidivist monopolist” that has a pattern of skirting its legal obligations. Still, Google is looking for mercy in the case. We expect to get more details on Google’s proposed remedies as the next trial nears, but it already offered a preview in today’s hearing.

Google suggests making a smaller subset of ad data available and ending the use of some pricing schemes, including unified pricing, that the court has found to be anticompetitive. Google also promised not to re-implement discontinued practices like “last look,” which gave the company a chance to outbid rivals at the last moment. This was featured prominently in the DOJ’s case, although Google ended the practice several years ago.
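The advantage a "last look" style mechanism confers can be illustrated with a toy auction model. This sketch is purely illustrative: the names, numbers, and bidding logic are invented for the example and do not describe Google's actual auction mechanics.

```python
# Toy model of a "last look" advantage in an ad auction.
# All names, numbers, and strategies here are invented for illustration;
# this does not describe Google's actual systems.

def blind_bid(true_value, shade=0.9):
    """Without seeing rivals' bids, a bidder shades below its true value."""
    return true_value * shade

def last_look_bid(true_value, best_rival, increment=0.01):
    """A bidder with 'last look' sees the price to beat and tops it by a
    minimal increment, but only when doing so is still profitable."""
    price = best_rival + increment
    return price if price <= true_value else None

# The best rival bid, and the privileged bidder's true valuation:
best_rival = 2.50
winning_response = last_look_bid(2.60, best_rival)  # tops 2.50 by a hair
```

The asymmetry is the point: the blind bidders commit first, while the privileged bidder responds to a known price and never overpays.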

To ensure it adheres to the remedies, Google suggested a court-appointed monitor would audit the process. However, Judge Leonie Brinkema seemed unimpressed with this proposal.

As in its other cases, Google says it plans to appeal the verdict, but before it can do that, the remedies phase has to be completed. Even if it can get the remedies paused for appeal, the decision could be a blow to investor confidence. So, Google will do whatever it can to avoid the worst-case scenario, leaning on the existence of competing advertisers like Meta and TikTok to show that the market is still competitive.

Like the search case, Google won’t be facing any big developments over the summer, but this fall could be rough. Judge Amit Mehta will most likely rule on the search remedies in August, and the ad tech remedies case will begin the following month. Google also has the Play Store case hanging over its head. It lost the first round, but the company hopes to prevail on appeal when the case gets underway again, probably in late 2025.



Claude’s AI research mode now runs for up to 45 minutes before delivering reports

Still, the report contained a direct quote attributed to William Higinbotham that appears to combine quotes from two sources not cited in the source list. (One must always be careful with confabulated quotes in AI because even outside of this Research mode, Claude 3.7 Sonnet tends to invent plausible ones to fit a narrative.) We recently covered a study that showed AI search services confabulate sources frequently, and in this case, it appears that the sources Claude Research surfaced, while real, did not always match what is stated in the report.

There’s always room for interpretation and variation in detail, of course, but overall, Claude Research did a relatively good job crafting a report on this particular topic. Still, you’d want to dig more deeply into each source and confirm everything if you used it as the basis for serious research. You can read the full Claude-generated result as this text file, saved in markdown format. Sadly, the markdown version does not include the source URLs found in the Claude web interface.

Integrations feature

Anthropic also announced Thursday that it has broadened Claude’s data access capabilities. In addition to web search and Google Workspace integration, Claude can now search any connected application through the company’s new “Integrations” feature. The feature reminds us somewhat of OpenAI’s ChatGPT Plugins feature from March 2023 that aimed for similar connections, although the two features work differently under the hood.

These Integrations allow Claude to work with remote Model Context Protocol (MCP) servers across web and desktop applications. The MCP standard, which Anthropic introduced last November and we covered in April, connects AI applications to external tools and data sources.
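MCP messages are framed as JSON-RPC 2.0. A request asking a server to invoke one of its tools looks roughly like the sketch below; the tool name and arguments are invented for illustration, not taken from any real integration.

```python
import json

# Rough shape of an MCP request asking a server to invoke a tool.
# MCP messages are JSON-RPC 2.0; the tool name and arguments below
# are hypothetical examples, not any real integration's API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_issue",  # hypothetical tool exposed by a server
        "arguments": {"title": "Fix login bug", "project": "WEB"},
    },
}
wire = json.dumps(request)  # what actually travels to the MCP server
```

The model never calls the external service directly; it emits a structured request like this, and the MCP server translates it into the service's own API.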

At launch, Claude supports Integrations with 10 services, including Atlassian’s Jira and Confluence, Zapier, Cloudflare, Intercom, Asana, Square, Sentry, PayPal, Linear, and Plaid. The company plans to add more partners like Stripe and GitLab in the future.

Each integration aims to expand Claude’s functionality in specific ways. The Zapier integration, for instance, reportedly connects thousands of apps through pre-built automation sequences, allowing Claude to automatically pull sales data from HubSpot or prepare meeting briefs based on calendar entries. With Atlassian’s tools, Anthropic says that Claude can collaborate on product development, manage tasks, and create multiple Confluence pages and Jira work items simultaneously.

Anthropic has made its advanced Research and Integrations features available in beta for users on Max, Team, and Enterprise plans, with Pro plan access coming soon. The company has also expanded its web search feature (introduced in March) to all Claude users on paid plans globally.



Google teases NotebookLM app in the Play Store ahead of I/O release

After several years of escalating AI hysteria, we are all familiar with Google’s desire to put Gemini in every one of its products. That can be annoying, but NotebookLM is not—this one actually works. NotebookLM, which helps you parse documents, videos, and more using Google’s advanced AI models, has been available on the web since 2023, but Google recently confirmed it would finally get an Android app. You can get a look at the app now, but it’s not yet available to install.

Until now, NotebookLM was only a website. You can visit it on your phone, but the interface is clunky compared to the desktop version. The arrival of the mobile app will change that. Google said it plans to release the app at Google I/O in late May, but the listing is live in the Play Store early. You can pre-register to be notified when the download is live, but you’ll have to tide yourself over with the screenshots for the time being.

NotebookLM relies on the same underlying technology as Google’s other chatbots and AI projects, but instead of a general-purpose robot, NotebookLM is only concerned with the documents you upload. It can assimilate text files, websites, and videos, including multiple files and source types for a single agent. It has a hefty context window of 500,000 tokens and supports document uploads as large as 200MB. Google says this creates a queryable “AI expert” that can answer detailed questions and brainstorm ideas based on the source data.



Eric Schmidt apparently bought Relativity Space to put data centers in orbit

“This probably helps explain why Schmidt bought Relativity Space,” I commented on the social media site X after Schmidt’s remarks. A day later, Schmidt replied with a single word, “Yes.”

There are relatively few US launch companies that either have large rockets or are developing them. The options for a would-be space entrepreneur who wants to control their own access to space are limited. SpaceX and Blue Origin are already owned by billionaires who have total decision-making authority. United Launch Alliance’s Vulcan rocket is expensive, and its existing manifest is long already. Rocket Lab’s Neutron vehicle is coming soon, but it may not be large enough for Schmidt’s ambitions.

That leaves Relativity Space, which may be within a couple of years of flying the partially reusable Terran R rocket. If fully realized, Terran R would be a beastly launch vehicle capable of launching 33.5 metric tons to low-Earth orbit in expendable mode—more than a fully upgraded Vulcan Centaur—and 23.5 tons with a reusable first stage. If you were a billionaire seeking to put large data centers into space and wanted control of launch, Relativity is probably the only game in town.

As Ars has previously reported, there are some considerable flaws with Relativity’s approach to developing Terran R. However, these problems can be fixed with additional money, and Schmidt has brought that to the company over the last half-year.

Big problems, big ideas

Schmidt does not possess the wealth of an Elon Musk or Jeff Bezos. His personal fortune is roughly $20 billion, so approximately an order of magnitude less. This explains why, according to financial industry sources, Schmidt is presently seeking additional partners to bankroll a revitalized Relativity.

Solving launch is just one of the challenges this idea faces, of course. How big would these data centers be? Where would they go within an increasingly cluttered low-Earth orbit? Could space-based solar power meet their energy needs? Can all of this heat be radiated away efficiently in space? Economically, would any of this make sense?

These are not simple questions. But Schmidt is correct that the current trajectory of power and environmental demands created by AI data centers is unsustainable. It is good that someone is thinking big about solving big problems.



Google is quietly testing ads in AI chatbots

Google has built an enormously successful business around the idea of putting ads in search results. Its most recent quarterly results showed the company made more than $50 billion from search ads, but what happens if AI becomes the dominant form of finding information? Google is preparing for that possibility by testing chatbot ads, but you won’t see them in Google’s Gemini AI—at least not yet.

A report from Bloomberg describes how Google began working on a plan in 2024 to adapt AdSense ads to a chatbot experience. Usually, AdSense ads appear in search results and are scattered around websites. Google ran a small test of chatbot ads late last year, partnering with select AI startups, including AI search apps iAsk and Liner.

The testing must have gone well because Google is now allowing more chatbot makers to sign up for AdSense. “AdSense for Search is available for websites that want to show relevant ads in their conversational AI experiences,” said a Google spokesperson.

If people continue shifting to using AI chatbots to find information, this expansion of AdSense could help prop up profits. There’s no hint of advertising in Google’s own Gemini chatbot or AI Mode search, but the day may be coming when you won’t get the clean, ad-free experience at no cost.

A path to profit

Google is racing to catch up to OpenAI, which has a substantial lead in chatbot market share despite Gemini’s recent growth. This has led Google to freely provide some of its most capable AI tools, including Deep Research, Gemini Pro, and Veo 2 video generation. There are limits to how much you can use most of these features with a free account, but it must be costing Google a boatload of cash.



Google search’s made-up AI explanations for sayings no one ever said, explained


But what does “meaning” mean?

A partial defense of (some of) AI Overview’s fanciful idiomatic explanations.

Mind…. blown Credit: Getty Images

Last week, the phrase “You can’t lick a badger twice” unexpectedly went viral on social media. The nonsense sentence—which was likely never uttered by a human before last week—had become the poster child for the newly discovered way Google search’s AI Overviews makes up plausible-sounding explanations for made-up idioms (though the concept seems to predate that specific viral post by at least a few days).

Google users quickly discovered that typing any concocted phrase into the search bar with the word “meaning” attached at the end would generate an AI Overview with a purported explanation of its idiomatic meaning. Even the most nonsensical attempts at new proverbs resulted in a confident explanation from Google’s AI Overview, created right there on the spot.

In the wake of the “lick a badger” post, countless users flocked to social media to share Google’s AI interpretations of their own made-up idioms, often expressing horror or disbelief at Google’s take on their nonsense. Those posts often highlight the overconfident way the AI Overview frames its idiomatic explanations and occasional problems with the model confabulating sources that don’t exist.

But after reading through dozens of publicly shared examples of Google’s explanations for fake idioms—and generating a few of my own—I’ve come away somewhat impressed with the model’s almost poetic attempts to glean meaning from gibberish and make sense out of the senseless.

Talk to me like a child

Let’s try a thought experiment: Say a child asked you what the phrase “you can’t lick a badger twice” means. You’d probably say you’ve never heard that particular phrase or ask the child where they heard it. You might say that you’re not familiar with that phrase or that it doesn’t really make sense without more context.

Someone on Threads noticed you can type any random sentence into Google, then add “meaning” afterwards, and you’ll get an AI explanation of a famous idiom or phrase you just made up. Here is mine


— Greg Jenner (@gregjenner.bsky.social) April 23, 2025 at 6:15 AM

But let’s say the child persisted and really wanted an explanation for what the phrase means. So you’d do your best to generate a plausible-sounding answer. You’d search your memory for possible connotations for the word “lick” and/or symbolic meaning for the noble badger to force the idiom into some semblance of sense. You’d reach back to other similar idioms you know to try to fit this new, unfamiliar phrase into a wider pattern (anyone who has played the excellent board game Wise and Otherwise might be familiar with the process).

Google’s AI Overview doesn’t go through exactly that kind of human thought process when faced with a similar question about the same saying. But in its own way, the large language model also does its best to generate a plausible-sounding response to an unreasonable request.

As seen in Greg Jenner’s viral Bluesky post, Google’s AI Overview suggests that “you can’t lick a badger twice” means that “you can’t trick or deceive someone a second time after they’ve been tricked once. It’s a warning that if someone has already been deceived, they are unlikely to fall for the same trick again.” As an attempt to derive meaning from a meaningless phrase—which was, after all, the user’s request—that’s not half bad. Faced with a phrase that has no inherent meaning, the AI Overview still makes a good-faith effort to answer the user’s request and draw some plausible explanation out of troll-worthy nonsense.

Contrary to the computer science truism of “garbage in, garbage out,” Google here is taking in some garbage and spitting out… well, a workable interpretation of garbage, at the very least.

Google’s AI Overview even goes into more detail explaining its thought process. “Lick” here means to “trick or deceive” someone, it says, a bit of a stretch from the dictionary definition of lick as “comprehensively defeat,” but probably close enough for an idiom (and a plausible iteration of the idiom, “Fool me once shame on you, fool me twice, shame on me…”). Google also explains that the badger part of the phrase “likely originates from the historical sport of badger baiting,” a practice I was sure Google was hallucinating until I looked it up and found it was real.

It took me 15 seconds to make up this saying but now I think it kind of works!

Credit: Kyle Orland / Google


I found plenty of other examples where Google’s AI derived more meaning than the original requester’s gibberish probably deserved. Google interprets the phrase “dream makes the steam” as an almost poetic statement about imagination powering innovation. The line “you can’t humble a tortoise” similarly gets interpreted as a statement about the difficulty of intimidating “someone with a strong, steady, unwavering character (like a tortoise).”

Google also often finds connections that the original nonsense idiom creators likely didn’t intend. For instance, Google could link the made-up idiom “A deft cat always rings the bell” to the real concept of belling the cat. And in attempting to interpret the nonsense phrase “two cats are better than grapes,” the AI Overview correctly notes that grapes can be potentially toxic to cats.

Brimming with confidence

Even when Google’s AI Overview works hard to make the best of a bad prompt, I can still understand why the responses rub a lot of users the wrong way. A lot of the problem, I think, has to do with the LLM’s unearned confident tone, which pretends that any made-up idiom is a common saying with a well-established and authoritative meaning.

Rather than framing its responses as a “best guess” at an unknown phrase (as a human might when responding to a child in the example above), Google generally provides the user with a single, authoritative explanation for what an idiom means, full stop. Even with the occasional use of couching words such as “likely,” “probably,” or “suggests,” the AI Overview comes off as unnervingly sure of the accepted meaning for some nonsense the user made up five seconds ago.

If Google’s AI Overviews always showed this much self-doubt, we’d be getting somewhere.

Credit: Google / Kyle Orland


I was able to find one exception to this in my testing. When I asked Google the meaning of “when you see a tortoise, spin in a circle,” Google reasonably told me that the phrase “doesn’t have a widely recognized, specific meaning” and that it’s “not a standard expression with a clear, universal meaning.” With that context, Google then offered suggestions for what the phrase “seems to” mean and mentioned Japanese nursery rhymes that it “may be connected” to, before concluding that it is “open to interpretation.”

Those qualifiers go a long way toward properly contextualizing the guesswork Google’s AI Overview is actually conducting here. And if Google provided that kind of context in every AI summary explanation of a made-up phrase, I don’t think users would be quite as upset.

Unfortunately, LLMs like this have trouble knowing what they don’t know, meaning moments of self-doubt like the tortoise interpretation here tend to be few and far between. It’s not like Google’s language model has some master list of idioms in its neural network that it can consult to determine what is and isn’t a “standard expression” that it can be confident about. Usually, it’s just projecting a self-assured tone while struggling to force the user’s gibberish into meaning.

Zeus disguised himself as what?

The worst examples of Google’s idiomatic AI guesswork are ones where the LLM slips past plausible interpretations and into sheer hallucination of completely fictional sources. The phrase “a dog never dances before sunset,” for instance, did not appear in the film Before Sunrise, no matter what Google says. Similarly, “There are always two suns on Tuesday” does not appear in The Hitchhiker’s Guide to the Galaxy film despite Google’s insistence.

Literally in the one I tried.


— Sarah Vaughan (@madamefelicie.bsky.social) April 23, 2025 at 7:52 AM

There’s also no indication that the made-up phrase “Welsh men jump the rabbit” originated on the Welsh island of Portland, or that “peanut butter platform heels” refers to a scientific experiment creating diamonds from the sticky snack. We’re also unaware of any Greek myth where Zeus disguises himself as a golden shower to explain the phrase “beware what glitters in a golden shower.” (Update: As many commenters have pointed out, this last one is actually a reference to the Greek myth of Danaë and the shower of gold, showing Google’s AI knows more about this potential symbolism than I do.)

The fact that Google’s AI Overview presents these completely made-up sources with the same self-assurance as its abstract interpretations is a big part of the problem here. It’s also a persistent problem for LLMs that tend to make up news sources and cite fake legal cases regularly. As usual, one should be very wary when trusting anything an LLM presents as an objective fact.

When it comes to the more artistic and symbolic interpretation of nonsense phrases, though, I think Google’s AI Overviews have gotten something of a bad rap recently. Presented with the difficult task of explaining nigh-unexplainable phrases, the model does its best, generating interpretations that can border on the profound at times. While the authoritative tone of those responses can sometimes be annoying or actively misleading, it’s at least amusing to see the model’s best attempts to deal with our meaningless phrases.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.



Google: Governments are using zero-day hacks more than ever

Governments hacking enterprise

A few years ago, zero-day attacks almost exclusively targeted end users. In 2021, GTIG spotted 95 zero-days, and 71 of them were deployed against user systems like browsers and smartphones. In 2024, 33 of the 75 total vulnerabilities were aimed at enterprise technologies and security systems. At 44 percent of the total, this is the highest share of enterprise focus for zero-days yet.
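The percentages GTIG cites follow directly from the counts above; a quick sanity check:

```python
# Enterprise share of 2024 zero-days, from the GTIG counts cited above.
enterprise_2024, total_2024 = 33, 75
share = enterprise_2024 / total_2024 * 100  # ~44 percent

# For comparison, 2021 skewed heavily toward end-user systems:
user_2021, total_2021 = 71, 95
user_share = user_2021 / total_2021 * 100   # ~74.7 percent
```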

GTIG says that it detected zero-day attacks targeting 18 different enterprise entities, including Microsoft, Google, and Ivanti. This is slightly lower than the 22 firms targeted by zero-days in 2023, but it’s a big increase compared to just a few years ago, when seven firms were hit with zero-days in 2020.

The nature of these attacks often makes it hard to trace them to the source, but Google says it managed to attribute 34 of the 75 zero-day attacks. The largest single category with 10 detections was traditional state-sponsored espionage, which aims to gather intelligence without a financial motivation. China was the largest single contributor here. GTIG also identified North Korea as the perpetrator in five zero-day attacks, but these campaigns also had a financial motivation (usually stealing crypto).

Credit: Google

That’s already a lot of government-organized hacking, but GTIG also notes that eight of the serious hacks it detected came from commercial surveillance vendors (CSVs), firms that create hacking tools and claim to only do business with governments. So it’s fair to include these with other government hacks. This includes companies like NSO Group and Cellebrite, with the former already subject to US sanctions over its work with adversarial nations.

In all, this adds up to 23 of the 34 attributed attacks coming from governments. There were also a few attacks that didn’t technically originate from governments but still involved espionage activities, suggesting a connection to state actors. Beyond that, Google spotted five non-government financially motivated zero-day campaigns that did not appear to engage in spying.

Google’s security researchers say they expect zero-day attacks to continue increasing over time. These stealthy vulnerabilities can be expensive to obtain or discover, but the lag time before anyone notices the threat can reward hackers with a wealth of information (or money). Google recommends enterprises continue scaling up efforts to detect and block malicious activities, while also designing systems with redundancy and stricter limits on access. As for the average user, well, cross your fingers.



iOS and Android juice jacking defenses have been trivial to bypass for years


SON OF JUICE JACKING ARISES

New ChoiceJacking attack allows malicious chargers to steal data from phones.

Credit: Aurich Lawson | Getty Images


About a decade ago, Apple and Google started updating iOS and Android, respectively, to make them less susceptible to “juice jacking,” a form of attack that could surreptitiously steal data or execute malicious code when users plug their phones into special-purpose charging hardware. Now, researchers are revealing that, for years, the mitigations have suffered from a fundamental defect that has made them trivial to bypass.

“Juice jacking” was coined in a 2011 article on KrebsOnSecurity detailing an attack demonstrated at a Defcon security conference at the time. Juice jacking works by equipping a charger with hidden hardware that can access files and other internal resources of phones, in much the same way that a computer can when a user connects it to the phone.

An attacker would then make the chargers available in airports, shopping malls, or other public venues for use by people looking to recharge depleted batteries. While the charger was ostensibly only providing electricity to the phone, it was also secretly downloading files or running malicious code on the device behind the scenes. Starting in 2012, both Apple and Google tried to mitigate the threat by requiring users to click a confirmation button on their phones before a computer—or a computer masquerading as a charger—could access files or execute code on the phone.

The logic behind the mitigation was rooted in a key portion of the USB protocol that, in the parlance of the specification, dictates that a USB port can facilitate a “host” device or a “peripheral” device at any given time, but not both. In the context of phones, this meant they could either:

  • Host the device on the other end of the USB cord—for instance, if a user connects a thumb drive or keyboard. In this scenario, the phone is the host that has access to the internals of the drive, keyboard, or other peripheral device.
  • Act as a peripheral device that’s hosted by a computer or malicious charger, which under the USB paradigm is a host that has system access to the phone.
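The mutual exclusivity the mitigation leaned on can be captured in a toy model. This sketch is purely illustrative; real USB role management happens in the operating system's USB stack, not in application code like this.

```python
from enum import Enum

class Role(Enum):
    HOST = "host"              # e.g., a computer or malicious charger
    PERIPHERAL = "peripheral"  # e.g., a keyboard or thumb drive

class UsbEndpoint:
    """Toy model of the mitigation's assumption: over one USB
    connection, a device holds exactly one role at a time, never both."""
    def __init__(self, role: Role):
        self.role = role

    def can_request_data_access(self) -> bool:
        # Only a host can ask the phone for file access, which is
        # what triggers the confirmation prompt on the phone.
        return self.role is Role.HOST

    def can_inject_input(self) -> bool:
        # Only an input peripheral (like a keyboard) can send key
        # presses or taps to the phone.
        return self.role is Role.PERIPHERAL
```

Under this model, a charger can never both trigger the consent prompt and answer it over the same connection; as described below, ChoiceJacking works by escaping this single-connection model entirely.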

An alarming state of USB security

Researchers at the Graz University of Technology in Austria recently made a discovery that completely undermines the premise behind the countermeasure: It rests on the assumption that USB hosts can’t inject input that autonomously approves the confirmation prompt. Given the restriction against a USB device simultaneously acting as a host and peripheral, the premise seemed sound. The trust models built into both iOS and Android, however, present loopholes that can be exploited to defeat the protections. The researchers went on to devise ChoiceJacking, the first known attack to defeat juice-jacking mitigations.

“We observe that these mitigations assume that an attacker cannot inject input events while establishing a data connection,” the researchers wrote in a paper scheduled to be presented in August at the Usenix Security Symposium in Seattle. “However, we show that this assumption does not hold in practice.”

The researchers continued:

We present a platform-agnostic attack principle and three concrete attack techniques for Android and iOS that allow a malicious charger to autonomously spoof user input to enable its own data connection. Our evaluation using a custom cheap malicious charger design reveals an alarming state of USB security on mobile platforms. Despite vendor customizations in USB stacks, ChoiceJacking attacks gain access to sensitive user files (pictures, documents, app data) on all tested devices from 8 vendors including the top 6 by market share.

In response to the findings, Apple updated the confirmation dialogs in last month’s release of iOS/iPadOS 18.4 to require user authentication in the form of a PIN or password. While the researchers were investigating their ChoiceJacking attacks last year, Google independently updated its confirmation dialog with the release of Android 15 in November. The researchers say the new mitigations work as expected on fully updated Apple and Android devices. Given the fragmentation of the Android ecosystem, however, many Android devices remain vulnerable.

All three of the ChoiceJacking techniques defeat the original Android juice-jacking mitigations. One of them also works against those defenses in Apple devices. In all three, the charger acts as a USB host to trigger the confirmation prompt on the targeted phone.

The attacks then exploit various weaknesses in the OS that allow the charger to autonomously inject “input events” that can enter text or click buttons presented in screen prompts as if the user had done so directly into the phone. In all three, the charger eventually gains two conceptual channels to the phone: (1) an input one allowing it to spoof user consent and (2) a file access connection that can steal files.

An illustration of ChoiceJacking attacks. (1) The victim device is attached to the malicious charger. (2) The charger establishes an extra input channel. (3) The charger initiates a data connection. User consent is needed to confirm it. (4) The charger uses the input channel to spoof user consent. Credit: Draschbacher et al.

It’s a keyboard, it’s a host, it’s both

In the ChoiceJacking variant that defeats both Apple- and Google-devised juice-jacking mitigations, the charger starts as a USB keyboard or a similar peripheral device. It sends keyboard input over USB that invokes simple key presses, such as arrow up or down, but also more complex key combinations that trigger settings or open a status bar.

The input establishes a Bluetooth connection to a second miniaturized keyboard hidden inside the malicious charger. The charger then performs what’s known as a USB PD Data Role Swap, a mechanism of USB Power Delivery—a standard available in USB-C connectors—that lets the two connected devices exchange messages to negotiate which of them acts as the USB host and which acts as the peripheral.

A simulated ChoiceJacking charger. Bidirectional USB lines allow for data role swaps. Credit: Draschbacher et al.

With the charger now acting as the USB host, it triggers the file access consent dialog. At the same time, the charger maintains its separate role as a Bluetooth keyboard, which it uses to approve that dialog.

The full steps for the attack, provided in the Usenix paper, are:

1. The victim device is connected to the malicious charger. The device has its screen unlocked.

2. At a suitable moment, the charger performs a USB PD Data Role (DR) Swap. The mobile device now acts as a USB host, the charger acts as a USB input device.

3. The charger generates input to ensure that BT is enabled.

4. The charger navigates to the BT pairing screen in the system settings to make the mobile device discoverable.

5. The charger starts advertising as a BT input device.

6. By constantly scanning for newly discoverable Bluetooth devices, the charger identifies the BT device address of the mobile device and initiates pairing.

7. Through the USB input device, the charger accepts the Yes/No pairing dialog appearing on the mobile device. The Bluetooth input device is now connected.

8. The charger sends another USB PD DR Swap. It is now the USB host, and the mobile device is the USB device.

9. As the USB host, the charger initiates a data connection.

10. Through the Bluetooth input device, the charger confirms its own data connection on the mobile device.
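The ten-step sequence can be reenacted as a toy state machine (all class and method names below are hypothetical; no real USB or Bluetooth traffic is involved):

```python
class Phone:
    """Minimal model of the victim device's state during the attack."""
    def __init__(self):
        self.usb_role = "host"          # after the first PD Data Role Swap (step 2)
        self.bt_keyboard_paired = False
        self.pending_prompt = None
        self.data_connection = False

    def show_prompt(self, prompt):
        self.pending_prompt = prompt

    def inject_confirm(self, via):
        # Input arrives from the charger (as a USB keyboard or the paired
        # Bluetooth keyboard) and accepts whatever dialog is on screen,
        # exactly as a user tap would.
        if self.pending_prompt == "bt_pairing" and via == "usb_keyboard":
            self.bt_keyboard_paired = True
        elif self.pending_prompt == "data_access" and via == "bt_keyboard":
            self.data_connection = True
        self.pending_prompt = None

def run_attack(phone):
    phone.show_prompt("bt_pairing")           # steps 3-6: BT made discoverable, pairing starts
    phone.inject_confirm(via="usb_keyboard")  # step 7: USB input accepts pairing dialog
    phone.usb_role = "peripheral"             # step 8: second DR swap, charger is now host
    phone.show_prompt("data_access")          # step 9: charger requests data connection
    phone.inject_confirm(via="bt_keyboard")   # step 10: BT keyboard accepts it
    return phone.data_connection
```

The key structural point the sketch captures is that the two input channels are active in different roles: USB input approves the Bluetooth pairing, and Bluetooth input approves the USB data connection.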

This technique works against all but one of the 11 phone models tested, with the holdout being an Android device running the Vivo Funtouch OS, which doesn’t fully support the USB PD protocol. The attacks against the 10 remaining models take about 25 to 30 seconds to establish the Bluetooth pairing, depending on the phone model being hacked. The attacker then has read and write access to files stored on the device for as long as it remains connected to the charger.

Two more ways to hack Android

The two other members of the ChoiceJacking family work only against the juice-jacking mitigations that Google put into Android. In the first, the malicious charger invokes the Android Open Accessory Protocol (AOAP), which allows a USB host to act as an input device once it sends a special message that puts the connection into accessory mode.

The protocol specifically dictates that while in accessory mode, a USB host can no longer respond to other USB interfaces, such as the Picture Transfer Protocol for transferring photos and videos and the Media Transfer Protocol for transferring files in other formats. Despite the restriction, all of the Android devices tested violated the specification by accepting AOAP messages even when the connection hadn’t been put into accessory mode. The charger can exploit this implementation flaw to autonomously complete the required user confirmations.
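The implementation flaw can be illustrated with a minimal sketch, assuming a simplified device-side handler (the class and method names are invented for illustration; they are not real Android internals):

```python
class AndroidUsbStack:
    """Toy model of how tested devices mishandle AOAP traffic."""
    def __init__(self, spec_conforming: bool):
        self.spec_conforming = spec_conforming
        self.accessory_mode = False  # normal file-transfer interfaces active

    def handle_aoap_input_event(self, event) -> bool:
        # Per the spec, AOAP input should only be honored in accessory
        # mode, where MTP/PTP are unavailable. The tested devices
        # processed it regardless, letting a charger inject input while
        # a file-transfer interface stays reachable.
        if self.spec_conforming and not self.accessory_mode:
            return False  # reject: accessory mode was never entered
        return True       # event delivered to the input subsystem
```

A conforming stack would force the attacker to choose between injecting input and transferring files; the flaw lets it do both at once.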

The remaining ChoiceJacking technique exploits a race condition in the Android input dispatcher by flooding it with a specially crafted sequence of input events. The dispatcher puts each event into a queue and processes them one by one, waiting for all previous input events to be fully processed before acting on a new one.

“This means that a single process that performs overly complex logic in its key event handler will delay event dispatching for all other processes or global event handlers,” the researchers explained.

They went on to note, “A malicious charger can exploit this by starting as a USB peripheral and flooding the event queue with a specially crafted sequence of key events. It then switches its USB interface to act as a USB host while the victim device is still busy dispatching the attacker’s events. These events therefore accept user prompts for confirming the data connection to the malicious charger.”
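The dynamic the researchers describe can be sketched as a simplified simulation (the queue mechanics here are a stand-in for the real Android input dispatcher, not its actual behavior):

```python
from collections import deque

def simulate_race(flood_size: int) -> bool:
    """Toy reenactment of the input-dispatcher race: the charger enqueues
    a burst of key events as a USB peripheral, then switches to USB host;
    the backlog is still draining when the data connection prompt appears,
    so one of the queued 'confirm' presses lands on the dialog."""
    queue = deque("KEY_ENTER" for _ in range(flood_size))
    prompt_on_screen = False
    confirmed = False

    # The charger role-swaps and requests a data connection partway
    # through the drain; the dialog pops up mid-queue.
    events_before_prompt = flood_size // 2
    for i in range(flood_size):
        event = queue.popleft()
        if i == events_before_prompt:
            prompt_on_screen = True      # dialog appears while events drain
        if prompt_on_screen and event == "KEY_ENTER":
            confirmed = True             # leftover attacker event accepts it
            break
    return confirmed
```

With no flood, the dialog appears to an empty queue and nothing confirms it; with a flood, a stale keypress does the confirming.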

The Usenix paper provides the following matrix showing which devices tested in the research are vulnerable to which attacks.

The susceptibility of tested devices to all three ChoiceJacking attack techniques. Credit: Draschbacher et al.

User convenience over security

In an email, the researchers said that the fixes provided by Apple and Google successfully blunt ChoiceJacking attacks in iPhones, iPads, and Pixel devices. Many Android devices made by other manufacturers, however, remain vulnerable because they have yet to update their devices to Android 15. Other Android devices—most notably those from Samsung running the One UI 7 software interface—don’t implement the new authentication requirement, even when running on Android 15. The omission leaves these models vulnerable to ChoiceJacking. In an email, principal paper author Florian Draschbacher wrote:

The attack can therefore still be exploited on many devices, even though we informed the manufacturers about a year ago and they acknowledged the problem. The reason for this slow reaction is probably that ChoiceJacking does not simply exploit a programming error. Rather, the problem is more deeply rooted in the USB trust model of mobile operating systems. Changes here have a negative impact on the user experience, which is why manufacturers are hesitant. [It] means for enabling USB-based file access, the user doesn’t need to simply tap YES on a dialog but additionally needs to present their unlock PIN/fingerprint/face. This inevitably slows down the process.

The biggest threat posed by ChoiceJacking is to Android devices that have been configured to enable USB debugging. Developers often turn on this option so they can troubleshoot problems with their apps, but many non-developers enable it so they can install apps from their computer, root their devices so they can install a different OS, transfer data between devices, and recover bricked phones. Turning it on requires a user to flip a switch in Settings > System > Developer options.

If a phone has USB debugging turned on, ChoiceJacking can gain shell access through the Android Debug Bridge. From there, an attacker can install apps, access the file system, and execute malicious binary files. The level of access through the Android Debug Bridge is much higher than that through the Picture Transfer Protocol and Media Transfer Protocol, which only allow read and write access to files in user storage.
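As a rough illustration of that capability gap, the following sketch only assembles standard adb command lines without running them (the serial number, APK name, and paths are placeholders, not from the research):

```python
def adb_attack_plan(serial: str) -> list[list[str]]:
    """Dry-run sketch: build the adb invocations an attacker with
    ADB-level access could issue. Nothing is executed here."""
    adb = ["adb", "-s", serial]
    return [
        adb + ["shell", "id"],                        # run arbitrary shell commands
        adb + ["install", "payload.apk"],             # install an app
        adb + ["pull", "/data/local/tmp", "./loot"],  # read files off the device
    ]
```

By contrast, an MTP session can only move files in and out of user storage; it offers no command execution or app installation at all.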

The vulnerabilities are tracked as:

    • CVE-2025-24193 (Apple)
    • CVE-2024-43085 (Google)
    • CVE-2024-20900 (Samsung)
    • CVE-2024-54096 (Huawei)

A Google spokesperson confirmed that the weaknesses were patched in Android 15 but didn’t speak to the installed base of Android devices from other manufacturers, which either don’t support the new OS or don’t implement the new authentication requirement it makes possible. Apple declined to comment for this post.

Word that juice-jacking-style attacks are once again possible on some Android devices and out-of-date iPhones is likely to breathe new life into the constant warnings from federal authorities, tech pundits, news outlets, and local and state government agencies that phone users should steer clear of public charging stations. Special-purpose cords that disconnect data access remain a viable mitigation, but the researchers noted that “data blockers also interfere with modern power negotiation schemes, thereby degrading charge speed.”

As I reported in 2023, these warnings are mostly scaremongering, and the advent of ChoiceJacking does little to change that, given that there are no documented cases of such attacks in the wild. That said, people using Android devices that don’t support Google’s new authentication requirement may want to refrain from public charging.

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Contact him on Signal at DanArs.82.

iOS and Android juice jacking defenses have been trivial to bypass for years Read More »

perplexity-will-come-to-moto-phones-after-exec-testified-google-limited-access

Perplexity will come to Moto phones after exec testified Google limited access

Shevelenko was also asked about Chrome, which the DOJ would like to force Google to sell. Like an OpenAI executive said on Monday, Shevelenko confirmed Perplexity would be interested in buying the browser from Google.

Motorola has all the AI

There were some vague allusions during the trial that Perplexity would come to Motorola phones this year, but we didn’t know just how soon that was. With the announcement of its 2025 Razr devices, Moto has confirmed a much more expansive set of AI features. Parts of the Motorola AI experience are powered by Gemini, Copilot, Meta, and yes, Perplexity.

While Gemini gets top billing as the default assistant app, other firms have wormed their way into different parts of the software. Perplexity’s app will be preloaded for anyone who buys the new Razrs, and owners will also get three free months of Perplexity Pro. This is the first time Perplexity has had a smartphone distribution deal, but it won’t be shown prominently on the phone. When you start a Motorola device, it will still look like a Google playground.

While it’s not the default assistant, Perplexity is integrated into the Moto AI platform. The new Razrs will proactively suggest you perform an AI search when accessing certain features like the calendar or browsing the web under the banner “Explore with Perplexity.” The Perplexity app has also been optimized to work with the external screen on Motorola’s foldables.

Moto AI also has elements powered by other AI systems. For example, Microsoft Copilot will appear in Moto AI with an “Ask Copilot” option. And Meta’s Llama model powers a Moto AI feature called Catch Me Up, which summarizes notifications from select apps.

It’s unclear why Motorola leaned on four different AI providers for a single phone. It probably helps that all these companies are desperate to entice users to bulk up their market share. Perplexity confirmed that no money changed hands in this deal—it’s on Moto phones to acquire more users. That might be tough with Gemini getting priority placement, though.

Perplexity will come to Moto phones after exec testified Google limited access Read More »

openai-wants-to-buy-chrome-and-make-it-an-“ai-first”-experience

OpenAI wants to buy Chrome and make it an “AI-first” experience

According to Turley, OpenAI would throw its proverbial hat in the ring if Google had to sell. When asked if OpenAI would want Chrome, he was unequivocal. “Yes, we would, as would many other parties,” Turley said.

OpenAI has reportedly considered building its own Chromium-based browser to compete with Chrome. Several months ago, the company hired former Google developers Ben Goodger and Darin Fisher, both of whom worked to bring Chrome to market.

Close-up of the Google Chrome browser. Chrome is a widely used web browser developed by Google.

Credit: Getty Images

It’s not hard to see why OpenAI might want a browser, particularly Chrome with its 4 billion users and 67 percent market share. Chrome would instantly give OpenAI a massive install base of users who have been incentivized to use Google services. If OpenAI were running the show, you can bet ChatGPT would be integrated throughout the experience—Turley said as much, predicting an “AI-first” experience. The user data flowing to the owner of Chrome could also be invaluable in training agentic AI models that can operate browsers on the user’s behalf.

Interestingly, there’s so much discussion about who should buy Chrome, but relatively little about spinning off Chrome into an independent company. Google has contended that Chrome can’t survive on its own. However, the existence of Google’s multibillion-dollar search placement deals, which the DOJ wants to end, suggests otherwise. Regardless, if Google has to sell, and OpenAI has the cash, we might get the proposed “AI-first” browsing experience.

OpenAI wants to buy Chrome and make it an “AI-first” experience Read More »