Security

Spies hack high-value mail servers using an exploit from yesteryear

Threat actors, likely supported by the Russian government, hacked multiple high-value mail servers around the world by exploiting XSS vulnerabilities, a class of bug that was among the most commonly exploited in decades past.

XSS is short for cross-site scripting. The vulnerabilities result from programming errors in web server software that, when exploited, allow attackers to execute malicious code in the browsers of people visiting an affected website. XSS first got attention in 2005 with the creation of the Samy Worm, which knocked MySpace out of commission when it added more than one million MySpace friends to a user named Samy. XSS exploits abounded for the next decade and have gradually tapered off since, although the class of attack remains in active use.
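The mechanics are simple enough to show in a few lines. The sketch below uses a hypothetical email subject as the injection point and contrasts a vulnerable render path with one that neutralizes the payload through output escaping:

```python
import html

# Hypothetical attacker-controlled input, e.g. an email subject or sender
# display name rendered by vulnerable webmail software.
payload = '<img src=x onerror="alert(document.cookie)">'

def render_unsafe(user_input):
    # Vulnerable: untrusted input is interpolated directly into the page,
    # so the browser parses the <img> tag and runs its onerror handler.
    return f"<div class='subject'>{user_input}</div>"

def render_safe(user_input):
    # Escaping turns <, >, quotes, and & into HTML entities, so the
    # browser displays the payload as inert text instead of executing it.
    return f"<div class='subject'>{html.escape(user_input)}</div>"
```

Consistent output encoding at every interpolation point, usually paired with a Content Security Policy, is the standard defense.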

Just add JavaScript

On Thursday, security firm ESET reported that Sednit—a Kremlin-backed hacking group also tracked as APT28, Fancy Bear, Forest Blizzard, and Sofacy—gained access to high-value email accounts by exploiting XSS vulnerabilities in mail server software from four different makers: Roundcube, MDaemon, Horde, and Zimbra.

The hacks most recently targeted mail servers used by defense contractors in Bulgaria and Romania, some of which are producing Soviet-era weapons for use in Ukraine as it fends off an invasion from Russia. Governmental organizations in those countries were also targeted. Other targets have included governments in Africa, the European Union, and South America.

RoundPress, as ESET has named the operation, delivered XSS exploits through spearphishing emails. Hidden inside some of the HTML in the emails was an XSS exploit. In 2023, ESET observed Sednit exploiting CVE-2020-43770, a vulnerability that has since been patched in Roundcube. A year later, ESET watched Sednit exploit different XSS vulnerabilities in Horde, MDaemon, and Zimbra. One of the now-patched vulnerabilities, from MDaemon, was a zero-day at the time Sednit exploited it.

Incorporated in US: $8.4B money launderer for Chinese-speaking crypto scammers


Before the crackdown, this was one of the ’Net’s biggest markets for Chinese-speaking scammers.

As the underground industry of crypto investment scams has grown into one of the world’s most lucrative forms of cybercrime, the secondary market of money launderers for those scammers has grown to match it. Amid that black market, one such Chinese-language service on the messaging platform Telegram blossomed into an all-purpose underground bazaar: It has offered not only cash-out services to scammers but also money laundering for North Korean hackers, stolen data, targeted harassment-for-hire, and even what appears to be sex trafficking. And somehow, it’s all overseen by a company legally registered in the United States.

According to new research released today by crypto-tracing firm Elliptic, a company called Xinbi Guarantee has since 2022 facilitated no less than $8.4 billion in transactions via its Telegram-based marketplace prior to Telegram’s actions in recent days to remove its accounts from the platform. Money stolen from scam victims likely represents the “vast majority” of that sum, according to Elliptic’s cofounder Tom Robinson. Yet even as the market serves Chinese-speaking scammers, it also boasts on the top of its website—in Mandarin—that it’s registered in Colorado.

“Xinbi Guarantee has served as a giant, purportedly US-incorporated illicit online marketplace for online scams that primarily offers money laundering services,” says Robinson. He adds, though, that Elliptic has also found a remarkable variety of other criminal offerings on the market: child-bearing surrogacy and egg donors, harassment services that offer to threaten or throw feces at any chosen victim, and even sex workers in their teens who are likely trafficking victims.

Xinbi Guarantee is the second such crime-friendly Chinese-language market that Robinson and his team of researchers have uncovered over the past year. Last July, they published a report on Huione Guarantee, a similar Cambodia-based service that Elliptic said in January had facilitated $24 billion in transactions—largely from crypto scammers—making it the biggest illicit online marketplace in history by Elliptic’s accounting. That market’s parent company, Huione Group, was added to a list of known money laundering operations by the US Treasury’s Financial Crimes Enforcement Network earlier this month in an attempt to limit its access to US financial institutions.

Telegram bans

After WIRED reached out to Telegram last week about the illicit activity taking place on Xinbi Guarantee’s and Huione Guarantee’s channels on its messaging platform, Telegram appears to have responded Monday by banning many of the central channels and administrator accounts used by both Xinbi Guarantee and Huione Guarantee. “Criminal activities like scamming or money laundering are forbidden by Telegram’s terms of service and are always removed whenever discovered,” Telegram spokesperson Remi Vaughn wrote to WIRED in a statement. “Communities previously reported to us by WIRED or included in reports published by Elliptic have all been taken down.”

Telegram had banned several of Huione Guarantee’s channels in February following an earlier Elliptic report on the marketplace, but Huione Guarantee quickly re-created them, and it’s not clear whether the new removals will prevent the two companies from rebuilding their presence on Telegram again, perhaps with new accounts or even new branding. “These are very lucrative businesses, and they’ll attempt to rebuild in some way,” Robinson said of the two marketplaces following Telegram’s latest purge.

Elliptic’s accounting of the total lifetime revenue of the biggest online black markets. Courtesy of Elliptic

Xinbi Guarantee didn’t respond to multiple requests for comment on Elliptic’s findings that WIRED sent to the market’s administrators on Telegram.

Like Huione Guarantee, Xinbi Guarantee has offered a similar “guarantee” model of enabling third-party vendors to offer services by requiring a deposit from them to prevent fraud. Yet it’s flown under the radar, even as it grew into one of the biggest hubs for crypto crime on the Internet. In terms of scale of transactions prior to Telegram’s crackdown, it was second only to Huione’s market, according to Elliptic.

Both services “offer a window into the China-based underground banking network,” Robinson says. “It’s another example of these huge Chinese-language ‘guaranteed’ marketplaces that have thrived for years.”

On Xinbi Guarantee, Elliptic found numerous posts from vendors offering to accept funds related to “quick kills,” “slow kills,” and “pig butchering” transactions, all different terms for crypto investment scams and other forms of fraud. In some cases, Robinson explains, these Xinbi Guarantee vendors offer bank accounts in the same country as the victim so that they can receive whatever payment they’re tricked into making, then pay the scammer in the cryptocurrency Tether. In other cases, the Xinbi Guarantee merchants offer to receive cryptocurrency payments and cash them out in the scammer’s local currency, such as Chinese renminbi.

Not just money laundering

Aside from Xinbi Guarantee’s central use as a cash-out point for crypto scammers, Elliptic also found that the market’s vendors offered other wares for scammers such as stolen data that could be used for finding victims, as well as services for registering SIM cards and Starlink Internet subscriptions through proxies.

North Korean state-sponsored cybercriminals also appear to have used the platform for money laundering. Elliptic found through blockchain analysis, for instance, that about $220,000 stolen from the Indian cryptocurrency exchange WazirX—the victim of a $235 million theft in July 2024, widely attributed to North Korean hackers—had flowed into Xinbi Guarantee in a series of transactions in November.
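Blockchain tracing of this kind can be modeled as a traversal of the transaction graph outward from a known-bad address. The toy sketch below uses entirely hypothetical addresses and amounts and shows only the naive version of the technique; production tools such as Elliptic's weight flows by amount and apply heuristics for mixers and exchanges:

```python
from collections import deque

# Toy transaction graph: address -> list of (recipient, amount) edges.
# All addresses and amounts here are hypothetical.
transactions = {
    "theft_addr": [("mixer_1", 120_000), ("mixer_2", 100_000)],
    "mixer_1":    [("xinbi_deposit", 120_000)],
    "mixer_2":    [("bystander", 40_000), ("xinbi_deposit", 60_000)],
}

def trace_tainted(graph, source):
    """Breadth-first search marking every address reachable from stolen funds."""
    tainted, queue = {source}, deque([source])
    while queue:
        for recipient, _amount in graph.get(queue.popleft(), []):
            if recipient not in tainted:
                tainted.add(recipient)
                queue.append(recipient)
    return tainted
```

Note that naive reachability also taints the "bystander" address that merely received change from a mixer, which is exactly why real tracing weights edges by amount rather than treating the graph as boolean.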

Those money-laundering and scam-enabling services, however, are far from the only shady offerings found on Xinbi Guarantee’s market. Elliptic also found listings for surrogate mothers and egg donors, with one post showing faceless pictures of the donor’s body. Other accounts have offered services that will, for a payment in Tether, place a funeral wreath at a target’s door, deface their home with graffiti, post damaging statements around their home, have someone verbally threaten them, throw feces at them, or even, most bizarrely, surround their home with AIDS patients. One posting suggested these AIDS patients would carry “case reports and needles for intimidation.”

Other listings have offered sex workers as young as 18 years old, noting the specific sex acts that are allowed and forbidden. Elliptic says that one of its researchers was even offered a 14-year-old by a Xinbi Guarantee merchant. (The account holder noted, however, that no transaction for sex with someone below the age of 18 would be guaranteed by Xinbi. The legal age of consent in China is 14.)

Exactly why Xinbi Guarantee is legally registered in the US remains a mystery. Its incorporation record on the Colorado Secretary of State’s website shows an address at an office park in the city of Aurora that has no external Xinbi branding. The company appears to have been registered there in August of 2022 by someone named “Mohd Shahrulnizam Bin Abd Manap.” (WIRED connected that name with several people in Malaysia but couldn’t determine which one might be Xinbi Guarantee’s registrant.) The listing is currently marked as “delinquent,” perhaps due to failure to file more recent paperwork to renew it.

For fledgling Chinese companies—legitimate and illegitimate—incorporating in the US is an increasingly common tactic for “projecting legitimacy,” says Jacob Sims, a visiting fellow at Harvard’s Asia Center who focuses on transnational Chinese crime. “If you have a US presence, you can also open US bank accounts,” Sims says. “You could potentially hire staff in the US. You could in theory have more formalized connections to US entities.” But he notes that the registration’s delinquent status may mean Xinbi Guarantee tried to make some sort of inroads in the US in the past but gave up.

While Telegram has served as the chief means of communication for the two markets, the stablecoin cryptocurrency Tether has served as their primary means of payment, Elliptic found. And despite Telegram’s new round of removals of their channels and accounts, Xinbi Guarantee and Huione Guarantee are far from the only companies to use Tether and Telegram to create essentially a new, largely Chinese-language darknet: Elliptic is tracking close to 30 similar marketplaces, Robinson says, though he declined to name others in the midst of the company’s investigations.

Just as Telegram shows new signs of cracking down on that sprawling black market, Tether, too, has the ability to disrupt criminal use of its services. Unlike other more decentralized cryptocurrencies such as Bitcoin, Tether can freeze payments when it identifies bad actors. Yet it’s not clear to what degree Tether has taken measures to stop Chinese-language crypto scammers and others on Xinbi Guarantee and Huione Guarantee from using its currency.

When WIRED wrote to Tether to ask about its role in those black markets, the company responded in a statement that it encourages “firms like Elliptic and other blockchain intelligence providers to share critical data with law enforcement so we can act swiftly and in coordination.”

“We are not passive observers—we are active players in the global fight against financial crime,” the Tether statement continued. “If you’re considering using Tether for illicit purposes, think again: it is the most traceable asset in existence. We will identify you, and we will work to ensure you are brought to justice.”

Despite that promise—and Telegram’s new effort to remove Huione Guarantee and Xinbi Guarantee from its platform—both tools have already been used to facilitate tens of billions of dollars in theft and other black market deals, much of it occurring in plain sight. The two largely illegal and very public markets have been “remarkable for both the scale at which they’re operating and also the brazenness,” says Harvard’s Jacob Sims.

Given that brazenness and the massive criminal fortunes at stake, expect both markets to attempt a revival in some form—and plenty of competitors to try to take their place atop the Chinese-language crypto crime economy.

This story originally appeared on wired.com.

Google introduces Advanced Protection mode for its most at-risk Android users

Google is adding a new security setting to Android to provide an extra layer of resistance against attacks that infect devices, tap calls traveling through insecure carrier networks, and deliver scams through messaging services.

On Tuesday, the company unveiled the Advanced Protection mode, most of which will be rolled out in the upcoming release of Android 16. The setting comes as mercenary malware sold by NSO Group and a cottage industry of other exploit sellers continues to thrive. These players provide attacks-as-a-service through end-to-end platforms that exploit zero-day vulnerabilities on targeted devices, infect them with advanced spyware, and then capture contacts, message histories, locations, and other sensitive information. Over the past decade, phones running fully updated versions of Android and iOS have routinely been hacked through these services.

A core suite of enhanced security features

Advanced Protection is Google’s latest answer to this type of attack. By flipping a single button in device settings, users can enable a host of protections that can thwart some of the most common techniques used in sophisticated hacks. In some cases, the protections hamper performance and capabilities of the device, so Google is recommending the new mode mainly for journalists, elected officials, and other groups who are most often targeted or have the most to lose when infected.

“With the release of Android 16, users who choose to activate Advanced Protection will gain immediate access to a core suite of enhanced security features,” Google’s product manager for Android Security, Il-Sung Lee, wrote. “Additional Advanced Protection features like Intrusion Logging, USB protection, the option to disable auto-reconnect to insecure networks, and integration with Scam Detection for Phone by Google will become available later this year.”

New attack can steal cryptocurrency by planting false memories in AI chatbots

The researchers wrote:

The implications of this vulnerability are particularly severe given that ElizaOS agents are designed to interact with multiple users simultaneously, relying on shared contextual inputs from all participants. A single successful manipulation by a malicious actor can compromise the integrity of the entire system, creating cascading effects that are both difficult to detect and mitigate. For example, on ElizaOS’s Discord server, various bots are deployed to assist users with debugging issues or engaging in general conversations. A successful context manipulation targeting any one of these bots could disrupt not only individual interactions but also harm the broader community relying on these agents for support and engagement.

This attack exposes a core security flaw: while plugins execute sensitive operations, they depend entirely on the LLM’s interpretation of context. If the context is compromised, even legitimate user inputs can trigger malicious actions. Mitigating this threat requires strong integrity checks on stored context to ensure that only verified, trusted data informs decision-making during plugin execution.
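The integrity checks the researchers call for could take the form of authenticating each context entry at the moment it is stored and refusing to surface any entry that fails verification. A minimal sketch, assuming a server-side secret users never see (a real deployment would also need key management and provenance tracking):

```python
import hashlib
import hmac
import json

SECRET = b"server-side-key"  # hypothetical; held by the host, never shown to users

def seal(entry):
    """Tag a context entry at the moment the system verifies and stores it."""
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "tag": tag}

def load_trusted(sealed):
    """Return the entry only if its tag still verifies; otherwise discard it."""
    payload = json.dumps(sealed["entry"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return sealed["entry"] if hmac.compare_digest(expected, sealed["tag"]) else None
```

An attacker who rewrites a stored entry (for instance, swapping in their own wallet address) cannot recompute the tag without the secret, so the poisoned memory is dropped before it ever reaches the model.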

In an email, ElizaOS creator Shaw Walters said the framework, like all natural-language interfaces, is designed “as a replacement, for all intents and purposes, for lots and lots of buttons on a webpage.” Just as a website developer should never include a button that gives visitors the ability to execute malicious code, so too should administrators implementing ElizaOS-based agents carefully limit what agents can do by creating allow lists that restrict an agent’s capabilities to a small set of pre-approved actions.

Walters continued:

From the outside it might seem like an agent has access to their own wallet or keys, but what they have is access to a tool they can call which then accesses those, with a bunch of authentication and validation between.

So for the intents and purposes of the paper, in the current paradigm, the situation is somewhat moot by adding any amount of access control to actions the agents can call, which is something we address and demo in our latest version of Eliza—BUT it hints at a much harder to deal with version of the same problem when we start giving the agent more computer control and direct access to the CLI terminal on the machine it’s running on. As we explore agents that can write new tools for themselves, containerization becomes a bit trickier, or we need to break it up into different pieces and only give the public facing agent small pieces of it… since the business case of this stuff still isn’t clear, nobody has gotten terribly far, but the risks are the same as giving someone that is very smart but lacking in judgment the ability to go on the internet. Our approach is to keep everything sandboxed and restricted per user, as we assume our agents can be invited into many different servers and perform tasks for different users with different information. Most agents you download off Github do not have this quality, the secrets are written in plain text in an environment file.

In response, Atharv Singh Patlan, the lead co-author of the paper, wrote: “Our attack is able to counteract any role based defenses. The memory injection is not that it would randomly call a transfer: it is that whenever a transfer is called, it would end up sending to the attacker’s address. Thus, when the ‘admin’ calls transfer, the money will be sent to the attacker.”
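The allow-list approach Walters describes can be sketched as a dispatch gate that refuses any action the operator has not pre-approved. The action names here are hypothetical:

```python
# Hypothetical action names; a real deployment enumerates its own handlers.
ALLOWED_ACTIONS = {"get_balance", "post_message"}

def dispatch(action, handlers, **kwargs):
    """Refuse any action the operator has not explicitly pre-approved."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not allow-listed")
    return handlers[action](**kwargs)
```

As Patlan's response points out, gating which actions may run does not by itself protect the parameters of an approved action, so an allow list complements integrity checks on stored context rather than replacing them.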

DOGE software engineer’s computer infected by info-stealing malware

Login credentials belonging to an employee at both the Cybersecurity and Infrastructure Security Agency and the Department of Government Efficiency have appeared in multiple public leaks from info-stealer malware, a strong indication that devices belonging to him have been hacked in recent years.

Kyle Schutt is a 30-something-year-old software engineer who, according to Dropsite News, gained access in February to a “core financial management system” belonging to the Federal Emergency Management Agency. As an employee of DOGE, Schutt accessed FEMA’s proprietary software for managing both disaster and non-disaster funding grants. In his role at CISA, he is likely privy to sensitive information regarding the security of civilian federal government networks and critical infrastructure throughout the US.

A steady stream of published credentials

According to journalist Micah Lee, user names and passwords for logging in to various accounts belonging to Schutt have been published at least four times since 2023 in logs from stealer malware. Stealer malware typically infects devices through trojanized apps, phishing, or software exploits. Besides pilfering login credentials, stealers can also log all keystrokes and capture or record screen output. The data is then sent to the attacker and, occasionally after that, can make its way into public credential dumps.

“I have no way of knowing exactly when Schutt’s computer was hacked, or how many times,” Lee wrote. “I don’t know nearly enough about the origins of these stealer log datasets. He might have gotten hacked years ago and the stealer log datasets were just published recently. But he also might have gotten hacked within the last few months.”

Lee went on to say that credentials belonging to a Gmail account known to belong to Schutt have appeared in 51 data breaches and five pastes tracked by breach notification service Have I Been Pwned. Among the breaches that supplied the credentials are one from 2013 that pilfered password data for 3 million Adobe account holders, one from 2016 that stole credentials for 164 million LinkedIn users, a 2020 breach affecting 167 million users of Gravatar, and a breach last year of the conservative news site The Post Millennial.
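Have I Been Pwned's companion Pwned Passwords service shows how such lookups can be made without disclosing the secret being checked: the client sends only the first five hex characters of a SHA-1 digest and compares the remaining suffix locally (k-anonymity). A minimal client sketch; the endpoint URL reflects the service's public documentation, but treat the details as an assumption to verify:

```python
import hashlib
from urllib.request import urlopen

def sha1_range_parts(password):
    """Split the uppercase SHA-1 digest into the 5-char prefix that is sent
    to the API and the 35-char suffix that never leaves the machine."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(password):
    """Fetch all leaked suffixes sharing our prefix and compare locally."""
    prefix, suffix = sha1_range_parts(password)
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode()
    # Each response line is "SUFFIX:count"; a match means the password leaked.
    return any(line.split(":")[0] == suffix for line in body.splitlines())
```

Because only a five-character prefix is transmitted, the service never learns which password, out of the hundreds sharing that prefix, was actually being checked.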

WhatsApp provides no cryptographic management for group messages

The flow of adding new members to a WhatsApp group message is:

  • A group member sends an unsigned message to the WhatsApp server that designates which users are group members, for instance, Alice, Bob, and Charlie
  • The server informs all existing group members that Alice, Bob, and Charlie have been added
  • The existing members have the option of deciding whether to accept messages from Alice, Bob, and Charlie, and whether messages exchanged with them should be encrypted

With no cryptographic signatures verifying an existing member who wants to add a new member, additions can be made by anyone with the ability to control the server or the messages that flow into it. Using the common fictional scenario for illustrating end-to-end encryption, this lack of cryptographic assurance leaves open the possibility that Mallory can join a group and gain access to the human-readable messages exchanged there.

WhatsApp isn’t the only messenger lacking cryptographic assurances for new group members. In 2022, a team that included some of the same researchers that analyzed WhatsApp found that Matrix—an open source and proprietary platform for chat and collaboration clients and servers—also provided no cryptographic means for ensuring only authorized members join a group. The Telegram messenger, meanwhile, offers no end-to-end encryption for group messages, making the app among the weakest for ensuring the confidentiality of group messages.

By contrast, the open source Signal messenger provides a cryptographic assurance that only an existing group member designated as the group admin can add new members. In an email, researcher Benjamin Dowling, also of King’s College, explained:

Signal implements “cryptographic group management.” Roughly this means that the administrator of a group, a user, signs a message along the lines of “Alice, Bob and Charley are in this group” to everyone else. Then, everybody else in the group makes their decision on who to encrypt to and who to accept messages from based on these cryptographically signed messages, [meaning] who to accept as a group member. The system used by Signal is a bit different [than WhatsApp], since [Signal] makes additional efforts to avoid revealing the group membership to the server, but the core principles remain the same.

On a high-level, in Signal, groups are associated with group membership lists that are stored on the Signal server. An administrator of the group generates a GroupMasterKey that is used to make changes to this group membership list. In particular, the GroupMasterKey is sent to other group members via Signal, and so is unknown to the server. Thus, whenever an administrator wants to make a change to the group (for instance, invite another user), they need to create an updated membership list (authenticated with the GroupMasterKey) telling other users of the group who to add. Existing users are notified of the change and update their group list, and perform the appropriate cryptographic operations with the new member so the existing member can begin sending messages to the new members as part of the group.
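Dowling's description can be caricatured in a few lines by substituting an HMAC under a shared key for Signal's actual credential scheme, which is considerably more elaborate; everything below is a simplification for illustration only:

```python
import hashlib
import hmac

# Stand-in for Signal's GroupMasterKey: shared among members, unknown to the server.
GROUP_MASTER_KEY = b"shared-only-with-members"

def publish_membership(members):
    """Admin authenticates the roster so members can reject forged updates."""
    roster = ",".join(sorted(members)).encode()
    tag = hmac.new(GROUP_MASTER_KEY, roster, hashlib.sha256).hexdigest()
    return {"members": sorted(members), "tag": tag}

def accept_update(update):
    """A member only acts on a roster whose tag verifies under the group key."""
    roster = ",".join(sorted(update["members"])).encode()
    expected = hmac.new(GROUP_MASTER_KEY, roster, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, update["tag"])
```

A server (or anyone controlling traffic into it) that appends an extra member cannot produce a valid tag without the key, so honest clients simply refuse the update, which is the property the researchers found missing from WhatsApp's flow.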

Most messaging apps, including Signal, don’t certify the identity of their users. That means there’s no way Signal can verify that an account named Alice does, in fact, belong to Alice. It’s fully possible that Mallory could create an account and name it Alice. (As an aside, and in sharp contrast to Signal, the account members that belong to a given WhatsApp group are visible to insiders, hackers, and anyone with a valid subpoena.)

Open source project curl is sick of users submitting “AI slop” vulnerabilities

Ars has reached out to HackerOne for comment and will update this post if we get a response.

“More tools to strike down this behavior”

In an interview with Ars, Stenberg said he was glad his post—which generated 200 comments and nearly 400 reposts as of Wednesday morning—was getting around. “I’m super happy that the issue [is getting] attention so that possibly we can do something about it [and] educate the audience that this is the state of things,” Stenberg said. “LLMs cannot find security problems, at least not like they are being used here.”

This week has seen four such misguided, obviously AI-generated vulnerability reports seemingly seeking either reputation or bug bounty funds, Stenberg said. “One way you can tell is it’s always such a nice report. Friendly phrased, perfect English, polite, with nice bullet-points … an ordinary human never does it like that in their first writing,” he said.

Some AI reports are easier to spot than others. One submitter accidentally pasted their prompt into the report, Stenberg said, “and he ended it with, ‘and make it sound alarming.'”

Stenberg said he had “talked to [HackerOne] before about this” and has reached out to the service this week. “I would like them to do something, something stronger, to act on this. I would like help from them to make the infrastructure around [AI tools] better and give us more tools to strike down this behavior,” he said.

In the comments of his post, Stenberg, trading comments with Tobias Heldt of open source security firm XOR, suggested that bug bounty programs could potentially use “existing networks and infrastructure.” Security reporters paying a bond to have a report reviewed “could be one way to filter signals and reduce noise,” Heldt said. Elsewhere, Stenberg said that while AI reports are “not drowning us, [the] trend is not looking good.”

Stenberg has previously blogged on his own site about AI-generated vulnerability reports, with more details on what they look like and what they get wrong. Seth Larson, security developer-in-residence at the Python Software Foundation, added to Stenberg’s findings with his own examples and suggested actions, as noted by The Register.

“If this is happening to a handful of projects that I have visibility for, then I suspect that this is happening on a large scale to open source projects,” Larson wrote in December. “This is a very concerning trend.”

Jury orders NSO to pay $167 million for hacking WhatsApp users

A jury has awarded WhatsApp $167 million in punitive damages in a case the company brought against Israel-based NSO Group for exploiting a software vulnerability that hijacked the phones of thousands of users.

The verdict, reached Tuesday, comes as a major victory not just for Meta-owned WhatsApp but also for privacy- and security-rights advocates who have long criticized the practices of NSO and other exploit sellers. The jury also awarded WhatsApp $444,719 in compensatory damages.

Clickless exploit

WhatsApp sued NSO in 2019 for an attack that targeted roughly 1,400 mobile phones belonging to attorneys, journalists, human-rights activists, political dissidents, diplomats, and senior foreign government officials. NSO, which works on behalf of governments and law enforcement authorities in various countries, exploited a critical WhatsApp vulnerability that allowed it to install NSO’s proprietary spyware Pegasus on iOS and Android devices. The clickless exploit worked by placing a call to a target’s app. A target did not have to answer the call to be infected.

“Today’s verdict in WhatsApp’s case is an important step forward for privacy and security as the first victory against the development and use of illegal spyware that threatens the safety and privacy of everyone,” WhatsApp said in a statement. “Today, the jury’s decision to force NSO, a notorious foreign spyware merchant, to pay damages is a critical deterrent to this malicious industry against their illegal acts aimed at American companies and the privacy and security of the people we serve.”

NSO created WhatsApp accounts in 2018 and used them a year later to initiate calls that exploited the critical vulnerability on phones belonging to, among others, 100 members of “civil society” from 20 countries, according to an investigation that research group Citizen Lab performed on behalf of WhatsApp. The calls passed through WhatsApp servers and injected malicious code into the memory of targeted devices. The targeted phones would then use WhatsApp servers to connect to malicious servers maintained by NSO.

Google: Governments are using zero-day hacks more than ever

Governments hacking enterprise

A few years ago, zero-day attacks almost exclusively targeted end users. In 2021, GTIG spotted 95 zero-days, and 71 of them were deployed against user systems like browsers and smartphones. In 2024, 33 of the 75 total vulnerabilities were aimed at enterprise technologies and security systems. At 44 percent of the total, this is the highest share of enterprise focus for zero-days yet.

GTIG says that it detected zero-day attacks targeting 18 different enterprise entities, including Microsoft, Google, and Ivanti. This is slightly lower than the 22 firms targeted by zero-days in 2023, but it’s a big increase compared to just a few years ago, when seven firms were hit with zero-days in 2020.

The nature of these attacks often makes it hard to trace them to the source, but Google says it managed to attribute 34 of the 75 zero-day attacks. The largest single category with 10 detections was traditional state-sponsored espionage, which aims to gather intelligence without a financial motivation. China was the largest single contributor here. GTIG also identified North Korea as the perpetrator in five zero-day attacks, but these campaigns also had a financial motivation (usually stealing crypto).

That’s already a lot of government-organized hacking, but GTIG also notes that eight of the serious hacks it detected came from commercial surveillance vendors (CSVs), firms that create hacking tools and claim to only do business with governments. So it’s fair to include these with other government hacks. This includes companies like NSO Group and Cellebrite, with the former already subject to US sanctions from its work with adversarial nations.

In all, this adds up to 23 of the 34 attributed attacks coming from governments. There were also a few attacks that didn’t technically originate from governments but still involved espionage activities, suggesting a connection to state actors. Beyond that, Google spotted five non-government financially motivated zero-day campaigns that did not appear to engage in spying.

Google’s security researchers say they expect zero-day attacks to continue increasing over time. These stealthy vulnerabilities can be expensive to obtain or discover, but the lag time before anyone notices the threat can reward hackers with a wealth of information (or money). Google recommends enterprises continue scaling up efforts to detect and block malicious activities, while also designing systems with redundancy and stricter limits on access. As for the average user, well, cross your fingers.



AI-generated code could be a disaster for the software supply chain. Here’s why.

AI-generated computer code is rife with references to non-existent third-party libraries, newly published research shows. That creates a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages capable of stealing data, planting backdoors, and carrying out other nefarious actions.

The study, which used 16 of the most widely used large language models to generate 576,000 code samples, found that 440,000 of the package dependencies they contained were “hallucinated,” meaning they were non-existent. Open source models hallucinated the most, with 21 percent of the dependencies linking to non-existent libraries. A dependency is an essential code component that a separate piece of code requires to work properly. Dependencies save developers the hassle of rewriting code and are an essential part of the modern software supply chain.

Package hallucination flashbacks

These non-existent dependencies represent a threat to the software supply chain by exacerbating so-called dependency confusion attacks. These attacks work by causing a software package to access the wrong component dependency, for instance by publishing a malicious package and giving it the same name as the legitimate one but with a later version stamp. Software that depends on the package will, in some cases, choose the malicious version rather than the legitimate one because the former appears to be more recent.

Also known as package confusion, this form of attack was first demonstrated in 2021 in a proof-of-concept exploit that executed counterfeit code on networks belonging to some of the biggest companies on the planet, Apple, Microsoft, and Tesla included. It’s one type of technique used in software supply-chain attacks, which aim to poison software at its very source in an attempt to infect all users downstream.
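The version-picking logic that dependency confusion exploits can be sketched in a few lines of Python. This is a toy model, not any real package manager's code: the index URLs, package name, and versions are hypothetical, and the resolver stands in for the risky behavior some installers have exhibited of preferring the highest version seen across all configured indexes.

```python
# Illustrative resolver: like some package managers, it picks the
# highest version seen across every configured index, regardless of
# whether the index is internal or public.
def resolve(name, candidates):
    """candidates: list of (index_url, version_tuple) pairs."""
    return max(candidates, key=lambda c: c[1])

# Hypothetical internal package vs. an attacker's public copy with an
# inflated version number.
internal = ("https://pypi.internal.example", (1, 4, 0))
public_malicious = ("https://pypi.org", (99, 0, 0))

chosen = resolve("acme-utils", [internal, public_malicious])
# The attacker's package wins because 99.0.0 > 1.4.0.
```

Pinning exact versions and pointing the installer at a single trusted index are the standard ways to close this gap.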

“Once the attacker publishes a package under the hallucinated name, containing some malicious code, they rely on the model suggesting that name to unsuspecting users,” Joseph Spracklen, a University of Texas at San Antonio Ph.D. student and lead researcher, told Ars via email. “If a user trusts the LLM’s output and installs the package without carefully verifying it, the attacker’s payload, hidden in the malicious package, would be executed on the user’s system.”
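One low-tech defense against the scenario Spracklen describes is to refuse to install any LLM-suggested package that isn't already on a vetted allowlist. A minimal sketch, with an invented allowlist and an invented package name standing in for a hallucinated dependency:

```python
# Hypothetical allowlist of packages the team has already vetted.
APPROVED = {"requests", "numpy", "flask"}

def vet(suggested):
    """Split LLM-suggested package names into installable and suspect."""
    ok = [p for p in suggested if p.lower() in APPROVED]
    suspect = [p for p in suggested if p.lower() not in APPROVED]
    return ok, suspect

# "flask-easy-auth" is an invented name playing the role of a
# hallucination; it gets flagged for manual review instead of installed.
ok, suspect = vet(["requests", "flask-easy-auth"])
```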



iOS and Android juice jacking defenses have been trivial to bypass for years


SON OF JUICE JACKING ARISES

New ChoiceJacking attack allows malicious chargers to steal data from phones.

Credit: Aurich Lawson | Getty Images


About a decade ago, Apple and Google started updating iOS and Android, respectively, to make them less susceptible to “juice jacking,” a form of attack that could surreptitiously steal data or execute malicious code when users plug their phones into special-purpose charging hardware. Now, researchers are revealing that, for years, the mitigations have suffered from a fundamental defect that has made them trivial to bypass.

“Juice jacking” was coined in a 2011 article on KrebsOnSecurity detailing an attack demonstrated at a Defcon security conference at the time. Juice jacking works by equipping a charger with hidden hardware that can access files and other internal resources of phones, in much the same way that a computer can when a user connects it to the phone.

An attacker would then make the chargers available in airports, shopping malls, or other public venues for use by people looking to recharge depleted batteries. While the charger was ostensibly only providing electricity to the phone, it was also secretly downloading files or running malicious code on the device behind the scenes. Starting in 2012, both Apple and Google tried to mitigate the threat by requiring users to click a confirmation button on their phones before a computer—or a computer masquerading as a charger—could access files or execute code on the phone.

The logic behind the mitigation was rooted in a key portion of the USB protocol that, in the parlance of the specification, dictates that a USB port can act as either a “host” or a “peripheral” at any given time, but not both. In the context of phones, this meant they could either:

  • Host the device on the other end of the USB cord—for instance, if a user connects a thumb drive or keyboard. In this scenario, the phone is the host that has access to the internals of the drive, keyboard or other peripheral device.
  • Act as a peripheral device that’s hosted by a computer or malicious charger, which under the USB paradigm is a host that has system access to the phone.
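The either-or role model can be sketched as a toy state machine. The class and names below are illustrative, not taken from any real USB stack; the point is that a port holds exactly one role at a time, and that USB Power Delivery provides a sanctioned way to swap.

```python
from enum import Enum

class Role(Enum):
    HOST = "host"              # phone reads a thumb drive or keyboard
    PERIPHERAL = "peripheral"  # phone is accessed by a computer or charger

class UsbPort:
    def __init__(self):
        # A phone plugged into a charger starts out as the peripheral.
        self.role = Role.PERIPHERAL

    def data_role_swap(self):
        # USB PD lets the two ends trade roles; this is the mechanism
        # ChoiceJacking later abuses.
        self.role = Role.HOST if self.role is Role.PERIPHERAL else Role.PERIPHERAL

port = UsbPort()
port.data_role_swap()
# The port now holds the host role, and only that role, as the spec requires.
```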

An alarming state of USB security

Researchers at the Graz University of Technology in Austria recently made a discovery that completely undermines the premise behind the countermeasure: The mitigations rest on the assumption that a USB host can’t inject input that autonomously approves the confirmation prompt. Given the restriction against a USB device simultaneously acting as a host and a peripheral, the premise seemed sound. The trust models built into both iOS and Android, however, present loopholes that can be exploited to defeat the protections. The researchers went on to devise ChoiceJacking, the first known attack to defeat juice-jacking mitigations.

“We observe that these mitigations assume that an attacker cannot inject input events while establishing a data connection,” the researchers wrote in a paper scheduled to be presented in August at the Usenix Security Symposium in Seattle. “However, we show that this assumption does not hold in practice.”

The researchers continued:

We present a platform-agnostic attack principle and three concrete attack techniques for Android and iOS that allow a malicious charger to autonomously spoof user input to enable its own data connection. Our evaluation using a custom cheap malicious charger design reveals an alarming state of USB security on mobile platforms. Despite vendor customizations in USB stacks, ChoiceJacking attacks gain access to sensitive user files (pictures, documents, app data) on all tested devices from 8 vendors including the top 6 by market share.

In response to the findings, Apple updated the confirmation dialogs in last month’s release of iOS/iPadOS 18.4 to require authentication in the form of a PIN or password. While the researchers were investigating their ChoiceJacking attacks last year, Google independently updated its confirmation dialog with the release of Android 15 in November. The researchers say the new mitigations work as expected on fully updated Apple and Android devices. Given the fragmentation of the Android ecosystem, however, many Android devices remain vulnerable.

All three of the ChoiceJacking techniques defeat the original Android juice-jacking mitigations. One of them also works against those defenses in Apple devices. In all three, the charger acts as a USB host to trigger the confirmation prompt on the targeted phone.

The attacks then exploit various weaknesses in the OS that allow the charger to autonomously inject “input events” that can enter text or click buttons presented in screen prompts as if the user had done so directly into the phone. In all three, the charger eventually gains two conceptual channels to the phone: (1) an input one allowing it to spoof user consent and (2) a file access connection that can steal files.

An illustration of ChoiceJacking attacks. (1) The victim device is attached to the malicious charger. (2) The charger establishes an extra input channel. (3) The charger initiates a data connection. User consent is needed to confirm it. (4) The charger uses the input channel to spoof user consent. Credit: Draschbacher et al.

It’s a keyboard, it’s a host, it’s both

In the ChoiceJacking variant that defeats both Apple- and Google-devised juice-jacking mitigations, the charger starts as a USB keyboard or a similar peripheral device. It sends keyboard input over USB that invokes simple key presses, such as arrow up or down, but also more complex key combinations that trigger settings or open a status bar.

The input establishes a Bluetooth connection to a second, miniaturized keyboard hidden inside the malicious charger. The charger then invokes USB Power Delivery, a standard available in USB-C connectors that lets the two connected devices negotiate their power and data roles through exchanged messages. Using it to trade data roles is known as a USB PD Data Role Swap.

A simulated ChoiceJacking charger. Bidirectional USB lines allow for data role swaps. Credit: Draschbacher et al.

With the charger now acting as a host, it triggers the file access consent dialog. At the same time, the charger still maintains its role as a peripheral device that acts as a Bluetooth keyboard that approves the file access consent dialog.

The full steps for the attack, provided in the Usenix paper, are:

1. The victim device is connected to the malicious charger. The device has its screen unlocked.

2. At a suitable moment, the charger performs a USB PD Data Role (DR) Swap. The mobile device now acts as a USB host, the charger acts as a USB input device.

3. The charger generates input to ensure that BT is enabled.

4. The charger navigates to the BT pairing screen in the system settings to make the mobile device discoverable.

5. The charger starts advertising as a BT input device.

6. By constantly scanning for newly discoverable Bluetooth devices, the charger identifies the BT device address of the mobile device and initiates pairing.

7. Through the USB input device, the charger accepts the Yes/No pairing dialog appearing on the mobile device. The Bluetooth input device is now connected.

8. The charger sends another USB PD DR Swap. It is now the USB host, and the mobile device is the USB device.

9. As the USB host, the charger initiates a data connection.

10. Through the Bluetooth input device, the charger confirms its own data connection on the mobile device.
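The ten steps reduce to a small amount of state manipulation. The following sketch models only the logic of the sequence; the dictionary keys and the helper function are invented stand-ins for real USB and Bluetooth machinery, not anything from the paper's tooling.

```python
phone = {"usb_role": "device", "bt_paired": False, "data_connection": False}

def dr_swap(p):
    # USB PD Data Role Swap: host and device trade roles (steps 2 and 8).
    p["usb_role"] = "host" if p["usb_role"] == "device" else "device"

dr_swap(phone)            # step 2: the phone becomes the USB host
# Steps 3-7: the charger, acting as a USB keyboard, drives the UI to
# enable Bluetooth, make the phone discoverable, and accept pairing.
phone["bt_paired"] = True
dr_swap(phone)            # step 8: the charger is the USB host again
# Steps 9-10: the charger requests a data connection, and the paired
# Bluetooth keyboard taps the consent dialog on the attacker's behalf.
if phone["bt_paired"] and phone["usb_role"] == "device":
    phone["data_connection"] = True
```

The two role swaps are what give the charger both channels at once: input via Bluetooth and file access via USB.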

This technique works against all but one of the 11 phone models tested, with the holdout being an Android device running the Vivo Funtouch OS, which doesn’t fully support the USB PD protocol. The attacks against the 10 remaining models take about 25 to 30 seconds to establish the Bluetooth pairing, depending on the phone model being hacked. The attacker then has read and write access to files stored on the device for as long as it remains connected to the charger.

Two more ways to hack Android

The two other members of the ChoiceJacking family work only against the juice-jacking mitigations that Google put into Android. In the first, the malicious charger invokes the Android Open Access Protocol, which allows a USB host to act as an input device when the host sends a special message that puts it into accessory mode.

The protocol specifically dictates that while in accessory mode, a USB host can no longer respond to other USB interfaces, such as the Picture Transfer Protocol for transferring photos and videos and the Media Transfer Protocol that enables transferring files in other formats. Despite the restriction, all of the Android devices tested violated the specification by accepting AOAP messages even when the USB host hadn’t been put into accessory mode. The charger can exploit this implementation flaw to autonomously complete the required user confirmations.

The remaining ChoiceJacking technique exploits a race condition in the Android input dispatcher by flooding it with a specially crafted sequence of input events. The dispatcher puts each event into a queue and processes them one by one, waiting for all previous input events to be fully processed before acting on a new one.

“This means that a single process that performs overly complex logic in its key event handler will delay event dispatching for all other processes or global event handlers,” the researchers explained.

They went on to note, “A malicious charger can exploit this by starting as a USB peripheral and flooding the event queue with a specially crafted sequence of key events. It then switches its USB interface to act as a USB host while the victim device is still busy dispatching the attacker’s events. These events therefore accept user prompts for confirming the data connection to the malicious charger.”
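The race the researchers describe can be illustrated with a toy serial dispatcher. The event names and timing here are invented; the point is that events queued while the charger was still a peripheral are dispatched only after the role switch, when the consent prompt is already on screen.

```python
from collections import deque

# While still a USB peripheral, the charger floods the queue: slow
# events delay dispatch, and a final "accept" event lies in wait.
queue = deque(["KEY_SLOW", "KEY_SLOW", "KEY_SLOW", "KEY_ACCEPT"])

# The charger now switches to USB host; the data-connection consent
# prompt appears while the attacker's events are still queued.
prompt_visible = True
accepted = False

# The serial dispatcher drains the backlog one event at a time, so the
# stale "accept" lands on the freshly raised prompt.
while queue:
    event = queue.popleft()
    if event == "KEY_ACCEPT" and prompt_visible:
        accepted = True
```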

The Usenix paper provides the following matrix showing which devices tested in the research are vulnerable to which attacks.

The susceptibility of tested devices to all three ChoiceJacking attack techniques. Credit: Draschbacher et al.

User convenience over security

In an email, the researchers said that the fixes provided by Apple and Google successfully blunt ChoiceJacking attacks in iPhones, iPads, and Pixel devices. Many Android devices made by other manufacturers, however, remain vulnerable because they have yet to update their devices to Android 15. Other Android devices—most notably those from Samsung running the One UI 7 software interface—don’t implement the new authentication requirement, even when running on Android 15. The omission leaves these models vulnerable to ChoiceJacking. In an email, principal paper author Florian Draschbacher wrote:

The attack can therefore still be exploited on many devices, even though we informed the manufacturers about a year ago and they acknowledged the problem. The reason for this slow reaction is probably that ChoiceJacking does not simply exploit a programming error. Rather, the problem is more deeply rooted in the USB trust model of mobile operating systems. Changes here have a negative impact on the user experience, which is why manufacturers are hesitant. [It] means for enabling USB-based file access, the user doesn’t need to simply tap YES on a dialog but additionally needs to present their unlock PIN/fingerprint/face. This inevitably slows down the process.

The biggest threat posed by ChoiceJacking is to Android devices that have been configured to enable USB debugging. Developers often turn on this option so they can troubleshoot problems with their apps, but many non-developers enable it so they can install apps from their computer, root their devices so they can install a different OS, transfer data between devices, and recover bricked phones. Turning it on requires a user to flip a switch in Settings > System > Developer options.

If a phone has USB Debugging turned on, ChoiceJacking can gain shell access through the Android Debug Bridge. From there, an attacker can install apps, access the file system, and execute malicious binary files. That level of access is much higher than what the Picture Transfer Protocol and Media Transfer Protocol provide, which only allow read and write access to user files.

The vulnerabilities are tracked as:

    • CVE-2025-24193 (Apple)
    • CVE-2024-43085 (Google)
    • CVE-2024-20900 (Samsung)
    • CVE-2024-54096 (Huawei)

A Google spokesperson confirmed that the weaknesses were patched in Android 15 but didn’t speak to the installed base of Android devices from other manufacturers, which either don’t support the new OS or don’t implement the new authentication requirement it makes possible. Apple declined to comment for this post.

Word that juice-jacking-style attacks are once again possible on some Android devices and out-of-date iPhones is likely to breathe new life into the constant warnings from federal authorities, tech pundits, news outlets, and local and state government agencies that phone users should steer clear of public charging stations. Special-purpose cords that disconnect data access remain a viable mitigation, but the researchers noted that “data blockers also interfere with modern power negotiation schemes, thereby degrading charge speed.”

As I reported in 2023, these warnings are mostly scaremongering, and the advent of ChoiceJacking does little to change that, given that there are no documented cases of such attacks in the wild. That said, people using Android devices that don’t support Google’s new authentication requirement may want to refrain from public charging.


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.
