Biz & IT


ClickFix may be the biggest security threat your family has never heard of

Another campaign, documented by Sekoia, targeted Windows users. The attackers behind it first compromise a hotel’s account for Booking.com or another online travel service. Using the information stored in the compromised accounts, the attackers contact people with pending reservations, an ability that builds immediate trust with many targets, who are eager to comply with instructions, lest their stay be canceled.

The site eventually presents a fake CAPTCHA notification that bears an almost identical look and feel to those required by content delivery network Cloudflare. To prove there’s a human behind the keyboard, the notification instructs the target to copy a string of text and paste it into the Windows terminal. With that, the machine is infected with malware tracked as PureRAT.

Push Security, meanwhile, reported a ClickFix campaign with a page “adapting to the device that you’re visiting from.” Depending on the OS, the page delivers payloads for Windows or macOS. Many of these payloads, Microsoft said, are LOLbins, short for living-off-the-land binaries: legitimate executables built into the operating system that attackers repurpose. Because the attack relies solely on these native capabilities and writes no malicious files to disk, endpoint protection is further hamstrung.

The commands, which are often Base64-encoded to make them unreadable to humans, are typically copied to the clipboard from inside the browser sandbox, a part of most browsers that accesses the Internet in an isolated environment designed to protect devices from malware or harmful scripts. Many security tools are unable to observe these actions or flag them as potentially malicious.
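To see why a Base64-encoded command is unreadable at a glance, here is a minimal Python sketch using a harmless placeholder command rather than an actual ClickFix payload. The encoding step is the only part drawn from real campaigns; strings of this kind are commonly passed to PowerShell’s -EncodedCommand flag, which expects UTF-16LE text encoded as Base64.

```python
import base64

# A harmless stand-in for the kind of command a ClickFix page places on the
# clipboard; real lures hide a malware download-and-execute command here.
readable_command = 'Write-Output "hello from a harmless example"'

# PowerShell's -EncodedCommand flag expects UTF-16LE text encoded as Base64,
# which is why the resulting string is unreadable at a glance.
encoded = base64.b64encode(readable_command.encode("utf-16-le")).decode("ascii")
print(encoded)                                        # opaque blob the victim is told to paste
print(base64.b64decode(encoded).decode("utf-16-le"))  # decodes back to the original text
```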

The attacks can also be effective because of a general lack of awareness. Many people have learned over the years to be suspicious of links in emails or messaging apps. In many users’ minds, that precaution doesn’t extend to sites that instruct them to copy a piece of text and paste it into an unfamiliar window. When the instructions come in emails from a known hotel or at the top of Google results, targets can be further caught off guard.

With many families gathering in the coming weeks for various holiday dinners, ClickFix scams are worth mentioning to those family members who ask for security advice. Microsoft Defender and other endpoint protection programs offer some defenses against these attacks, but they can, in some cases, be bypassed. That means that, for now, awareness is the best countermeasure.

ClickFix may be the biggest security threat your family has never heard of Read More »


Researchers isolate memorization from problem-solving in AI neural networks


The hills and valleys of knowledge

Basic arithmetic ability lives in the memorization pathways, not logic circuits.

When engineers build AI language models like GPT-5 from training data, at least two major processing features emerge: memorization (reciting exact text they’ve seen before, like famous quotes or passages from books) and what you might call “reasoning” (solving new problems using general principles). New research from AI startup Goodfire.ai provides the first potentially clear evidence that these different functions actually work through completely separate neural pathways in the model’s architecture.

The researchers discovered that this separation proves remarkably clean. In a preprint paper released in late October, they reported that when they removed the memorization pathways, models lost 97 percent of their ability to recite training data verbatim but kept nearly all of their “logical reasoning” ability intact.

For example, at layer 22 in Allen Institute for AI’s OLMo-7B language model, the researchers ranked all the weight components (the mathematical values that process information) from high to low based on a measure called “curvature” (which we’ll explain more below). When they examined these ranked components, the bottom 50 percent of weight components showed 23 percent higher activation on memorized data, while the top 10 percent showed 26 percent higher activation on general, non-memorized text.

In other words, the components that specialize in memorization clustered at the bottom of their ranking, while problem-solving components clustered at the top. This mechanistic split enabled the researchers to surgically remove memorization while preserving other capabilities. They found they could delete the bottom-ranked components to eliminate memorization while keeping the top-ranked ones that handle problem-solving.

Perhaps most surprisingly, the researchers found that arithmetic operations seem to share the same neural pathways as memorization rather than logical reasoning. When they removed memorization circuits, mathematical performance plummeted to 66 percent while logical tasks remained nearly untouched. This discovery may explain why AI language models notoriously struggle with math without the use of external tools. They’re attempting to recall arithmetic from a limited memorization table rather than computing it, like a student who memorized times tables but never learned how multiplication works. The finding suggests that at current scales, language models treat “2+2=4” more like a memorized fact than a logical operation.

It’s worth noting that “reasoning” in AI research covers a spectrum of abilities that don’t necessarily match what we might call reasoning in humans. The logical reasoning that survived memory removal in this latest research includes tasks like evaluating true/false statements and following if-then rules, which are essentially applying learned patterns to new inputs. This also differs from the deeper “mathematical reasoning” required for proofs or novel problem-solving, which current AI models struggle with even when their pattern-matching abilities remain intact.

Looking ahead, if these information removal techniques are developed further, AI companies could one day remove, say, copyrighted content, private information, or harmful memorized text from a neural network without destroying its ability to perform transformative tasks. However, since neural networks store information in distributed ways that are still not completely understood, the researchers say that, for the time being, their method “cannot guarantee complete elimination of sensitive information.” These are early steps in a new research direction for AI.

Traveling the neural landscape

To understand how researchers from Goodfire distinguished memorization from reasoning in these neural networks, it helps to know about a concept in AI called the “loss landscape,” a way of visualizing how wrong or right an AI model’s predictions are as you adjust its internal settings (which are called “weights”).

Imagine you’re tuning a complex machine with millions of dials. The “loss” measures the number of mistakes the machine makes. High loss means many errors, low loss means few errors. The “landscape” is what you’d see if you could map out the error rate for every possible combination of dial settings.

During training, AI models essentially “roll downhill” in this landscape (a process called gradient descent), adjusting their weights to find the valleys where they make the fewest mistakes. The weights found this way are what ultimately produce the model’s outputs, such as answers to questions.
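As a concrete illustration of that “rolling downhill” process, here is a minimal Python sketch of gradient descent on a toy one-dimensional loss landscape. The quadratic loss function is an invented example for intuition only, not anything from the paper.

```python
# Toy one-dimensional "loss landscape": loss as a function of a single weight w.
def loss(w):
    return (w - 3.0) ** 2 + 1.0      # a valley whose bottom sits at w = 3

def grad(w):
    return 2.0 * (w - 3.0)           # the slope of the landscape at w

w = -5.0                             # start partway up a hillside
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)     # take a small step downhill

print(f"final weight: {w:.3f}, final loss: {loss(w):.3f}")   # approaches w = 3, loss = 1
```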

Figure 1: Overview of our approach. We collect activations and gradients from a sample of training data (a), which allows us to approximate loss curvature w.r.t. a weight matrix using K-FAC (b). We decompose these weight matrices into components (each the same size as the matrix), ordered from high to low curvature. In language models, we show that data from different tasks interacts with parts of the spectrum of components differently (c).

Figure 1 from the paper “From Memorization to Reasoning in the Spectrum of Loss Curvature.” Credit: Merullo et al.

The researchers analyzed the “curvature” of the loss landscapes of particular AI language models, measuring how sensitive the model’s performance is to small changes in different neural network weights. Sharp peaks and valleys represent high curvature (where tiny changes cause big effects), while flat plains represent low curvature (where changes have minimal impact). They used these curvature values to rank the weight components from high to low, as mentioned earlier.

Using a technique called K-FAC (Kronecker-Factored Approximate Curvature), they found that individual memorized facts create sharp spikes in this landscape, but because each memorized item spikes in a different direction, when averaged together they create a flat profile. Meanwhile, reasoning abilities that many different inputs rely on maintain consistent moderate curves across the landscape, like rolling hills that remain roughly the same shape regardless of the direction from which you approach them.
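The following toy Python sketch shows what “curvature” means here by estimating the second derivative of a made-up two-parameter loss along two directions, one sharp and one flat. It is a simplification for intuition only; it is not K-FAC, which approximates curvature across millions of weights at once.

```python
import numpy as np

# Toy two-parameter loss: one sharp direction (w[0]) and one flat direction (w[1]),
# standing in for idiosyncratic "memorization" directions vs. broadly shared ones.
def loss(w):
    return 50.0 * w[0] ** 2 + 0.1 * w[1] ** 2

def curvature_along(w, direction, eps=1e-3):
    """Approximate the second derivative of the loss along a unit direction."""
    d = direction / np.linalg.norm(direction)
    return (loss(w + eps * d) - 2.0 * loss(w) + loss(w - eps * d)) / eps ** 2

w = np.zeros(2)
for name, d in [("sharp direction", np.array([1.0, 0.0])),
                ("flat direction", np.array([0.0, 1.0]))]:
    print(f"{name}: curvature ~ {curvature_along(w, d):.2f}")   # ~100 vs. ~0.2
```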

“Directions that implement shared mechanisms used by many inputs add coherently and remain high-curvature on average,” the researchers write, describing reasoning pathways. In contrast, memorization uses “idiosyncratic sharp directions associated with specific examples” that appear flat when averaged across data.

Different tasks reveal a spectrum of mechanisms

The researchers tested their technique on multiple AI systems to verify the findings held across different architectures. They primarily used Allen Institute’s OLMo-2 family of open language models, specifically the 7 billion- and 1 billion-parameter versions, chosen because their training data is openly accessible. For vision models, they trained custom 86 million-parameter Vision Transformers (ViT-Base models) on ImageNet with intentionally mislabeled data to create controlled memorization. They also validated their findings against existing memorization removal methods like BalancedSubnet to establish performance benchmarks.
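For intuition about the vision-model setup, here is a hedged sketch of the label-corruption step: relabeling a random fraction of examples so the only way a model can fit them is to memorize them. The sizes and noise fraction below are invented placeholders, not the paper’s actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

num_examples, num_classes = 10_000, 1_000      # toy sizes, not the paper's setup
labels = rng.integers(0, num_classes, size=num_examples)

# Reassign a random fraction of labels. A model can only get these examples
# "right" by memorizing the specific (image, wrong label) pairs, giving the
# researchers a known set of memorized items to look for inside the network.
noise_fraction = 0.1
noisy_idx = rng.choice(num_examples, size=int(noise_fraction * num_examples), replace=False)
corrupted = labels.copy()
corrupted[noisy_idx] = rng.integers(0, num_classes, size=noisy_idx.size)

print(f"{(corrupted != labels).mean():.1%} of labels intentionally relabeled "
      "(a few may match the original by chance)")
```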

The team tested their discovery by selectively removing low-curvature weight components from these trained models. Memorized content dropped to 3.4 percent recall from nearly 100 percent. Meanwhile, logical reasoning tasks maintained 95 to 106 percent of baseline performance.

These logical tasks included Boolean expression evaluation, logical deduction puzzles where solvers must track relationships like “if A is taller than B,” object tracking through multiple swaps, and benchmarks like BoolQ for yes/no reasoning, Winogrande for common sense inference, and OpenBookQA for science questions requiring reasoning from provided facts. Some tasks fell between these extremes, revealing a spectrum of mechanisms.

Mathematical operations and closed-book fact retrieval shared pathways with memorization, dropping to 66 to 86 percent performance after editing. The researchers found arithmetic particularly brittle. Even when models generated identical reasoning chains, they failed at the calculation step after low-curvature components were removed.

Figure 3: Sensitivity of different kinds of tasks to ablation of flatter eigenvectors. Parametric knowledge retrieval, arithmetic, and memorization are brittle, but open-book fact retrieval and logical reasoning are robust, maintaining around 100% of original performance.

Figure 3 from the paper “From Memorization to Reasoning in the Spectrum of Loss Curvature.” Credit: Merullo et al.

The team suggests this is either because “arithmetic problems themselves are memorized at the 7B scale, or because they require narrowly used directions to do precise calculations.” Open-book question answering, which relies on provided context rather than internal knowledge, proved most robust to the editing procedure, maintaining nearly full performance.

Curiously, the mechanism separation varied by information type. Common facts like country capitals barely changed after editing, while rare facts like company CEOs dropped 78 percent. This suggests models allocate distinct neural resources based on how frequently information appears in training.

The K-FAC technique outperformed existing memorization removal methods without needing training examples of memorized content. On unseen historical quotes, K-FAC achieved 16.1 percent memorization versus 60 percent for the previous best method, BalancedSubnet.

Vision transformers showed similar patterns. When trained with intentionally mislabeled images, the models developed distinct pathways for memorizing wrong labels versus learning correct patterns. Removing memorization pathways restored 66.5 percent accuracy on previously mislabeled images.

Limits of memory removal

However, the researchers acknowledged that their technique isn’t perfect. Once-removed memories might return if the model receives more training, as other research has shown that current unlearning methods only suppress information rather than completely erasing it from the neural network’s weights. That means the “forgotten” content can be reactivated with just a few training steps targeting those suppressed areas.

The researchers also can’t fully explain why some abilities, like math, break so easily when memorization is removed. It’s unclear whether the model actually memorized all its arithmetic or whether math just happens to use similar neural circuits as memorization. Additionally, some sophisticated capabilities might look like memorization to their detection method, even when they’re actually complex reasoning patterns. Finally, the mathematical tools they use to measure the model’s “landscape” can become unreliable at the extremes, though this doesn’t affect the actual editing process.

This article was updated on November 11, 2025 at 9:16 am to clarify an explanation about sorting weights by curvature.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founded the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Researchers isolate memorization from problem-solving in AI neural networks Read More »


Researchers surprised that with AI, toxicity is harder to fake than intelligence

The next time you encounter an unusually polite reply on social media, you might want to check twice. It could be an AI model trying (and failing) to blend in with the crowd.

On Wednesday, researchers from the University of Zurich, University of Amsterdam, Duke University, and New York University released a study revealing that AI models remain easily distinguishable from humans in social media conversations, with overly friendly emotional tone serving as the most persistent giveaway. The research, which tested nine open-weight models across Twitter/X, Bluesky, and Reddit, found that classifiers developed by the researchers detected AI-generated replies with 70 to 80 percent accuracy.

The study introduces what the authors call a “computational Turing test” to assess how closely AI models approximate human language. Instead of relying on subjective human judgment about whether text sounds authentic, the framework uses automated classifiers and linguistic analysis to identify specific features that distinguish machine-generated from human-authored content.
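As a rough sketch of what classifier-based detection looks like in practice (not the study’s actual pipeline, features, or data), the toy Python example below trains a bag-of-words classifier on a handful of invented replies and uses it to score a new one.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder replies; the study worked with real posts and LLM responses.
human_replies = ["lol no way, that's just wrong", "this take is terrible tbh",
                 "eh, got a source for that?", "cool. hate it."]
ai_replies = ["That's a wonderful perspective, thank you for sharing!",
              "I appreciate your thoughtful point and largely agree.",
              "Great question! Here are a few things to consider.",
              "Thanks for raising this; it's such an important discussion."]

texts = human_replies + ai_replies
labels = [0] * len(human_replies) + [1] * len(ai_replies)   # 0 = human, 1 = AI

# Word-level features plus a linear classifier: a bare-bones stand-in for the
# automated classifiers used to separate human from machine-generated text.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Thank you so much for this lovely and thoughtful reply!"]))  # likely [1]
```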

“Even after calibration, LLM outputs remain clearly distinguishable from human text, particularly in affective tone and emotional expression,” the researchers wrote. The team, led by Nicolò Pagan at the University of Zurich, tested various optimization strategies, from simple prompting to fine-tuning, but found that deeper emotional cues persist as reliable tells that a particular text interaction online was authored by an AI chatbot rather than a human.

The toxicity tell

In the study, researchers tested nine large language models: Llama 3.1 8B, Llama 3.1 8B Instruct, Llama 3.1 70B, Mistral 7B v0.1, Mistral 7B Instruct v0.2, Qwen 2.5 7B Instruct, Gemma 3 4B Instruct, DeepSeek-R1-Distill-Llama-8B, and Apertus-8B-2509.

When prompted to generate replies to real social media posts from actual users, the AI models struggled to match the level of casual negativity and spontaneous emotional expression common in human social media posts, with toxicity scores consistently lower than authentic human replies across all three platforms.

To counter this deficiency, the researchers attempted optimization strategies (including providing writing examples and context retrieval) that reduced structural differences like sentence length or word count, but variations in emotional tone persisted. “Our comprehensive calibration tests challenge the assumption that more sophisticated optimization necessarily yields more human-like output,” the researchers concluded.

Researchers surprised that with AI, toxicity is harder to fake than intelligence Read More »


Wipers from Russia’s most cut-throat hackers rain destruction on Ukraine

One of the world’s most ruthless and advanced hacking groups, the Russian state-controlled Sandworm, launched a series of destructive cyberattacks as part of Russia’s ongoing war against neighboring Ukraine, researchers reported Thursday.

In April, the group targeted a Ukrainian university with two wipers, a form of malware that aims to permanently destroy sensitive data and often the infrastructure storing it. One wiper, tracked under the name Sting, targeted fleets of Windows computers by scheduling a task named DavaniGulyashaSdeshka, a phrase derived from Russian slang that loosely translates to “eat some goulash,” researchers from ESET said. The other wiper is tracked as Zerlot.

A not-so-common target

Then, in June and September, Sandworm unleashed multiple wiper variants against a host of Ukrainian critical infrastructure targets, including organizations active in government, energy, and logistics. The targets have long been in the crosshairs of Russian hackers. There was, however, a fourth, less common target—organizations in Ukraine’s grain industry.

“Although all four have previously been documented as targets of wiper attacks at some point since 2022, the grain sector stands out as a not-so-frequent target,” ESET said. “Considering that grain export remains one of Ukraine’s main sources of revenue, such targeting likely reflects an attempt to weaken the country’s war economy.”

Wipers have been a favorite tool of Russian hackers since at least 2012. The best-known example is NotPetya, a self-replicating worm that originally targeted Ukraine in 2017 but caused international chaos when it spread globally in a matter of hours. The worm resulted in tens of billions of dollars in financial damages after it shut down thousands of organizations, many for days or weeks.

Wipers from Russia’s most cut-throat hackers rain destruction on Ukraine Read More »


Google plans secret AI military outpost on tiny island overrun by crabs

Christmas Island Shire President Steve Pereira told Reuters that the council is examining community impacts before approving construction. “There is support for it, providing this data center actually does put back into the community with infrastructure, employment, and adding economic value to the island,” Pereira said.

That’s great, but what about the crabs?

Christmas Island’s annual crab migration is a natural phenomenon that Sir David Attenborough reportedly once described as one of his greatest TV moments when he visited the site in 1990.

Every year, millions of crabs emerge from the forest and swarm across roads, streams, rocks, and beaches to reach the ocean, where each female can produce up to 100,000 eggs. The tiny baby crabs that survive take about nine days to march back inland to the safety of the plateau.

While Google is seeking environmental approvals for its subsea cables, the timing could prove delicate for Christmas Island’s most famous residents. According to Parks Australia, the island’s annual red crab migration has already begun for 2025, with a major spawning event expected in just a few weeks, around November 15–16.

During peak migration times, sections of roads close at short notice as crabs move between forest and sea, and the island has built special crab bridges over roads to protect the migrating masses.

Parks Australia notes that while the migration happens annually, few baby crabs survive the journey from sea to forest most years, as they’re often eaten by fish, manta rays, and whale sharks. The successful migrations that occur only once or twice per decade (when large numbers of babies actually survive) are critical for maintaining the island’s red crab population.

How Google’s facility might coexist with 100 million marching crustaceans remains to be seen. But judging by the size of the event, it seems clear that it’s the crabs’ world, and we’re just living in it.

Google plans secret AI military outpost on tiny island overrun by crabs Read More »


5 AI-developed malware families analyzed by Google fail to work and are easily detected

The assessments provide a strong counterargument to the exaggerated narratives being trumpeted by AI companies, many seeking new rounds of venture funding, that AI-generated malware is widespread and part of a new paradigm that poses a current threat to traditional defenses.

A typical example is Anthropic, which recently reported its discovery of a threat actor that used its Claude LLM to “develop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.” The company went on to say: “Without Claude’s assistance, they could not implement or troubleshoot core malware components, like encryption algorithms, anti-analysis techniques, or Windows internals manipulation.”

ConnectWise recently said that generative AI was “lowering the bar of entry for threat actors to get into the game.” The post cited a separate report from OpenAI that found 20 separate threat actors using its ChatGPT AI engine to develop malware for tasks including identifying vulnerabilities, developing exploit code, and debugging that code. Bugcrowd, meanwhile, said that in a survey of self-selected individuals, “74 percent of hackers agree that AI has made hacking more accessible, opening the door for newcomers to join the fold.”

In some cases, the authors of such reports note the same limitations noted in this article. Wednesday’s report from Google says that in its analysis of AI tools used to develop code for managing command-and-control channels and obfuscating its operations, “we did not see evidence of successful automation or any breakthrough capabilities.” OpenAI said much the same thing. Still, these disclaimers are rarely made prominently and are often downplayed in the resulting frenzy to portray AI-assisted malware as posing a near-term threat.

Google’s report provides at least one other useful finding. One threat actor that exploited the company’s Gemini AI model was able to bypass its guardrails by posing as white-hat hackers doing research for participation in a capture-the-flag game. These competitive exercises are designed to teach and demonstrate effective cyberattack strategies to both participants and onlookers.

Such guardrails are built into all mainstream LLMs to prevent them from being used for harmful purposes, such as facilitating cyberattacks or self-harm. Google said it has since fine-tuned the countermeasure to better resist such ploys.

Ultimately, the AI-generated malware that has surfaced to date is mostly experimental, and the results aren’t impressive. The events are worth monitoring for signs that AI tools are producing new capabilities that were previously unknown. For now, though, the biggest threats continue to rely predominantly on old-fashioned tactics.

5 AI-developed malware families analyzed by Google fail to work and are easily detected Read More »


OpenAI signs massive AI compute deal with Amazon

On Monday, OpenAI announced it has signed a seven-year, $38 billion deal to buy cloud services from Amazon Web Services to power products like ChatGPT and Sora. It’s the company’s first big computing deal after a fundamental restructuring last week that gave OpenAI more operational and financial freedom from Microsoft.

The agreement gives OpenAI access to hundreds of thousands of Nvidia graphics processors to train and run its AI models. “Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in a statement. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

OpenAI will reportedly use Amazon Web Services immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond. Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses, generate AI videos, and train OpenAI’s next wave of models.

Wall Street apparently liked the deal, because Amazon shares hit an all-time high on Monday morning. Meanwhile, shares for long-time OpenAI investor and partner Microsoft briefly dipped following the announcement.

Massive AI compute requirements

It’s no secret that running generative AI models for hundreds of millions of people currently requires a lot of computing power. Amid chip shortages over the past few years, finding sources of that computing muscle has been tricky. OpenAI is reportedly working on its own GPU hardware to help alleviate the strain.

But for now, the company needs to find new sources of Nvidia chips, which accelerate AI computations. Altman has previously said that the company plans to spend $1.4 trillion to develop 30 gigawatts of computing resources, enough to power roughly 25 million US homes, according to Reuters.

OpenAI signs massive AI compute deal with Amazon Read More »


Two Windows vulnerabilities, one a 0-day, are under active exploitation

Two Windows vulnerabilities—one a zero-day that has been known to attackers since 2017 and the other a critical flaw that Microsoft recently tried, and initially failed, to patch—are under active exploitation in widespread attacks targeting a swath of the Internet, researchers say.

The zero-day went undiscovered until March, when security firm Trend Micro said it had been under active exploitation since 2017, by as many as 11 separate advanced persistent threats (APTs). These APT groups, often with ties to nation-states, relentlessly attack specific individuals or groups of interest. Trend Micro went on to say that the groups were exploiting the vulnerability, then tracked as ZDI-CAN-25373, to install various known post-exploitation payloads on infrastructure located in nearly 60 countries, with the US, Canada, Russia, and Korea being the most common.

A large-scale, coordinated operation

Seven months later, Microsoft still hasn’t patched the vulnerability, which stems from a bug in the Windows Shortcut binary format. The Windows component makes opening apps or accessing files easier and faster by allowing a single binary file to invoke them without having to navigate to their locations. In recent months, the ZDI-CAN-25373 tracking designation has been changed to CVE-2025-9491.

On Thursday, security firm Arctic Wolf reported that it observed a China-aligned threat group, tracked as UNC-6384, exploiting CVE-2025-9491 in attacks against various European nations. The final payload is a widely used remote access trojan known as PlugX. To better conceal the malware, the exploit keeps the binary encrypted with the RC4 cipher until the final step of the attack.

“The breadth of targeting across multiple European nations within a condensed timeframe suggests either a large-scale coordinated intelligence collection operation or deployment of multiple parallel operational teams with shared tooling but independent targeting,” Arctic Wolf said. “The consistency in tradecraft across disparate targets indicates centralized tool development and operational security standards even if execution is distributed across multiple teams.”

Two Windows vulnerabilities, one a 0-day, are under active exploitation Read More »


After teen death lawsuits, Character.AI will restrict chats for under-18 users

Lawsuits and safety concerns

Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers, and raised nearly $200 million from investors. Last year, Google agreed to pay about $3 billion to license Character.AI’s technology, and Shazeer and De Freitas returned to Google.

But the company now faces multiple lawsuits alleging that its technology contributed to teen deaths. Last year, the family of 14-year-old Sewell Setzer III sued Character.AI, accusing the company of being responsible for his death. Setzer died by suicide after frequently texting and conversing with one of the platform’s chatbots. The company faces additional lawsuits, including one from a Colorado family whose 13-year-old daughter, Juliana Peralta, died by suicide in 2023 after using the platform.

In December, Character.AI announced changes, including improved detection of violating content and revised terms of service, but those measures did not restrict underage users from accessing the platform. Other AI chatbot services, such as OpenAI’s ChatGPT, have also come under scrutiny for their chatbots’ effects on young users. In September, OpenAI introduced parental control features intended to give parents more visibility into how their kids use the service.

The cases have drawn attention from government officials, which likely pushed Character.AI to announce the changes for under-18 chat access. Steve Padilla, a Democrat in California’s State Senate who introduced a chatbot safety bill in the state, told The New York Times that “the stories are mounting of what can go wrong. It’s important to put reasonable guardrails in place so that we protect people who are most vulnerable.”

On Tuesday, Senators Josh Hawley and Richard Blumenthal introduced a bill to bar AI companions from use by minors. In addition, California Governor Gavin Newsom this month signed a law, which takes effect on January 1, requiring AI companies to have safety guardrails on chatbots.

After teen death lawsuits, Character.AI will restrict chats for under-18 users Read More »


Nvidia hits record $5 trillion mark as CEO dismisses AI bubble concerns

Partnerships and government contracts fuel optimism

At the GTC conference on Tuesday, Nvidia’s CEO went out of his way to repeatedly praise Donald Trump and his policies for accelerating domestic tech investment while warning that excluding China from Nvidia’s ecosystem could limit US access to half the world’s AI developers. The overall event stressed Nvidia’s role as an American company, with Huang even nodding to Trump’s signature slogan in his sign-off by thanking the audience for “making America great again.”

Trump’s cooperation is paramount for Nvidia because US export controls have effectively blocked Nvidia’s AI chips from China, costing the company billions of dollars in revenue. Bob O’Donnell of TECHnalysis Research told Reuters that “Nvidia clearly brought their story to DC to both educate and gain favor with the US government. They managed to hit most of the hottest and most influential topics in tech.”

Beyond the political messaging, Huang announced a series of partnerships and deals that apparently helped ease investor concerns about Nvidia’s future. The company announced collaborations with Uber Technologies, Palantir Technologies, and CrowdStrike Holdings, among others. Nvidia also revealed a $1 billion investment in Nokia to support the telecommunications company’s shift toward AI and 6G networking.

The agreement with Uber will power a fleet of 100,000 self-driving vehicles with Nvidia technology, with automaker Stellantis among the first to deliver the robotaxis. Palantir will pair Nvidia’s technology with its Ontology platform to use AI techniques for logistics insights, with Lowe’s as an early adopter. Eli Lilly plans to build what Nvidia described as the most powerful supercomputer owned and operated by a pharmaceutical company, relying on more than 1,000 Blackwell AI accelerator chips.

The $5 trillion valuation surpasses the total cryptocurrency market value and equals roughly half the size of the pan-European Stoxx 600 equities index, Reuters notes. At current prices, Huang’s stake in Nvidia would be worth about $179.2 billion, making him the world’s eighth-richest person.

Nvidia hits record $5 trillion mark as CEO dismisses AI bubble concerns Read More »


New physical attacks are quickly diluting secure enclave defenses from Nvidia, AMD, and Intel


On-chip TEEs withstand rooted OSes but fall instantly to cheap physical attacks.

Trusted execution environments, or TEEs, are everywhere—in blockchain architectures, virtually every cloud service, and computing involving AI, finance, and defense contractors. It’s hard to overstate the reliance that entire industries have on the TEEs of three chipmakers in particular: Nvidia’s Confidential Compute, AMD’s SEV-SNP, and Intel’s SGX and TDX. All come with assurances that confidential data and sensitive computing can’t be viewed or altered, even if a server has suffered a complete compromise of its operating system kernel.

A trio of novel physical attacks raises new questions about the true security offered by these TEEs and about the exaggerated promises and misconceptions coming from the big and small players using them.

The most recent attack, released Tuesday, is known as TEE.fail. It defeats the latest TEE protections from all three chipmakers. The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel. Once this three-minute attack is completed, Confidential Compute, SEV-SNP, and TDX/SGX can no longer be trusted. Unlike the Battering RAM and Wiretap attacks from last month—which worked only against CPUs using DDR4 memory—TEE.fail works against DDR5, allowing it to defeat the latest TEEs.

Some terms apply

All three chipmakers exclude physical attacks from the threat models for their TEEs, also known as secure enclaves. Instead, assurances are limited to protecting data and execution from viewing or tampering, even when the OS kernel running on the processor has been compromised. None of the chipmakers makes these carveouts prominent, and they sometimes provide confusing statements about the TEE protections offered.

Many users of these TEEs make public assertions about the protections that are flat-out wrong, misleading, or unclear. All three chipmakers and many TEE users focus on the suitability of the enclaves for protecting servers on a network edge, which are often located in remote locations, where physical access is a top threat.

“These features keep getting broken, but that doesn’t stop vendors from selling them for these use cases—and people keep believing them and spending time using them,” said HD Moore, a security researcher and the founder and CEO of runZero.

He continued:

Overall, it’s hard for a customer to know what they are getting when they buy confidential computing in the cloud. For on-premise deployments, it may not be obvious that physical attacks (including side channels) are specifically out of scope. This research shows that server-side TEEs are not effective against physical attacks, and even more surprising, Intel and AMD consider these out of scope. If you were expecting TEEs to provide private computing in untrusted data centers, these attacks should change your mind.

Those making these statements run the gamut from cloud providers to AI engines, blockchain platforms, and even the chipmakers themselves. Here are some examples:

  • Cloudflare says it’s using Secure Memory Encryption—the encryption engine driving SEV—to safeguard confidential data from being extracted from a server if it’s stolen.
  • In a post outlining the possibility of using the TEEs to secure confidential information discussed in chat sessions, Anthropic says the enclave “includes protections against physical attacks.”
  • Microsoft marketing (here and here) devotes plenty of ink to discussing TEE protections without ever noting the exclusion.
  • Meta, paraphrasing the Confidential Computing Consortium, says TEE security provides protections against malicious “system administrators, the infrastructure owner, or anyone else with physical access to the hardware.” SEV-SNP is a key pillar supporting the security of Meta’s WhatsApp Messenger.
  • Even Nvidia claims that its TEE security protects against “infrastructure owners such as cloud providers, or anyone with physical access to the servers.”
  • The maker of the Signal private messenger assures users that its use of SGX means that “keys associated with this encryption never leave the underlying CPU, so they’re not accessible to the server owners or anyone else with access to server infrastructure.” Signal has long relied on SGX to protect contact-discovery data.

I counted more than a dozen other organizations providing assurances that were similarly confusing, misleading, or false. Even Moore—a security veteran with more than three decades of experience—told me: “The surprising part to me is that Intel/AMD would blanket-state that physical access is somehow out of scope when it’s the entire point.”

In fairness, some TEE users build additional protections on top of the TEEs provided out of the box. Meta, for example, said in an email that the WhatsApp implementation of SEV-SNP uses protections that would block TEE.fail attackers from impersonating its servers. The company didn’t dispute that TEE.fail could nonetheless pull secrets from the AMD TEE.

The Cloudflare theft protection, meanwhile, relies on SME—the engine driving SEV-SNP encryption. The researchers didn’t directly test SME against TEE.fail. They did note that SME uses deterministic encryption, the cryptographic property that causes all three TEEs to fail. (More about the role of deterministic encryption later.)

Others who misstate the TEEs’ protections provide more accurate descriptions elsewhere. Given all the conflicting information, it’s no wonder there’s confusion.

How do you know where the server is? You don’t.

Many TEE users run their infrastructure inside cloud providers such as AWS, Azure, or Google, where protections against supply-chain and physical attacks are extremely robust. That raises the bar for a TEE.fail-style attack significantly. (Whether the services could be compelled by governments with valid subpoenas to attack their own TEE is not clear.)

All these caveats notwithstanding, there’s often (1) little discussion of the growing viability of cheap physical attacks, (2) no evidence (yet) that implementations not vulnerable to the three attacks won’t fall to follow-on research, and (3) no way for parties relying on TEEs to know where the servers are running and whether they’re free from physical compromise.

“We don’t know where the hardware is,” Daniel Genkin, one of the researchers behind both TEE.fail and Wiretap, said in an interview. “From a user perspective, I don’t even have a way to verify where the server is. Therefore, I have no way to verify if it’s in a reputable facility or an attacker’s basement.”

In other words, parties relying on attestations from servers in the cloud are once again reduced to simply trusting other people’s computers. As Moore observed, solving that problem is precisely the reason TEEs exist.

In at least two cases, involving the blockchain services Secret Network and Crust, the loss of TEE protections made it possible for any untrusted user to present cryptographic attestations. Both platforms used the attestations to verify that a blockchain node operated by one user couldn’t tamper with the execution or data passing to another user’s nodes. The Wiretap hack on SGX made it possible for users to run sensitive data and code outside of the TEE altogether while still providing attestations to the contrary. In the AMD attack, the attacker could decrypt the traffic passing through the TEE.

Both Secret Network and Crust added mitigations after learning of the possible physical attacks with Wiretap and Battering RAM. Given the lack of clear messaging, other TEE users are likely making similar mistakes.

A predetermined weakness

The root cause of all three physical attacks is the choice of deterministic encryption. This form of encryption produces the same ciphertext each time the same plaintext is encrypted with the same key. A TEE.fail attacker can copy ciphertext strings and use them in replay attacks. (Probabilistic encryption, by contrast, resists such attacks because the same plaintext can encrypt to a wide range of ciphertexts that are randomly chosen during the encryption process.)
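A short Python sketch (using the `cryptography` package) makes the distinction concrete. AES-ECB stands in here for deterministic encryption generally; the TEEs use their own memory-encryption schemes, so this illustrates the property being exploited, not the chipmakers’ implementations.

```python
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(32)
block = b"SECRET PLAINTEXT"               # exactly 16 bytes: one AES block

# Deterministic encryption (AES-ECB, used here purely for illustration): the same
# plaintext block always maps to the same ciphertext, so an attacker who can read
# memory can recognize repeated values and replay captured ciphertext.
ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
print(ecb.update(block) == ecb.update(block))      # True: identical ciphertext both times

# Probabilistic encryption (AES-GCM with a fresh random nonce): the same plaintext
# encrypts differently every time, which defeats recognition and naive replay.
aead = AESGCM(key)
print(aead.encrypt(os.urandom(12), block, None) ==
      aead.encrypt(os.urandom(12), block, None))   # False
```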

TEE.fail works not only against SGX but also against a more advanced Intel TEE known as TDX. The attack also defeats the protections provided by the latest Nvidia Confidential Compute and AMD SEV-SNP TEEs. Attacks against TDX and SGX can extract the Attestation Key, an ECDSA secret that certifies to a remote party that a machine is running up-to-date software and can’t expose data or execution running inside the enclave. This Attestation Key is in turn signed by an Intel X.509 digital certificate, providing cryptographic assurances that the ECDSA key can be trusted. TEE.fail works against all Intel CPUs currently supporting TDX and SGX.

With possession of the key, the attacker can use the compromised server to peer into data or tamper with the code flowing through the enclave and send the relying party an assurance that the device is secure. With this key, even CPUs built by other chipmakers can send an attestation that the hardware is protected by the Intel TEEs.
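To see why possession of an attestation key is so damaging, the hedged Python sketch below signs and verifies a placeholder “report” with an ECDSA key. Real attestation additionally involves certificate chains and quoting infrastructure omitted here; the point is only that whoever holds the private key can produce signatures a relying party will accept.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in "attestation key": inside a real TEE this key never leaves the enclave,
# and its public half is vouched for by the chipmaker's certificate chain.
attestation_key = ec.generate_private_key(ec.SECP256R1())

report = b"measurement=abc123; tcb=up-to-date"     # placeholder attestation report
signature = attestation_key.sign(report, ec.ECDSA(hashes.SHA256()))

# A relying party checks only the signature (and, in practice, the certificate
# chain). Whoever extracts the private key can therefore sign any report at all,
# including one describing a machine that offers no protection whatsoever.
try:
    attestation_key.public_key().verify(signature, report, ec.ECDSA(hashes.SHA256()))
    print("report accepted")
except InvalidSignature:
    print("report rejected")
```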

GPUs equipped with Nvidia Confidential Compute don’t bind attestation reports to the specific virtual machine protected by a specific GPU. TEE.fail exploits this weakness by “borrowing” a valid attestation report from a GPU run by the attacker and using it to impersonate the GPU running Confidential Compute. The protection is available on Nvidia’s H100/200 and B100/200 server GPUs.

“This means that we can convince users that their applications (think private chats with LLMs or Large Language Models) are being protected inside the GPU’s TEE while in fact it is running in the clear,” the researchers wrote on a website detailing the attack. “As the attestation report is ‘borrowed,’ we don’t even own a GPU to begin with.”

SEV-SNP (Secure Encrypted Virtualization-Secure Nested Paging) uses ciphertext hiding in AMD’s EPYC CPUs based on the Zen 5 architecture. AMD added the feature to prevent a previous attack known as Cipherleaks, which allowed malicious hypervisors to extract cryptographic keys stored in the enclaves of a virtual machine. Ciphertext hiding, however, doesn’t stop physical attacks. With the ability to reopen the side channel that Cipherleaks relies on, TEE.fail can steal OpenSSL credentials and other key material used in constant-time encryption.

Cheap, quick, and the size of a briefcase

“Now that we have interpositioned DDR5 traffic, our work shows that even the most modern of TEEs across all vendors with available hardware is vulnerable to cheap physical attacks,” Genkin said.

TEE.fail requires only off-the-shelf gear that costs less than $1,000. One of the devices the researchers built fits into a 17-inch briefcase, so it can be smuggled into a facility housing a TEE-protected server. Once the physical attack is performed, the device does not need to be connected again. Attackers breaking TEEs on servers they operate have no need for stealth, allowing them to use a larger device, which the researchers also built.

A logic analyzer attached to an interposer.

The researchers demonstrated attacks against an array of services that rely on the chipmakers’ TEE protections. (For ethical reasons, the attacks were carried out against infrastructure that was identical to but separate from the targets’ networks.) Some of the attacks included BuilderNet, dstack, and Secret Network.

BuilderNet is a network of Ethereum block builders that uses TDX to prevent parties from snooping on others’ data and to ensure fairness and that proof currency is redistributed honestly. The network builds blocks valued at millions of dollars each month.

“We demonstrated that a malicious operator with an attestation key could join BuilderNet and obtain configuration secrets, including the ability to decrypt confidential orderflow and access the Ethereum wallet for paying validators,” the TEE.fail website explained. “Additionally, a malicious operator could build arbitrary blocks or frontrun (i.e., construct a new transaction with higher fees to ensure theirs is executed first) the confidential transactions for profit while still providing deniability.”

To date, the researchers said, BuilderNet hasn’t provided mitigations. Attempts to reach BuilderNet officials were unsuccessful.

dstack is a tool for building confidential applications that run on top of virtual machines protected by Nvidia Confidential Compute. The researchers used TEE.fail to forge attestations certifying that a workload was performed by the TDX using the Nvidia protection. It also used the “borrowed” attestations to fake ownership of GPUs that a relying party trusts.

Secret Network is a platform billing itself as the “first mainnet blockchain with privacy-preserving smart contracts,” in part by encrypting on-chain data and execution with SGX. The researchers showed that TEE.fail could extract the “Consensus Seed,” the primary network-side private key encrypting confidential transactions on the Secret Network. As noted, after learning of Wiretap, Secret Network eliminated this possibility by establishing a “curated” allowlist of known, trusted nodes allowed on the network and suspending the acceptance of new nodes. Academic or not, the ability to replicate the attack using TEE.fail shows that Wiretap wasn’t a one-off success.

A tough nut to crack

As explained earlier, the root cause of all the TEE.fail attacks is deterministic encryption, which forms the basis for protections in all three chipmakers’ TEEs. This weaker form of encryption wasn’t always used in TEEs. When Intel initially rolled out SGX, the feature was put in client CPUs, not server ones, to prevent users from building devices that could extract copyrighted content such as high-definition video.

Those early versions encrypted no more than 256MB of RAM, a small enough space to use the much stronger probabilistic form of encryption. The TEEs built into server chips, by contrast, must often encrypt terabytes of RAM. Probabilistic encryption doesn’t scale to that size without serious performance penalties. Finding a solution that accommodates this overhead won’t be easy.

One mitigation over the short term is to ensure that each 128-bit block of ciphertext has sufficient entropy. Adding random plaintext to the blocks prevents ciphertext repetition. The researchers say the entropy can be added by building a custom memory layout that inserts a 64-bit counter with a random initial value to each 64-bit block before encrypting it.
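A toy Python sketch of that idea follows: pairing each 64-bit data block with a unique counter value before encryption so that identical data never produces identical ciphertext, even under a deterministic cipher. This illustrates the principle only; it is not the researchers’ proposed memory layout.

```python
import os
import struct

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()   # deterministic cipher
counter = int.from_bytes(os.urandom(8), "big")                     # 64-bit counter, random start

def encrypt_with_entropy(data8: bytes) -> bytes:
    """Pair each 64-bit data block with a unique counter value so the resulting
    128-bit block never repeats, even when the data itself does."""
    global counter
    block = data8 + struct.pack(">Q", counter)     # 8 data bytes + 8 counter bytes
    counter = (counter + 1) % (1 << 64)
    return encryptor.update(block)

same_data = b"8 bytes!"
print(encrypt_with_entropy(same_data) == encrypt_with_entropy(same_data))   # False
```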

The last countermeasure the researchers proposed is adding location verification to the attestation mechanism. While insider and supply chain attacks remain a possibility inside even the most reputable cloud services, strict policies make them much less feasible. Even those mitigations, however, don’t foreclose the threat of a government agency with a valid subpoena ordering an organization to run such an attack inside their network.

In a statement, Nvidia said:

NVIDIA is aware of this research. Physical controls in addition to trust controls such as those provided by Intel TDX reduce the risk to GPUs for this style of attack, based on our discussions with the researchers. We will provide further details once the research is published.

Intel spokesman Jerry Bryant said:

Fully addressing physical attacks on memory by adding more comprehensive confidentiality, integrity and anti-replay protection results in significant trade-offs to Total Cost of Ownership. Intel continues to innovate in this area to find acceptable solutions that offer better balance between protections and TCO trade-offs.

The company has published responses here and here reiterating that physical attacks are out of scope for both TDX and SGX.

AMD didn’t respond to a request for comment.

Stuck on Band-Aids

For now, TEE.fail, Wiretap, and Battering RAM remain a persistent threat that isn’t solved with the use of default implementations of the chipmakers’ secure enclaves. The most effective mitigation for the time being is for TEE users to understand the limitations and curb uses that the chipmakers say aren’t a part of the TEE threat model. Secret Network tightening requirements for operators joining the network is an example of such a mitigation.

Moore, the founder and CEO of runZero, said that companies with big budgets can rely on custom solutions built by larger cloud services. AWS, for example, makes use of the Nitro Card, which is built using ASIC chips that accelerate processing using TEEs. Google’s proprietary answer is Titanium.

“It’s a really hard problem,” Moore said. “I’m not sure what the current state of the art is, but if you can’t afford custom hardware, the best you can do is rely on the CPU provider’s TEE, and this research shows how weak this is from the perspective of an attacker with physical access. The enclave is really a Band-Aid or hardening mechanism over a really difficult problem, and it’s both imperfect and dangerous if compromised, for all sorts of reasons.”

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

New physical attacks are quickly diluting secure enclave defenses from Nvidia, AMD, and Intel Read More »


Expert panel will determine AGI arrival in new Microsoft-OpenAI agreement

In May, OpenAI abandoned its plan to fully convert to a for-profit company after pressure from regulators and critics. The company instead shifted to a modified approach where the nonprofit board would retain control while converting its for-profit subsidiary into a public benefit corporation (PBC).

What changed in the agreement

The revised deal extends Microsoft’s intellectual property rights through 2032 and now includes models developed after AGI is declared. Microsoft holds IP rights to OpenAI’s model weights, architecture, inference code, and fine-tuning code until the expert panel confirms AGI or through 2030, whichever comes first. The new agreement also codifies that OpenAI can formally release open-weight models (like gpt-oss) that meet requisite capability criteria.

However, Microsoft’s rights to OpenAI’s research methods, defined as confidential techniques used in model development, will expire at those same thresholds. The agreement explicitly excludes Microsoft from having rights to OpenAI’s consumer hardware products.

The deal allows OpenAI to develop some products jointly with third parties. API products built with other companies must run exclusively on Azure, but non-API products can operate on any cloud provider. This gives OpenAI more flexibility to partner with other technology companies while keeping Microsoft as its primary infrastructure provider.

Under the agreement, Microsoft can now pursue AGI development alone or with partners other than OpenAI. If Microsoft uses OpenAI’s intellectual property to build AGI before the expert panel makes a declaration, those models must exceed compute thresholds that are larger than what current leading AI models require for training.

The revenue-sharing arrangement between the companies will continue until the expert panel verifies that AGI has been reached, though payments will extend over a longer period. OpenAI has committed to purchasing $250 billion in Azure services, and Microsoft no longer holds a right of first refusal to serve as OpenAI’s compute provider. This lets OpenAI shop around for cloud infrastructure if it chooses, though the massive Azure commitment suggests it will remain the primary provider.

Expert panel will determine AGI arrival in new Microsoft-OpenAI agreement Read More »