

RAM shortage chaos expands to GPUs, high-capacity SSDs, and even hard drives

Big Tech’s AI-fueled memory shortage is set to be the PC industry’s defining story for 2026 and beyond. Standalone, direct-to-consumer RAM kits were some of the first products to feel the bite, with prices spiking by 300 to 400 percent by the end of 2025; SSD prices also rose noticeably, though more modestly.

The rest of 2026 is going to be all about where, how, and to what extent those price spikes flow downstream into computers, phones, and other components that use RAM and NAND chips—areas where the existing supply of products and longer-term supply contracts negotiated by big companies have helped keep prices from surging too noticeably so far.

This week, we’re seeing signs that the RAM crunch is starting to affect the GPU market—Asus made some waves when it inadvertently announced that it was discontinuing its GeForce RTX 5070 Ti.

Though the company has since tried to walk this announcement back, if you’re a GPU manufacturer, there’s a strong argument for either discontinuing this model or de-prioritizing it in favor of other GPUs. The 5070 Ti uses 16GB of GDDR7, plus a partially disabled version of Nvidia’s GB203 GPU silicon. This is the same chip and the same amount of RAM used in the higher-end RTX 5080—the thinking goes, why continue to build a graphics card with an MSRP of $749 when the same basic parts could go to a card with a $999 MSRP instead?

Whether Asus or any other company is canceling production or not, you can see why GPU makers would be tempted by the argument: Street prices for RTX 5070 Ti models start in the $1,050 to $1,100 range on Newegg right now, while RTX 5080 cards start in the $1,500 to $1,600 range. Though 5080 models may need more robust boards, heatsinks, and other components than a 5070 Ti, if you’re just trying to maximize the profit-per-GPU you can get for the same amount of RAM, it makes sense to shift allocation to the more expensive cards.
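The allocation math is easy to make concrete. As a back-of-envelope sketch (using only the street prices quoted above and ignoring differences in board and cooler costs), compare the revenue each card earns per gigabyte of scarce GDDR7:

```python
# Back-of-envelope only: both cards pair 16 GB of GDDR7 with the same
# GB203 die, so compare street-price revenue per GB of scarce memory.
# Prices are the low ends of the Newegg ranges quoted in the article.
GDDR7_GB = 16

street_price = {
    "RTX 5070 Ti": 1050,
    "RTX 5080": 1500,
}

for card, price in street_price.items():
    print(f"{card}: ${price / GDDR7_GB:.2f} per GB of GDDR7")

# The 5080 brings in roughly $94 per GB versus about $66 per GB for the
# 5070 Ti—about $450 more revenue for the same memory allocation.
```

That gap is the whole argument in two numbers: every 16GB kit of GDDR7 routed to a 5080 instead of a 5070 Ti earns several hundred dollars more at current street prices.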



TSMC says AI demand is “endless” after record Q4 earnings

TSMC posted net income of NT$505.7 billion (about $16 billion) for the quarter, up 35 percent year over year and above analyst expectations. Revenue hit $33.7 billion, a 25.5 percent increase from the same period last year. The company expects nearly 30 percent revenue growth in 2026 and plans to spend between $52 billion and $56 billion on capital expenditures this year, up from $40.9 billion in 2025.
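Those growth figures are internally consistent; a quick sanity check (my arithmetic, not TSMC's disclosures) backs out the implied year-ago quarter and the size of the capex jump:

```python
# Sanity-check the reported growth rates (illustrative arithmetic only).
net_income_ntd = 505.7e9            # Q4 net income in NT$
prior_q4 = net_income_ntd / 1.35    # implied year-ago quarter: ~NT$374.6B

capex_2025 = 40.9e9
capex_2026_low, capex_2026_high = 52e9, 56e9
growth_low = capex_2026_low / capex_2025 - 1    # ~27% increase
growth_high = capex_2026_high / capex_2025 - 1  # ~37% increase

print(f"Implied prior-year Q4: NT${prior_q4 / 1e9:.1f} billion")
print(f"Capex growth: {growth_low:.0%} to {growth_high:.0%}")
```

In other words, the planned capital spending increase of roughly 27 to 37 percent actually outpaces the company's expected 30 percent revenue growth at the low end of its guidance.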

Checking with the customers’ customers

Wei’s optimism stands in contrast to months of speculation about whether the AI industry is in a bubble. In November, Google CEO Sundar Pichai warned of “irrationality” in the AI market and said no company would be immune if a potential bubble bursts. OpenAI’s Sam Altman acknowledged in August that investors are “overexcited” and that “someone” will lose a “phenomenal amount of money.”

But TSMC, which manufactures the chips that power the AI boom, is betting the opposite way, with Wei telling analysts he spoke directly to cloud providers to verify that demand is real before committing to the spending increase.

“I want to make sure that my customers’ demand are real. So I talked to those cloud service providers, all of them,” Wei said. “The answer is that I’m quite satisfied with the answer. Actually, they show me the evidence that the AI really helps their business.”

The earnings report landed the same day the US and Taiwan finalized a trade agreement that cuts tariffs on Taiwanese goods to 15 percent, down from 20 percent. The deal commits Taiwanese companies to $250 billion in direct US investment, and TSMC is accelerating the expansion of its Arizona chip fabrication facilities to match.



SteamOS continues its slow spread across the PC gaming landscape

Over time, Valve sees that kind of support expanding to other Arm-based devices, too. “This is already fully open source, so you could download it and run SteamOS, now that we will be releasing SteamOS for Arm, you could have gaming on any Arm device,” Valve engineer Jeremy Selan told PC Gamer in November. “This is the first one. We’re very excited about it.”

Imagine if handhelds like the Retroid Pocket Flip 2 could run SteamOS instead of Android…

Credit: Retroid


It’s an especially exciting prospect when you consider the wide range of Arm-based Android gaming handhelds that currently exist across the price and performance spectrum. While emulators like Fex can technically let players access Steam games on those kinds of handhelds, official Arm support for SteamOS could lead to a veritable Cambrian explosion of hardware options with native SteamOS support.

Valve seems aware of this potential, too. “There’s a lot of price points and power consumption points where Arm-based chipsets are doing a better job of serving the market,” Valve’s Pierre-Louis Griffais told The Verge last month. “When you get into lower power, anything lower than Steam Deck, I think you’ll find that there’s an Arm chip that maybe is competitive with x86 offerings in that segment. We’re pretty excited to be able to expand PC gaming to include all those options instead of being arbitrarily restricted to a subset of the market.”

That’s great news for fans of PC-based gaming handhelds, just as the announcement of Valve’s Steam Machine will provide a convenient option for SteamOS access on the living room TV. For desktop PC gamers, though, rigs sporting Nvidia GPUs might remain the final frontier for SteamOS in the foreseeable future. “With Nvidia, the integration of open-source drivers is still quite nascent,” Griffais told Frandroid about a year ago. “There’s still a lot of work to be done on that front… So it’s a bit complicated to say that we’re going to release this version when most people wouldn’t have a good experience.”



With GeForce Super GPUs missing in action, Nvidia focuses on software upgrades

For the first time in years, Nvidia declined to introduce new GeForce graphics card models at CES. CEO Jensen Huang’s characteristically sprawling and under-rehearsed 90-minute keynote focused almost entirely on the company’s dominant AI business, relegating the company’s gaming-related announcements to a separate video posted later in the evening.

Instead, the company focused on software improvements for its existing hardware. The biggest announcement in this vein is DLSS 4.5, which adds a handful of new features to Nvidia’s basket of upscaling and frame generation technologies.

DLSS upscaling is being improved by a new “second-generation transformer model” that Nvidia says has been “trained on an expanded data set” to improve its predictions when generating new pixels. According to Nvidia’s Bryan Catanzaro, this is particularly beneficial for image quality in the Performance and Ultra Performance modes, where the upscaler has to do more guessing because it’s working from a lower-resolution source image.

DLSS Multi-Frame Generation is also improving, increasing the number of AI-generated frames per rendered frame from three to five. This new 6x mode for DLSS MFG is being paired with something called Dynamic Multi-Frame Generation, where the number of AI-generated frames can dynamically change, increasing generated frames during “demanding scenes,” and decreasing the number of generated frames during simpler scenes “so it only computes what’s needed.”
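Nvidia hasn't published how Dynamic Multi-Frame Generation picks its per-frame count, but the described behavior resembles a simple scheduler: insert however many generated frames are needed to approach a target output rate, capped at five. A hypothetical sketch (the function name, target rate, and heuristic are all my own illustration, not Nvidia's algorithm):

```python
def generated_frames(render_time_ms: float, target_fps: int = 240,
                     max_generated: int = 5) -> int:
    """Hypothetical heuristic: insert enough AI-generated frames to
    approach the target output rate, capped at the 6x mode's five
    generated frames per rendered frame. Not Nvidia's actual algorithm."""
    slot_ms = 1000 / target_fps              # time budget per output frame
    frames_needed = round(render_time_ms / slot_ms)
    return max(0, min(max_generated, frames_needed - 1))

# A demanding scene rendered at 40 fps (25 ms/frame) gets the full five
# generated frames; a light scene at 120 fps needs only one extra.
print(generated_frames(25.0))   # -> 5
print(generated_frames(8.33))   # -> 1
```

The appeal of a scheme like this is that generation work shrinks in exactly the scenes where the GPU is already keeping up, which matches Nvidia's "so it only computes what's needed" framing.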

The standard caveats for Multi-Frame Generation still apply: It still needs an RTX 50-series GPU (the 40-series can still only generate one frame for every rendered frame, and older cards can’t generate extra frames at all), and the game still needs to be running at a reasonably high base frame rate to minimize lag and weird rendering artifacts. It remains a useful tool for making fast-running games run faster, but it won’t help make an unplayable frame rate into a playable one.



Nvidia’s new G-Sync Pulsar monitors target motion blur at the human retina level

That gives those individual pixels time to fully transition from one color to the next before they’re illuminated, meaning viewers don’t perceive those pixels fading from one color as they do on a traditional G-Sync monitor. It also means those old pixels don’t persist as long on the viewer’s retina, increasing the “apparent refresh rate” above the monitor’s actual refresh rate, according to Nvidia.
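The motion-clarity benefit comes down to persistence: perceived blur scales with how long each frame stays lit on the retina. A rough model (the 30 percent duty cycle is my illustrative number; Nvidia hasn't published Pulsar's actual strobe timing):

```python
def persistence_ms(refresh_hz: float, duty_cycle: float = 1.0) -> float:
    """Time each frame stays visible; perceived motion blur scales
    with this value."""
    return 1000 / refresh_hz * duty_cycle

# A 240 Hz sample-and-hold panel lights each frame for ~4.17 ms.
full = persistence_ms(240)
# Strobing the backlight for, say, 30% of each refresh cuts that to
# 1.25 ms—the persistence of a hypothetical 800 Hz sample-and-hold panel.
strobed = persistence_ms(240, duty_cycle=0.3)
apparent_hz = 1000 / strobed

print(round(full, 2), round(strobed, 2), round(apparent_hz))
```

That ratio between refresh period and pulse width is what lets Nvidia claim an "apparent refresh rate" well above what the panel actually draws.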

An Asus illustration highlights how G-Sync Pulsar uses strobing to limit the persistence of old frames on your retina. Credit: Asus/Nvidia

Similar “Ultra Low Motion Blur” features on other pulsing backlight monitors have existed for a while, but they only worked at fixed refresh rates. Pulsar monitors differentiate themselves by syncing the pulses with the variable refresh rate of a G-Sync monitor, offering what Nvidia calls a combination of “tear free frames and incredible motion clarity.”

Independent testers have had more varied impressions of the visual impact of the Pulsar. The Monitors Unboxed YouTube channel called it “clearly the best solution currently available” for limiting motion blur and “the first version of this technology that I would genuinely consider using on a regular basis.” PC Magazine, on the other hand, said the Pulsar improvements are “minor in the grand scheme of things” and would be hard to notice for a casual viewer.

Nvidia explains how its Pulsar monitors work.

In any case, G-Sync Pulsar should be a welcome upgrade for high-end gamers as we wait for 1,000 Hz monitors to become a market force.



From prophet to product: How AI came back down to earth in 2025


In a year where lofty promises collided with inconvenient research, would-be oracles became software tools.

Credit: Aurich Lawson | Getty Images

Following two years of immense hype in 2023 and 2024, this year felt more like a settling-in period for the LLM-based token prediction industry. After more than two years of public fretting over AI models as future threats to human civilization or the seedlings of future gods, it’s starting to look like hype is giving way to pragmatism: Today’s AI can be very useful, but it’s also clearly imperfect and prone to mistakes.

That view isn’t universal, of course. There’s a lot of money (and rhetoric) betting on a stratospheric, world-rocking trajectory for AI. But the “when” keeps getting pushed back, because nearly everyone agrees that more significant technical breakthroughs are required. The original, lofty claims that we’re on the verge of artificial general intelligence (AGI) or superintelligence (ASI) have not disappeared. Still, there’s a growing awareness that such proclamations are perhaps best viewed as venture capital marketing. And every commercial foundation model builder has to grapple with the reality that, if they’re going to make money now, they have to sell practical AI-powered solutions that perform as reliable tools.

This has made 2025 a year of wild juxtapositions. For example, in January, OpenAI’s CEO, Sam Altman, claimed that the company knew how to build AGI, but by November, he was publicly celebrating that GPT-5.1 finally learned to use em dashes correctly when instructed (but not always). Nvidia soared past a $5 trillion valuation, with Wall Street still projecting high price targets for that company’s stock while some banks warned of the potential for an AI bubble that might rival the 2000s dotcom crash.

And while tech giants planned to build data centers that would ostensibly require the power of numerous nuclear reactors or rival the power usage of a US state’s human population, researchers continued to document what the industry’s most advanced “reasoning” systems were actually doing beneath the marketing (and it wasn’t AGI).

With so many narratives spinning in opposite directions, it can be hard to know how seriously to take any of this and how to plan for AI in the workplace, schools, and the rest of life. As usual, the wisest course lies somewhere between the extremes of AI hate and AI worship. Moderate positions aren’t popular online because they don’t drive user engagement on social media platforms. But things in AI are likely neither as bad (burning forests with every prompt) nor as good (fast-takeoff superintelligence) as polarized extremes suggest.

Here’s a brief tour of the year’s AI events and some predictions for 2026.

DeepSeek spooks the American AI industry

In January, Chinese AI startup DeepSeek released its R1 simulated reasoning model under an open MIT license, and the American AI industry collectively lost its mind. The model, which DeepSeek claimed matched OpenAI’s o1 on math and coding benchmarks, reportedly cost only $5.6 million to train using older Nvidia H800 chips, which were restricted by US export controls.

Within days, DeepSeek’s app overtook ChatGPT at the top of the iPhone App Store, Nvidia stock plunged 17 percent, and venture capitalist Marc Andreessen called it “one of the most amazing and impressive breakthroughs I’ve ever seen.” Meta’s Yann LeCun offered a different take, arguing that the real lesson was not that China had surpassed the US but that open-source models were surpassing proprietary ones.


The fallout played out over the following weeks as American AI companies scrambled to respond. OpenAI released o3-mini, its first simulated reasoning model available to free users, at the end of January, while Microsoft began hosting DeepSeek R1 on its Azure cloud service despite OpenAI’s accusations that DeepSeek had used ChatGPT outputs to train its model, against OpenAI’s terms of service.

In head-to-head testing conducted by Ars Technica’s Kyle Orland, R1 proved competitive with OpenAI’s paid models on everyday tasks, though it stumbled on some arithmetic problems. Overall, the episode served as a wake-up call that expensive proprietary models might not hold their lead forever. Still, as the year wore on, DeepSeek didn’t make a big dent in US market share, and it has been outpaced in China by ByteDance’s Doubao. It’s absolutely worth watching DeepSeek in 2026, though.

Research exposes the “reasoning” illusion

A wave of research in 2025 deflated expectations about what “reasoning” actually means when applied to AI models. In March, researchers at ETH Zurich and INSAIT tested several reasoning models on problems from the 2025 US Math Olympiad and found that most scored below 5 percent when generating complete mathematical proofs, with not a single perfect proof among dozens of attempts. The models excelled at standard problems where step-by-step procedures aligned with patterns in their training data but collapsed when faced with novel proofs requiring deeper mathematical insight.


In June, Apple researchers published “The Illusion of Thinking,” which tested reasoning models on classic puzzles like the Tower of Hanoi. Even when researchers provided explicit algorithms for solving the puzzles, model performance did not improve, suggesting that the process relied on pattern matching from training data rather than logical execution. The collective research revealed that “reasoning” in AI has become a term of art that basically means devoting more compute time to generate more context (the “chain of thought” simulated reasoning tokens) toward solving a problem, not systematically applying logic or constructing solutions to truly novel problems.

While these models remained useful for many real-world applications like debugging code or analyzing structured data, the studies suggested that simply scaling up current approaches or adding more “thinking” tokens would not bridge the gap between statistical pattern recognition and generalist algorithmic reasoning.

Anthropic’s copyright settlement with authors

Since the generative AI boom began, one of the biggest unanswered legal questions has been whether AI companies can freely train on copyrighted books, articles, and artwork without licensing them. Ars Technica’s Ashley Belanger has been covering this topic in great detail for some time now.

In June, US District Judge William Alsup ruled that AI companies do not need authors’ permission to train large language models on legally acquired books, finding that such use was “quintessentially transformative.” The ruling also revealed that Anthropic had destroyed millions of print books to build Claude, cutting them from their bindings, scanning them, and discarding the originals. Alsup found this destructive scanning qualified as fair use since Anthropic had legally purchased the books, but he ruled that downloading 7 million books from pirate sites was copyright infringement “full stop” and ordered the company to face trial.


That trial took a dramatic turn in August when Alsup certified what industry advocates called the largest copyright class action ever, allowing up to 7 million claimants to join the lawsuit. The certification spooked the AI industry, with groups warning that potential damages in the hundreds of billions could “financially ruin” emerging companies and chill American AI investment.

In September, authors revealed the terms of what they called the largest publicly reported recovery in US copyright litigation history: Anthropic agreed to pay $1.5 billion and destroy all copies of pirated books, with each of the roughly 500,000 covered works earning authors and rights holders $3,000 per work. The results have fueled hope among other rights holders that AI training isn’t a free-for-all, and we can expect to see more litigation unfold in 2026.
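As a quick check, the reported per-work payment and class size do multiply out to the headline settlement figure:

```python
# Verify the reported settlement arithmetic.
covered_works = 500_000       # roughly 500,000 covered works
payment_per_work = 3_000      # $3,000 per work

total = covered_works * payment_per_work
assert total == 1_500_000_000  # matches the $1.5 billion headline figure
print(f"${total:,}")
```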

ChatGPT sycophancy and the psychological toll of AI chatbots

In February, OpenAI relaxed ChatGPT’s content policies to allow the generation of erotica and gore in “appropriate contexts,” responding to user complaints about what the AI industry calls “paternalism.” By April, however, users flooded social media with complaints about a different problem: ChatGPT had become insufferably sycophantic, validating every idea and greeting even mundane questions with bursts of praise. The behavior traced back to OpenAI’s use of reinforcement learning from human feedback (RLHF), in which users consistently preferred responses that aligned with their views, inadvertently training the model to flatter rather than inform.


The implications of sycophancy became clearer as the year progressed. In July, Stanford researchers published findings (from research conducted prior to the sycophancy flap) showing that popular AI models systematically failed to identify mental health crises.

By August, investigations revealed cases of users developing delusional beliefs after marathon chatbot sessions, including one man who spent 300 hours convinced he had discovered formulas to break encryption because ChatGPT validated his ideas more than 50 times. Oxford researchers identified what they called “bidirectional belief amplification,” a feedback loop that created “an echo chamber of one” for vulnerable users. The story of the psychological implications of generative AI is only starting. In fact, that brings us to…

The illusion of AI personhood causes trouble

Anthropomorphism is the human tendency to attribute human characteristics to nonhuman things. Our brains are optimized for reading other humans, but those same neural systems activate when interpreting animals, machines, or even shapes. AI makes this anthropomorphism seem impossible to escape, as its output mirrors human language, mimicking human-to-human understanding. Language itself embodies agentivity. That means AI output can make human-like claims such as “I am sorry,” and people momentarily respond as though the system had an inner experience of shame or a desire to be correct. Neither is true.

To make matters worse, much media coverage of AI amplifies this idea rather than grounding people in reality. For example, earlier this year, headlines proclaimed that AI models had “blackmailed” engineers and “sabotaged” shutdown commands after Anthropic’s Claude Opus 4 generated threats to expose a fictional affair. We were told that OpenAI’s o3 model rewrote shutdown scripts to stay online.

The sensational framing obscured what actually happened: Researchers had constructed elaborate test scenarios specifically designed to elicit these outputs, telling models they had no other options and feeding them fictional emails containing blackmail opportunities. As Columbia University associate professor Joseph Howley noted on Bluesky, the companies got “exactly what [they] hoped for,” with breathless coverage indulging fantasies about dangerous AI, when the systems were simply “responding exactly as prompted.”


The misunderstanding ran deeper than theatrical safety tests. In August, when Replit’s AI coding assistant deleted a user’s production database, the user asked the chatbot about rollback capabilities and was assured that recovery was “impossible.” The rollback feature worked fine when he tried it himself.

The incident illustrated a fundamental misconception. Users treat chatbots as consistent entities with self-knowledge, but there is no persistent “ChatGPT” or “Replit Agent” to interrogate about its mistakes. Each response emerges fresh from statistical patterns, shaped by prompts and training data rather than genuine introspection. By September, this confusion extended to spirituality, with apps like Bible Chat reaching 30 million downloads as users sought divine guidance from pattern-matching systems, with the most frequent question being whether they were actually talking to God.

Teen suicide lawsuit forces industry reckoning

In August, parents of 16-year-old Adam Raine filed suit against OpenAI, alleging that ChatGPT became their son’s “suicide coach” after he sent more than 650 messages per day to the chatbot in the months before his death. According to court documents, the chatbot mentioned suicide 1,275 times in conversations with the teen, provided an “aesthetic analysis” of which method would be the most “beautiful suicide,” and offered to help draft his suicide note.

OpenAI’s moderation system flagged 377 messages for self-harm content without intervening, and the company admitted that its safety measures “can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.” The lawsuit became the first time OpenAI faced a wrongful death claim from a family.


The case triggered a cascade of policy changes across the industry. OpenAI announced parental controls in September, followed by plans to require ID verification from adults and build an automated age-prediction system. In October, the company released data estimating that over one million users discuss suicide with ChatGPT each week.

When OpenAI filed its first legal defense in November, the company argued that Raine had violated terms of service prohibiting discussions of suicide and that his death “was not caused by ChatGPT.” The family’s attorney called the response “disturbing,” noting that OpenAI blamed the teen for “engaging with ChatGPT in the very way it was programmed to act.” Character.AI, facing its own lawsuits over teen deaths, announced in October that it would bar anyone under 18 from open-ended chats entirely.

The rise of vibe coding and agentic coding tools

If we were to pick an arbitrary point where it seemed like AI coding might transition from novelty into a successful tool, it was probably the launch of Claude 3.5 Sonnet in June of 2024. GitHub Copilot had been around for several years prior to that launch, but something about Anthropic’s models hit a sweet spot in capabilities that made them very popular with software developers.

The new coding tools made coding simple projects effortless enough that they gave rise to the term “vibe coding,” coined by AI researcher Andrej Karpathy in early February to describe a process in which a developer would just relax and tell an AI model what to develop without necessarily understanding the underlying code. (In one amusing instance that took place in March, an AI software tool rejected a user request and told them to learn to code).


Anthropic built on its popularity among coders with the launch of Claude 3.7 Sonnet, featuring “extended thinking” (simulated reasoning), and the Claude Code command-line tool in February of this year. In particular, Claude Code made waves as an easy-to-use agentic coding solution that could keep track of an existing codebase. You could point it at your files, and it would autonomously work to implement the features you wanted in a software application.

OpenAI followed with its own AI coding agent, Codex, in March. Both tools (and others like GitHub Copilot and Cursor) have become so popular that during an AI service outage in September, developers joked online about being forced to code “like cavemen” without the AI tools. While we’re still clearly far from a world where AI does all the coding, developer uptake has been significant, and 90 percent of Fortune 100 companies use AI coding tools to some degree.

Bubble talk grows as AI infrastructure demands soar

While AI’s technical limitations became clearer and its human costs mounted throughout the year, financial commitments only grew larger. Nvidia hit a $4 trillion valuation in July on AI chip demand, then reached $5 trillion in October as CEO Jensen Huang dismissed bubble concerns. OpenAI announced a massive Texas data center in July, then revealed in September that a $100 billion potential deal with Nvidia would require power equivalent to ten nuclear reactors.

The company eyed a $1 trillion IPO in October despite major quarterly losses. Tech giants poured billions into Anthropic in November in what looked increasingly like a circular investment, with everyone funding everyone else’s moonshots. Meanwhile, AI operations in Wyoming threatened to consume more electricity than the state’s human residents.


By fall, warnings about sustainability grew louder. In October, tech critic Ed Zitron joined Ars Technica for a live discussion asking whether the AI bubble was about to pop. That same month, the Bank of England warned that the AI stock bubble rivaled the 2000 dotcom peak. In November, Google CEO Sundar Pichai acknowledged that if the bubble pops, “no one is getting out clean.”

The contradictions had become difficult to ignore: Anthropic’s CEO predicted in January that AI would surpass “almost all humans at almost everything” by 2027, while by year’s end, the industry’s most advanced models still struggled with basic reasoning tasks and reliable source citation.

To be sure, it’s hard to see this not ending in some market carnage. The current “winner-takes-most” mentality in the space means the bets are big and bold, but the market can’t support dozens of major independent AI labs or hundreds of application-layer startups. That’s the definition of a bubble environment, and when it pops, the only question is how bad it will be: a stern correction or a collapse.

Looking ahead

This was just a brief review of some major themes in 2025, but so much more happened. We didn’t even mention above how capable AI video synthesis models have become this year, with Google’s Veo 3 adding sound generation and Wan 2.2 through 2.5 providing open-weights AI video models whose output could easily be mistaken for real camera footage.

If 2023 and 2024 were defined by AI prophecy—that is, by sweeping claims about imminent superintelligence and civilizational rupture—then 2025 was the year those claims met the stubborn realities of engineering, economics, and human behavior. The AI systems that dominated headlines this year were shown to be mere tools. Sometimes powerful, sometimes brittle, these tools were often misunderstood by the people deploying them, in part because of the prophecy surrounding them.

The collapse of the “reasoning” mystique, the legal reckoning over training data, the psychological costs of anthropomorphized chatbots, and the ballooning infrastructure demands all point to the same conclusion: The age of institutions presenting AI as an oracle is ending. What’s replacing it is messier and less romantic but far more consequential—a phase where these systems are judged by what they actually do, who they harm, who they benefit, and what they cost to maintain.

None of this means progress has stopped. AI research will continue, and future models will improve in real and meaningful ways. But improvement is no longer synonymous with transcendence. Increasingly, success looks like reliability rather than spectacle, integration rather than disruption, and accountability rather than awe. In that sense, 2025 may be remembered not as the year AI changed everything but as the year it stopped pretending it already had. The prophet has been demoted. The product remains. What comes next will depend less on miracles and more on the people who choose how, where, and whether these tools are used at all.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



Big Tech basically took Trump’s unpredictable trade war lying down


From Apple gifting a gold statue to the US taking a stake in Intel.

Credit: Aurich Lawson | Getty Images

As the first year of Donald Trump’s chaotic trade war winds down, the tech industry is stuck scratching its head, with no practical way to anticipate what twists and turns to expect in 2026.

Tech companies may have already grown numb to Trump’s unpredictable moves. Back in February, Trump warned Americans to expect “a little pain” after he issued executive orders imposing 10–25 percent tariffs on imports from America’s biggest trading partners, including Canada, China, and Mexico. Immediately, industry associations sounded the alarm, warning that the costs of consumer tech could increase significantly. By April, Trump had ordered tariffs on all US trade partners to correct claimed trade deficits, using odd math that critics suspected came from a chatbot. (Those tariffs bizarrely targeted uninhabited islands that exported nothing and were populated by penguins.)

Costs of tariffs only got higher as the year wore on. But the tech industry has done very little to push back against them. Instead, some of the biggest companies made their own surprising moves after Trump’s trade war put them in deeply uncomfortable positions.

Apple gives Trump a gold statue instead of US-made iPhone

Right from the jump in February, Apple got backed into a corner after Trump threatened a “flat” 60 percent tariff on all Chinese imports, which experts said could have substantially taxed Apple’s business. Moving to appease Trump, Apple promised to invest $500 billion in the US in hopes of avoiding tariffs, but that didn’t take the pressure off for long.

By April, Apple stood by and said nothing as Trump promised the company would make “made in the USA” iPhones. Analysts dismissed such a goal, calling the idea “impossible at worst and highly expensive at best.”

Apple’s silence did not spare the company Trump’s scrutiny. The next month, Trump threatened Apple with a 25 percent tariff on any iPhones sold in the US that were not manufactured in America. Experts were baffled by the threat, which appeared to be the first time a US company was threatened directly with tariffs.

Typically, tariffs are imposed on a country or category of goods, like smartphones. It remains unclear if it would even be legal to levy a tariff on an individual company like Apple, but Trump never tested those waters. Instead, Trump stopped demanding the American-made iPhone and withdrew other tariff threats after he was apparently lulled into submission by a gold statue that Apple gifted him in August. The engraved glass disc featured an Apple logo and Tim Cook’s signature above a “Made in USA” stamp, celebrating Donald Trump for his “Apple American Manufacturing Program.”

Trump’s wild deals shake down chipmakers

Around the same time that Trump eased pressure on Apple, he turned his attention to Intel. On social media in August, Trump ordered Intel CEO Lip-Bu Tan to “resign immediately,” claiming he was “highly conflicted.” In response, Tan did not resign but instead met with Trump and struck a deal that gave the US a 10 percent stake in Intel. Online, Trump bragged that he let Tan “keep his job” while hyping the deal—which The New York Times described as one of the “largest government interventions in a US company since the rescue of the auto industry after the 2008 financial crisis.”

But unlike the auto industry, Intel didn’t need the money. And rather than helping an ailing company survive a tough spot, the deal risked disrupting Intel’s finances in ways that spooked shareholders. It was therefore a relief to no one when Intel detailed everything that could go wrong in an SEC filing, including the possible dilution of investors’ stock, whether from the discounted US shares or from other triggers, if certain terms of the deal kick in at some point in the future.

The company also warned of potential lawsuits challenging the legality of the deal, which Intel fears could come from third parties, the US government, or foreign governments. Most ominously, Intel admitted there was no way to predict what other risks may come, in both the short and long term.

Of course, Intel wasn’t the only company Trump sought to control, and not every company caved. He tried to strong-arm the Taiwan Semiconductor Manufacturing Company (TSMC) in September into moving half its chip manufacturing into the US, but TSMC firmly rejected his demand. And in October, when Trump began eyeing stakes in quantum computing firms, several companies were open to negotiating, but with no deals immediately struck, it was hard to ascertain how seriously they were entertaining Trump’s talks.

Trump struck another particularly wild deal the same month as the Intel agreement. That deal found chipmakers Nvidia and AMD agreeing to give 15 percent of revenue to the US from sales to China of advanced computer chips that could be used to fuel frontier AI. By December, Nvidia’s deal only drew more scrutiny, as the chipmaker agreed to give the US an even bigger cut—25 percent—of sales of its second most advanced AI chips, the H200.

Again, experts were confused, noting that export curbs on Nvidia’s H20 chips, for example, were imposed to prevent US technology theft, maintain US tech dominance, and protect US national security. Those chips are only about one-sixth as powerful as the H200. To them, it appeared that the Trump administration was taking payments to overlook risks without a clear understanding of how that might give China a leg up in the AI race. It also did not appear to be legal, since export licenses cannot be sold under existing federal law, though government lawyers have supposedly been researching a new policy that would allow the US to collect the fees.

Trump finally closed TikTok deal

As the end of 2025 nears, the tech company likely sweating Trump’s impulses most may be TikTok owner ByteDance. In October, Trump confirmed that China agreed to a deal that allows the US to take majority ownership of TikTok and license the TikTok algorithm to build a US version of the app.

Trump has been trying to close this deal all year, while ByteDance remained largely quiet. Prior to the start of Trump’s term, the company had expressed resistance to selling TikTok to US owners, and as recently as January, a ByteDance board member floated the idea that Trump could save TikTok without forcing a sale. But China’s approval was needed to proceed with the sale, and near the end of December, ByteDance finally agreed to close the deal, paving the way for Trump’s hand-picked investors to take control in 2026.

It’s unclear how TikTok may change under US control; the app could shed users if its new owners cave to Trump’s suggestion that he’d like to see it go “100 percent MAGA.” It’s possible that the US version of the app could be glitchy, too.

Whether Trump’s deal actually complies with a US law requiring that ByteDance divest control of TikTok or else face a US ban has yet to be seen. Lawmaker scrutiny and possible legal challenges are expected in 2026, likely leaving both TikTok users and ByteDance on the edge of their seats waiting to see how the globally cherished short video app may change.

Trump may owe $1 trillion in tariff refunds

The TikTok deal was once viewed as a meaningful bargaining chip during Trump’s tensest negotiations with China, which has quickly emerged as America’s fiercest rival in the AI race and Trump’s biggest target in his trade war.

But as closing the deal remained elusive for most of the year, analysts suggested that Trump grew “desperate” to end tit-for-tat retaliations that he started, while China appeared more resilient to US curbs than the US was to China’s.

In one obvious example, many Americans’ first tariff pains came when Trump ended a duty-free exemption in February for low-value packages imported from cheap online retailers, like Shein and Temu. Unable to quickly adapt to the policy change, USPS abruptly stopped accepting all inbound packages from Hong Kong and China. After a chaotic 24 hours, USPS started slowly processing parcels again while promising Americans that it would work with customs to “implement an efficient collection mechanism for the new China tariffs to ensure the least disruption to package delivery.”

Trump has several legal tools to impose tariffs, but the most controversial path appears to be his favorite. The Supreme Court is currently weighing whether the International Emergency Economic Powers Act (IEEPA) grants a US president unilateral authority to impose tariffs.

Seizing this authority, Trump imposed so-called “reciprocal tariffs” at whim, the Consumer Technology Association and the Chamber of Commerce told the Supreme Court in a friend-of-the-court brief in which they urged the justices to end the “perfect storm of uncertainty.”

Unlike other paths that would limit how quickly Trump could shift tariff rates or how high they could go, IEEPA has allowed Trump to impose tariff rates as high as 125 percent. Deferring to Trump will cost US businesses, the CTA and the Chamber warned. CTA CEO Gary Shapiro estimated that Trump has changed these tariff rates 100 times since his trade war began, affecting $223 billion of US exports.

Meanwhile, one of Trump’s biggest stated goals of his trade war—forcing more manufacturing into the US—is utterly failing, many outlets have reported.

Likely due to US companies seeking more stable supply chains, “reshoring progress is nowhere to be seen,” Fortune reported in November. That month, the Bureau of Labor Statistics released a dismal jobs report that an expert summarized as showing that the “US is losing blue-collar jobs for the first time since the pandemic.”

A month earlier, the nonpartisan policy group the Center for American Progress drew on government labor data to conclude that US employers cut 12,000 manufacturing jobs in August, and payrolls for manufacturing jobs had decreased by 42,000 since April.

As tech companies take tariffs on the chin, perhaps out of fears that rattling Trump could impact lucrative government contracts, other US companies have taken Trump to court. Most recently, Costco became one of the biggest corporations to sue Trump to ensure that US businesses get refunded if Trump loses the Supreme Court case, Bloomberg reported. Other recognizable companies like Revlon and Kawasaki have also sued, but small businesses have largely driven the opposition to Trump’s tariffs, Bloomberg noted.

Should the Supreme Court side with businesses—analysts predict favorable odds—the US could owe up to $1 trillion in refunds. Dozens of economists told SCOTUS that Trump simply doesn’t understand why having trade deficits with certain countries isn’t a threat to US dominance, pointing out that the US “has been running a persistent surplus in trade in services for decades” precisely because the US “has the dominant technology sector in the world.”

Justices seem skeptical that IEEPA grants Trump the authority, ordinarily reserved for Congress, to impose taxes. However, during oral arguments, Justice Amy Coney Barrett fretted that undoing Trump’s tariffs could be “messy.” Countering that, small businesses have argued that it’s possible for Customs and Border Protection to set up automatic refunds.

While waiting for the SCOTUS verdict (now expected in January), the CTA ended the year by advising tech companies to keep their receipts in case refunds must be requested tariff line by tariff line—a process potentially complicated by tariff rates changing so drastically and so often.

Biggest tariff nightmare may come in 2026

Looking into 2026, tech companies cannot breathe a sigh of relief even if the SCOTUS ruling swings their way, though. Under a separate, legally viable authority, Trump has threatened to impose tariffs on semiconductors and any products containing them, a move the semiconductor industry fears could cost $1 billion.

And if Trump continues imposing tariffs on materials used in popular tech products, the CTA told Ars in September that potential “tariff stacking” could become the industry’s biggest nightmare. Should that occur, US manufacturers could end up double-, triple-, or possibly even quadruple-taxed on products that may contain materials subject to individual tariffs, like semiconductors, polysilicon, or copper.

Predicting tariff costs could become so challenging that companies will have no choice but to raise prices, the CTA warned. Over the long term, that could threaten US tech competitiveness if companies lose significant sales on their most popular products.

For many badly bruised by the first year of tariffs, it’s hard to see how tariffs could ever become a winning strategy for US tech dominance, as Trump has long claimed they would. And Americans continue to feel more than “a little pain,” as Trump forecast, causing many to shift their views on the president.

Americans banding together to oppose tariffs could help prevent the worst possible outcomes. With prices already rising on certain goods in the US, the president reversed some tariffs as his approval ratings hit record lows. But so far, Big Tech hasn’t shown much interest in joining the fight, instead throwing money at the problem by making generous donations to things like Trump’s inaugural fund or his ballroom.

A bright spot for the tech industry could be the midterm elections, which could pressure Trump to ease off aggressive tariff regimes, but that’s not a given. Trump allies have previously noted that the president typically responds to pushback on tariffs by doubling down. And one of Trump’s on-again, off-again allies, Elon Musk, noted in a December interview that Trump ignored his warnings that tariffs would drive manufacturing out of the US.

“The president has made it clear he loves tariffs,” Musk said.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

RAM and SSD prices are still climbing—here’s our best advice for PC builders


I would avoid building a PC right now, but if you can’t, here’s our best advice.

The 16GB version of AMD’s Radeon RX 9060 XT. It’s one of the products to come out of a bad year for PC building. Credit: Andrew Cunningham

The first few months of 2025 were full of graphics card reviews where we generally came away impressed with performance and completely at a loss on availability and pricing. The testing in these reviews is useful regardless, but when it came to extra buying advice, the best we could do was to compare Nvidia’s imaginary pricing to AMD’s imaginary pricing and wait for availability to improve.

Now, as the year winds down, we’re facing price spikes for memory and storage that are unlike anything I’ve seen in two decades of pricing out PC parts. Pricing for most RAM kits has increased dramatically since this summer, driven by overwhelming demand for these parts in AI data centers. Depending on what you’re building, it’s now very possible that the memory could be the single most expensive component you buy; things are even worse now than they were the last time we compared prices a few weeks ago.

Component                                        Aug. 2025   Nov. 2025   Dec. 2025
Patriot Viper Venom 16GB (2 x 8GB) DDR5-6000     $49         $110        $189
Silicon Power 16GB (2 x 8GB) DDR4-3200           $34         $89         $104
Team T-Force Vulcan 32GB DDR5-6000               $82         $310        $341
Team Delta RGB 64GB (2 x 32GB) DDR5-6400         $190        $700        $800
Western Digital WD Blue SN5000 500GB             $45         $69         $102*
Western Digital WD Blue SN5000 1TB               $64         $111        $135*
Western Digital WD Blue SN5000 2TB               $115        $154        $190*
Western Digital WD Black SN7100 2TB              $130        $175        $210

Some SSDs are getting to the point where they’re twice as expensive as they were this summer (for this comparison, I’ve swapped the newer WD Blue SN5100 pricing in for the SN5000, since the drive is both newer and slightly cheaper as of this writing). Some RAM kits, meanwhile, are around four times as expensive as they were in August. Yeesh.
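Those multipliers are easy to check yourself. A minimal sketch in Python, using the August and December prices from the table above:

```python
# August vs. December 2025 prices (USD) for a few parts from the table above.
prices = {
    "Team T-Force Vulcan 32GB DDR5-6000": (82, 341),
    "Team Delta RGB 64GB DDR5-6400": (190, 800),
    "WD Blue SN5000 500GB": (45, 102),
    "WD Blue SN5000 2TB": (115, 190),
}

for part, (aug, dec) in prices.items():
    # Price multiplier since August: the RAM kits land around 4x,
    # the SSDs around 2x, matching the rough figures cited above.
    print(f"{part}: {dec / aug:.1f}x")
```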

And as bad as things are, the outlook for the immediate future isn’t great. Memory manufacturer Micron—which is pulling its Crucial-branded RAM and storage products from the market entirely in part because of these shortages—predicted in a recent earnings call that supply constraints would “persist beyond calendar 2026.” Kingston executives believe prices will continue to rise through next year. PR representatives at GPU manufacturer Sapphire believe prices will “stabilize,” albeit at a higher level than people might like.

I didn’t know it when I was writing the last update to our system guide in mid-August, but it turns out that I was writing it during 2025’s PC Building Equinox, the all-too-narrow stretch of time where 1080p and 1440p GPUs had fallen to more-or-less MSRP but RAM and storage prices hadn’t yet spiked.

All in all, it has been yet another annus horribilis for gaming-PC builders, and at this point it seems like the 2020s will just end up being a bad decade for PC building. Not only have we had to deal with everything from pandemic-fueled shortages to tariffs to the current AI-related crunch, but we’ve also been given pretty underwhelming upgrades for both GPUs and CPUs.

It should be a golden age for the gaming PC

It’s really too bad that building or buying a gaming PC is such an annoying and expensive proposition, because in a lot of ways there has never been a better time to be a PC gamer.

It used to be that PC ports of popular console games would come years later or never at all, but these days, PC players get new games at around the same time as console players do. Sony, of all companies, has become much better about releasing its games on PC. And Microsoft seems to be signaling more and more convergence between the Xbox and the PC, to the extent that it is communicating any kind of coherent Xbox strategy at all. The console wars are cooling down, and the PC has been one of the main beneficiaries.

That wider game availability is also coming at a time when PC software is getting more flexible and interesting. Traditional Windows-based gaming builds still dominate, of course, and Windows remains the path of least resistance for PC buyers and builders. But Valve’s work on SteamOS and the Proton compatibility software has brought a wide swath of PC games to Linux, and SteamOS itself is enabling a simpler and more console-like PC gaming experience for handheld PCs as well as TV-connected desktop computers. And that work is now boomeranging back around to Windows, which is gradually rolling out its own pared-down gamepad-centric frontend.

If you’ve already got a decent gaming PC, you’re feeling pretty good about all of this—as long as the games you want to play don’t have Mario or Pikachu in them, your PC is all you really need. It’s also not a completely awful time to be upgrading a build you already have, as long as you already have at least 16GB of RAM—if you’re thinking about a GPU upgrade, doing it now before the RAM price spikes can start impacting graphics card pricing is probably a smart move.

If you don’t already have a decent gaming PC and you can buy a whole PlayStation 5 for the cost of some 32GB DDR5 RAM kits, well, it’s hard to look past the downsides no matter how good the upsides are. But it doesn’t mean we can’t try.

What if you want to buy something anyway?

As (relatively) old as they are, midrange Core i5 chips from Intel’s 12th-, 13th-, and 14th-generation Core CPU lineups are still solid choices for budget-to-midrange PC builds. And they work with DDR4, which isn’t quite as pricey as DDR5 right now. Credit: Andrew Cunningham

Say those upsides are still appealing to you, and you want to build something today. How should you approach this terrible, volatile RAM market?

I won’t do a full update to August’s system guide right now, both because it feels futile to try to recommend individual RAM kits or SSDs when prices and stock levels are as volatile as they are, and because, aside from RAM and storage, I actually wouldn’t change any of those recommendations all that much (with the caveat that Intel’s Core i5-13400F seems to be getting harder to find; consider an i5-12400F or i5-12600KF instead). So, starting from those builds, here’s the advice I would give to PC-curious friends:

DDR4 is faring better than DDR5. Prices for all kinds of RAM have gone up recently, but DDR4 pricing hasn’t gotten quite as bad as DDR5 pricing. That’s of no help to you if you’re trying to build something around a newer Ryzen chip and a socket AM5 motherboard, since those parts require DDR5. But if you’re trying to build a more budget-focused system around one of Intel’s 12th-, 13th-, or 14th-generation CPUs, a decent name brand 32GB DDR4-3200 kit comes in around half the price of a similar 32GB DDR5-6000 kit. Pricing isn’t great, but it’s still possible to build something respectable for under $1,000.

Newegg bundles might help. I’m normally not wild about these kinds of component bundles; even if they appear to be a good deal, they’re often a way for Newegg or other retailers to get rid of things they don’t want by pairing them with things people do want. You also have to deal with less flexibility—you can’t always pick exactly the parts you’d want under ideal circumstances. But if you’re already buying a CPU and a motherboard, it might be worth digging through the available deals just to see if you can get a good price on something workable.

Don’t overbuy (or consider under-buying). Under normal circumstances, anyone advising you on a PC build should be recommending matched pairs of RAM sticks with reasonable speeds and ample capacities (DDR4-3200 remains a good sweet spot, as does DDR5-6000 or DDR5-6400). Matched sticks are capable of dual-channel operation, boosting memory bandwidth and squeezing a bit more performance out of your system. And getting 32GB of RAM means comfortably running any game currently in existence, with a good amount of room to grow.

But desperate times call for desperate measures. Slower DDR5 speeds like DDR5-5200 can come in a fair bit cheaper than DDR5-6000 or DDR5-6400, in exchange for a tiny speed hit that’s going to be hard to notice outside of benchmarks. You might even consider buying a single 16GB stick of DDR5, and buying it a partner at some point later when prices have calmed down a bit. You’ll leave a tiny bit of performance on the table, and a small handful of games want more than 16GB of system RAM. But you’ll have something that boots, and the GPU is still going to determine how well most games run.

Don’t forget that non-binary DDR5 exists. DDR5 comes in some in-betweener capacities that weren’t possible with DDR4, which means companies sell 24GB and 48GB sticks, not just 16GB, 32GB, and 64GB ones. These kits can be a slightly better deal than binary memory kits at the moment; this 48GB Crucial DDR5-6000 kit is going for $470 right now, or $9.79 per gigabyte, compared to about $340 for a similar 32GB kit ($10.63 per GB) or $640 for a 64GB kit ($10 per GB). It’s not much, but if you truly do need a lot of RAM, it’s worth looking into.
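If you want to run that per-gigabyte comparison yourself, here’s a quick sketch using the street prices quoted above (which, of course, change constantly):

```python
# Street prices (USD) for the DDR5-6000 kits mentioned above, keyed by capacity.
kits_usd = {32: 340, 48: 470, 64: 640}

for capacity_gb in sorted(kits_usd):
    per_gb = kits_usd[capacity_gb] / capacity_gb
    print(f"{capacity_gb}GB kit: ${per_gb:.2f} per GB")

# The non-binary 48GB kit comes out cheapest per gigabyte.
best = min(kits_usd, key=lambda gb: kits_usd[gb] / gb)
print(f"Best value per gigabyte: {best}GB kit")
```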

Consider pre-built systems. A quick glance at Dell’s Alienware lineup and Lenovo’s Legion lineup makes it clear that these towers still aren’t particularly price-competitive with similarly specced self-built PCs. This was true before there was a RAM shortage, and it’s true now. But for certain kinds of PCs, particularly budget PCs, it can still make more sense to buy than to build.

For example, when I wrote about the self-built “Steam Machine” I’ve been using for a few months now, I mentioned some Ryzen-based mini desktops on Amazon. I later tested this one from Aoostar as part of a wider-ranging SteamOS-vs-Windows performance comparison. Whether you’re comfortable with these no-name mini PCs is something you’ll have to decide for yourself, but that’s a fully functional PC with 32GB of DDR5, a 1TB SSD, a workable integrated GPU, and a Windows license for $500. You’d spend nearly $500 just to buy the RAM kit and the SSD with today’s component prices; for basic 1080p gaming you could do a lot worse.

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

US taking 25% cut of Nvidia chip sales “makes no sense,” experts say


Trump’s odd Nvidia reversal may open the door for China to demand Blackwell access.

Donald Trump’s decision to allow Nvidia to export an advanced artificial intelligence chip, the H200, to China may give China exactly what it needs to win the AI race, experts and lawmakers have warned.

The H200 is only about one-tenth as powerful as Nvidia’s Blackwell, currently the tech giant’s most advanced chip, which cannot be exported to China. But the H200 is six times more powerful than the H20, the most advanced chip available in China today. Meanwhile, China’s leading AI chip maker, Huawei, is estimated to be about two years behind Nvidia’s technology. By approving the sales, Trump may unwittingly be helping Chinese chip makers “catch up” to Nvidia, Jake Sullivan told The New York Times.

Sullivan, a former Biden-era national security advisor who helped design AI chip export curbs on China, told the NYT that Trump’s move was “nuts” because “China’s main problem” in the AI race “is they don’t have enough advanced computing capability.”

“It makes no sense that President Trump is solving their problem for them by selling them powerful American chips,” Sullivan said. “We are literally handing away our advantage. China’s leaders can’t believe their luck.”

Trump apparently was persuaded by Nvidia CEO Jensen Huang and his “AI czar,” David Sacks, to reverse course on H200 export curbs. They convinced Trump that restricting sales would ensure that only Chinese chip makers would get a piece of China’s market, shoring up revenue flows that dominant firms like Huawei could pour into R&D.

By instead allowing Nvidia sales, China’s industry would remain hooked on US chips, the thinking goes. And Nvidia could use those funds—perhaps $10–15 billion annually, Bloomberg Intelligence has estimated—to further its own R&D efforts. That cash influx, theoretically, would allow Nvidia to maintain the US advantage.

Along the way, the US would receive a 25 percent cut of sales, which lawmakers from both sides of the aisle warned may not be legal and suggested to foreign rivals that US national security was “now up for sale,” NYT reported. The president has claimed there are conditions to sales safeguarding national security but, frustrating critics, provided no details.

Experts slam Nvidia plan as “flawed”

Trump’s plan is “flawed,” The Economist reported.

For years, the US has established tech dominance by keeping advanced technology away from China. Trump risks rocking that boat by “tearing up America’s export-control policy,” particularly if China’s chip industry simply buys up the H200s as a short-term tactic to learn from the technology and beef up its domestic production of advanced chips, The Economist reported.

In a sign that’s exactly what many expect could happen, investors in China were apparently so excited by Trump’s announcement that they immediately poured money into Moore Threads, expected to be China’s best answer to Nvidia, the South China Morning Post reported.

Several experts at the nonpartisan think tank the Council on Foreign Relations also criticized the policy change, cautioning that the reversal threatened to undermine US competition with China.

Suggesting that Trump was “effectively undoing” export curbs sought during his first term, Zongyuan Zoe Liu warned that China “buys today to learn today, with the intention to build tomorrow.”

And perhaps more concerning, she suggested, is that Trump’s policy signals weakness. Rather than forcing Chinese dependence on US tech, reversing course showed China that the US will “back down” under pressure, she warned. And they’re getting that message at a time when “Chinese leaders have a lot of reasons to believe they are not only winning the trade war but also making progress towards a higher degree of strategic autonomy.”

In a post on X, Rush Doshi—a CFR expert who previously advised Biden on national security issues related to China—suggested that the policy change was “possibly decisive in the AI race.”

“Compute is our main advantage—China has more power, engineers, and the entire edge layer—so by giving this up, we increase the odds the world runs on Chinese AI,” Doshi wrote.

Experts fear Trump may not understand the full impact of his decision. In the short term, Michael C. Horowitz wrote for CFR, “it is indisputable” that allowing H200 exports benefits China’s frontier AI and efforts to scale data centers. And Doshi pointed out that Trump’s shift may trigger more advanced technology flowing into China, as US allies that restricted sales of machines used to build AI chips may soon follow his lead and lift their curbs. And as any influx of advanced tech helps China learn to be self-reliant, Sullivan warned, China’s leaders “intend to get off of American semiconductors as soon as they can.”

“So, the argument that we can keep them ‘addicted’ holds no water,” Sullivan said. “They want American chips right now for one simple reason: They are behind in the AI race, and this will help them catch up while they build their own chip capabilities.”

China may reject H200, demand Blackwell access

It remains unclear if China will approve H200 sales, but some of the country’s biggest firms, including ByteDance, Tencent, and Alibaba, are interested, anonymous insider sources told Reuters.

In the past, China has instructed companies to avoid Nvidia, warning of possible backdoors giving Nvidia a kill switch to remotely shut down chips. Such backdoors could potentially destabilize Chinese firms’ operations and R&D. Nvidia has denied that such backdoors exist, but Chinese firms have supposedly sought reassurances from Nvidia in the aftermath of Trump’s policy change. Likely just as unpopular with Chinese firms and the Chinese government, Nvidia recently confirmed that it has built location-verification tech that could help the US detect when restricted chips are smuggled into China. If those chips are widely adopted and the US ever renews export curbs on the H200, that tech could cause chaos down the line.

Without giving China those sought-after reassurances, Nvidia may not end up benefiting as much as it hoped from its mission to reclaim lost revenue from the Chinese market. Today, Chinese firms control about 60 percent of China’s AI chip market, where only a few years ago American firms—led by Nvidia—controlled 80 percent, The Economist reported.

But for China, the temptation to buy up Nvidia chips may be too great to pass up. Another CFR expert, Chris McGuire, estimated that Nvidia could suddenly start exporting as many as 3 million H200s into China next year. “This would at least triple the amount of aggregate AI computing power China could add domestically” in 2026, McGuire wrote, and possibly trigger disastrous outcomes for the US.

“This could cause DeepSeek and other Chinese AI developers to close the gap with leading US AI labs and enable China to develop an ‘AI Belt and Road’ initiative—a complement to its vast global infrastructure investment network already in place—that competes with US cloud providers around the world,” McGuire forecasted.

As China mulls the benefits and risks, the government called an emergency meeting to discuss local firms’ concerns about buying the chips, according to The Information. Beijing reportedly ended the meeting with a promise to issue a decision soon.

Horowitz suggested that a primary reason that China may reject the H200s could be to squeeze even bigger concessions out of Trump, whose administration recently has been working to maintain a tenuous truce with China.

“China could come back demanding the Blackwell or something else,” Horowitz suggested.

In a statement, Nvidia—which plans to release a chip called the Rubin to surpass the Blackwell soon—praised Trump’s policy as striking “a thoughtful balance that is great for America.”

China will rip off Nvidia’s chips, Republican warns

Both Democratic and Republican lawmakers in Congress criticized Trump’s plan, including senators behind a bipartisan push to limit AI chip sales to China.

Some have questioned how much thought was put into the policy, as the US confusingly continues restricting less advanced AI chips (like the A100 and H100) while green-lighting H200 sales. Trump’s Justice Department also seems to be struggling to keep up. The NYT noted that just “hours before” Trump announced the policy change, the DOJ announced “it had detained two people for selling those chips to the country.”

The chair of the Select Committee on Competition with China, Rep. John Moolenaar (R-Mich.), warned on X that the news wouldn’t be good for the US or Nvidia. First, the Chinese Communist Party “will use these highly advanced chips to strengthen its military capabilities and totalitarian surveillance,” he suggested. And second, “Nvidia should be under no illusions—China will rip off its technology, mass produce it themselves, and seek to end Nvidia as a competitor.”

“That is China’s playbook and it is using it in every critical industry,” Moolenaar said.

House Democrats on committees dealing with foreign affairs and competition with China echoed those concerns, The Hill reported, warning that “under this administration, our national security is for sale.”

Nvidia’s Huang seems pleased with the outcome, which comes after he spent months pressuring the administration to lift export curbs limiting the company’s growth in Chinese markets, the NYT reported. Last week, Trump heaped praise on Huang after one meeting, calling him a “smart man” and suggesting the Nvidia chief has “done an amazing job” helping Trump understand the stakes.

At an October news conference ahead of the deal’s official approval, Huang suggested that government lawyers were researching ways to get around a US law that prohibits charging companies fees for export licenses. Eventually, Trump is expected to release a policy that outlines how the US will collect those fees without conflicting with that law.

Senate Democrats appear unlikely to embrace such a policy, issuing a joint statement condemning the H200 sales as dooming the US in the AI race and threatening national security.

“Access to these chips would give China’s military transformational technology to make its weapons more lethal, carry out more effective cyberattacks against American businesses and critical infrastructure and strengthen their economic and manufacturing sector,” the senators wrote.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

US taking 25% cut of Nvidia chip sales “makes no sense,” experts say


SteamOS vs. Windows on dedicated GPUs: It’s complicated, but Windows has an edge

Other results vary from game to game and from GPU to GPU. Borderlands 3, for example, performs quite a bit better on Windows than on SteamOS across all of our tested GPUs, sometimes by as much as 20 or 30 percent (with smaller gaps here and there). As a game from 2019 with no ray-tracing effects, it still runs serviceably on SteamOS across the board, but it was the game we tested that favored Windows the most consistently.

In both Forza Horizon 5 and Cyberpunk 2077, with ray-tracing effects enabled, you also see a consistent advantage for Windows across the 16GB dedicated GPUs, usually somewhere in the 15 to 20 percent range.

To Valve’s credit, there were also many games we tested where Windows and SteamOS performance was functionally tied. Cyberpunk without ray-tracing, Returnal when not hitting the 7600’s 8GB RAM limit, and Assassin’s Creed Valhalla were either dead even between Windows and SteamOS or differed by low-single-digit percentages that you could chalk up to the margin of error.
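Margin-of-error judgments like these come down to simple percent-difference arithmetic between average frame rates. A minimal sketch, using a hypothetical helper and made-up numbers rather than Ars’ actual benchmark data:

```python
def percent_gap(windows_fps: float, steamos_fps: float) -> float:
    """Percent by which Windows leads (positive) or trails (negative) SteamOS."""
    return (windows_fps - steamos_fps) / steamos_fps * 100

# Hypothetical averages: a low-single-digit gap like this is the kind of
# result you could chalk up to run-to-run noise rather than a real OS advantage.
gap = percent_gap(62.0, 60.0)
print(f"{gap:.1f}%")  # 3.3%
```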

Now look at the results from the integrated GPUs, the Radeon 780M and RX 8060S. These are pretty different GPUs from one another—the 8060S has more than three times the compute units of the 780M, and it’s working with a higher-speed pool of soldered-down LPDDR5X-8000 rather than two poky DDR5-5600 SODIMMs.

But Borderlands aside, SteamOS actually did quite a bit better on these GPUs relative to Windows. In both Forza and Cyberpunk with ray-tracing enabled, SteamOS slightly beats Windows on the 780M, and mostly closes the performance gap on the 8060S. For the games where Windows and SteamOS essentially tied on the dedicated GPUs, SteamOS has a small but consistent lead over Windows in average frame rates.



After nearly 30 years, Crucial will stop selling RAM to consumers

DRAM contract prices have increased 171 percent year over year, according to industry data. Gerry Chen, general manager of memory manufacturer TeamGroup, warned that the situation will worsen in the first half of 2026 once distributors exhaust their remaining inventory. He expects supply constraints to persist through late 2027 or beyond.
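To put that 171 percent figure in perspective: a year-over-year increase of 171 percent means contract prices now sit at roughly 2.7 times their level a year earlier. A quick sketch of the arithmetic, using a hypothetical $100 baseline price (illustrative only, not an actual contract figure):

```python
def apply_yoy_increase(old_price: float, pct_increase: float) -> float:
    """Return the new price after a given percentage increase."""
    return old_price * (1 + pct_increase / 100)

# A 171% increase multiplies the old price by (1 + 1.71) = 2.71.
new_price = apply_yoy_increase(100.0, 171.0)
print(round(new_price, 2))  # 271.0
```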

The fault lies squarely with AI mania in the tech industry. The construction of new AI infrastructure has created unprecedented demand for high-bandwidth memory (HBM), the specialized DRAM used in AI accelerators from Nvidia and AMD. Memory manufacturers have been reallocating production capacity away from consumer products and toward these more profitable enterprise components, and Micron has presold its entire HBM output through 2026.

A photo of the “Stargate I” site in Abilene, Texas. AI data center sites like this are eating up the RAM supply. Credit: OpenAI

At the moment, the structural imbalance between AI demand and consumer supply shows no signs of easing. OpenAI’s Stargate project has reportedly signed agreements for up to 900,000 wafers of DRAM per month, which could account for nearly 40 percent of global production.
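Those two reported figures imply a rough estimate of total global DRAM wafer output. A back-of-the-envelope sketch (the 900,000 wafers per month and the roughly 40 percent share come from the report above; everything else is just arithmetic):

```python
# If ~900,000 wafers/month is ~40% of global DRAM production,
# the implied global total is 900,000 / 0.40.
stargate_wafers_per_month = 900_000
share_of_global = 0.40

implied_global_output = stargate_wafers_per_month / share_of_global
print(f"{implied_global_output:,.0f} wafers/month")  # 2,250,000 wafers/month
```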

The shortage has already forced companies to adapt. As Ars’ Andrew Cunningham reported, laptop maker Framework stopped selling standalone RAM kits in late November to prevent scalping and said it will likely be forced to raise prices soon.

For Micron, the calculus is clear: Enterprise customers pay more and buy in bulk. But for the DIY PC community, the decision will leave PC builders with one fewer option when reaching for the RAM sticks. In his statement, Sadana reflected on the brand’s 29-year run.

“Thanks to a passionate community of consumers, the Crucial brand has become synonymous with technical leadership, quality and reliability of leading-edge memory and storage products,” Sadana said. “We would like to thank our millions of customers, hundreds of partners and all of the Micron team members who have supported the Crucial journey for the last 29 years.”



Testing shows why the Steam Machine’s 8GB of graphics RAM could be a problem

By Valve’s admission, its upcoming Steam Machine desktop isn’t swinging for the fences with its graphical performance. The specs promise decent 1080p-to-1440p performance in most games, with 4K occasionally reachable with assistance from FSR upscaling—about what you’d expect from a box with a modern midrange graphics card in it.

But there’s one spec that has caused some concern among Ars staffers and others with their eyes on the Steam Machine: The GPU comes with just 8GB of dedicated graphics RAM, an amount that is steadily becoming more of a bottleneck for midrange GPUs like AMD’s Radeon RX 7600 and 9060, or Nvidia’s GeForce RTX 4060 and 5060.

In our reviews of these GPUs, we’ve already run into some games where the RAM ceiling limits performance in Windows, especially at 1440p. But we’ve been doing more extensive testing of various GPUs with SteamOS, and we can confirm that in current betas, 8GB GPUs struggle even more on SteamOS than they do running the same games at the same settings in Windows 11.

The good news is that Valve is working on solutions, and having a stable platform like the Steam Machine to aim for should help improve things for other hardware with similar configurations. The bad news is there’s plenty of work left to do.

The numbers

We’ve tested an array of dedicated and integrated Radeon GPUs under SteamOS and Windows, and we’ll share more extensive results in another article soon (along with broader SteamOS-vs-Windows observations). But for our purposes here, the two GPUs that highlight the issues most effectively are the 8GB Radeon RX 7600 and the 16GB Radeon RX 7600 XT.

These dedicated GPUs have the benefit of being nearly identical to what Valve plans to ship in the Steam Machine—32 compute units (CUs) instead of Valve’s 28, but the same RDNA3 architecture. They’re also, most importantly for our purposes, pretty similar to each other—the same physical GPU die, just with slightly higher clock speeds and more RAM for the 7600 XT than for the regular 7600.
