NVIDIA

Handful of users claim new Nvidia GPUs are melting power cables again

The 12VHPWR and 12V-2×6 connectors are both designed to solve a real problem: delivering hundreds of watts of power to high-end GPUs over a single cable rather than trying to fit multiple 8-pin power connectors onto these GPUs. In theory, swapping two to four 8-pin connectors for a single 12V-2×6 or 12VHPWR connector cuts down on the amount of board space OEMs must reserve for these connectors in their designs and the number of cables that users have to snake through the inside of their gaming PCs.

But while Nvidia, Intel, AMD, Qualcomm, Arm, and other companies are all PCI-SIG members and all had a hand in the design of the new standards, Nvidia is the only GPU company to use the 12VHPWR and 12V-2×6 connectors in most of its GPUs. AMD and Intel have continued to use the 8-pin power connector, and even some of Nvidia’s partners have stuck with 8-pin connectors for lower-end, lower-power cards like the RTX 4060 and 4070 series.

Both of the reported 5090 incidents involved third-party cables, one from custom PC part manufacturer MODDIY and one included with an FSP power supply, rather than the first-party 8-pin adapter that Nvidia supplies with GeForce GPUs. It’s much too early to say whether these cables (or Nvidia, or the design of the connector, or the affected users) caused the problem or whether this was just a coincidence.

We’ve contacted Nvidia to see whether it’s aware of and investigating the reports and will update this piece if we receive a response.

AMD promises “mainstream” 4K gaming with next-gen GPUs as current-gen GPU sales tank

AMD announced its fourth-quarter earnings yesterday, and the numbers were mostly rosy: $7.7 billion in revenue and a 51 percent profit margin, compared to $6.2 billion and 47 percent a year ago. The biggest winner was the data center division, which made $3.9 billion thanks to Epyc server processors and Instinct AI accelerators, and Ryzen CPUs are also selling well, helping the company’s client segment earn $2.3 billion.

But if you were looking for a dark spot, you’d find it in the company’s gaming division, which earned a relatively small $563 million, down 59 percent from a year ago. AMD CEO Lisa Su blamed the decline on lower sales of both dedicated graphics cards and the company’s “semi-custom” chips (that is, the ones created specifically for game consoles like the Xbox and PlayStation).

Other data sources suggest that the response from GPU buyers to AMD’s Radeon RX 7000 series, launched between late 2022 and early 2024, has been lackluster. The Steam Hardware Survey, a noisy but broadly useful barometer for GPU market share, shows no RX 7000-series models in the top 50; only two of the GPUs (the 7900 XTX and 7700 XT) are used in enough gaming PCs to be mentioned on the list at all, with the others all getting lumped into the “other” category. Jon Peddie Research recently estimated that AMD was selling roughly one dedicated GPU for every seven or eight sold by Nvidia.

But hope springs eternal. Su confirmed on AMD’s earnings call that the new Radeon RX 9000-series cards, announced at CES last month, would be launching in early March. The Radeon RX 9070 and 9070 XT are both aimed toward the middle of the graphics card market, and Su said that both would bring “high-quality gaming to mainstream players.”

An opportunity, maybe

“Mainstream” could mean a lot of things. AMD’s CES slide deck positioned the 9070 series alongside Nvidia’s RTX 4070 Ti ($799) and 4070 Super ($599) and its own RX 7900 XT, RX 7900 GRE, and RX 7800 XT (between $500 and $730 as of this writing), a pretty wide price spread that is still more expensive than an entire high-end console. The GPUs could also still rely heavily on upscaling algorithms like AMD’s FidelityFX Super Resolution (FSR) to hit playable frame rates at 4K, rather than targeting native 4K rendering.

I agree with OpenAI: You shouldn’t use other people’s work without permission

ChatGPT developer OpenAI and other players in the generative AI business were caught unawares this week by a Chinese company named DeepSeek, whose open source R1 simulated reasoning model provides results similar to OpenAI’s best paid models (with some notable exceptions) despite being created using just a fraction of the computing power.

Since ChatGPT, Stable Diffusion, and other generative AI models first became publicly available in late 2022 and 2023, the US AI industry has been undergirded by the assumption that companies would need ever-greater amounts of training data and computing power to continue improving their models and get—eventually, maybe—to a functioning version of artificial general intelligence, or AGI.

Those assumptions were reflected in everything from Nvidia’s stock price to energy investments and data center plans. Whether DeepSeek fundamentally upends those plans remains to be seen. But at a bare minimum, it has shaken investors who have poured money into OpenAI, a company that reportedly believes it won’t turn a profit until the end of the decade.

OpenAI CEO Sam Altman concedes that the DeepSeek R1 model is “impressive,” but the company is taking steps to protect its models (both language and business); OpenAI told the Financial Times and other outlets that it believed DeepSeek had used output from OpenAI’s models to train the R1 model, a method known as “distillation.” Using OpenAI’s models to train a model that will compete with OpenAI’s models is a violation of the company’s terms of service.

“We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the US government to protect the most capable models being built here,” an OpenAI spokesperson told Ars.

So taking data without permission is bad, now?

I’m not here to say whether the R1 model is the product of distillation. What I can say is that it’s a little rich for OpenAI to suddenly be so very publicly concerned about the sanctity of proprietary data.

Nvidia starts to wind down support for old GPUs, including the long-lived GTX 1060

Nvidia is launching the first volley of RTX 50-series GPUs based on its new Blackwell architecture, starting with the RTX 5090 and working downward from there. The company also appears to be winding down support for a few of its older GPU architectures, according to these CUDA release notes spotted by Tom’s Hardware.

The release notes say that CUDA support for the Maxwell, Pascal, and Volta GPU architectures “is considered feature-complete and will be frozen in an upcoming release.” While all of these architectures—which collectively cover GeForce GPUs from the old GTX 700 series all the way up through 2016’s GTX 10-series, plus a couple of Quadro and Titan workstation cards—are still supported by Nvidia’s December Game Ready driver package, the end of new CUDA feature support suggests that these GPUs will be dropped from those driver packages before long.

It’s common for Nvidia and AMD to drop support for another batch of architectures all at once every few years; Nvidia last dropped support for older cards in 2021, and AMD dropped support for several prominent GPUs in 2023. Both companies maintain a separate driver branch for some of their older cards, but releases usually happen only every few months, and they focus on security updates, not on providing new features or performance optimizations for new games.

Nvidia GeForce RTX 5090 costs as much as a whole gaming PC—but it sure is fast


Even setting aside Frame Generation, this is a fast, power-hungry $2,000 GPU.

Credit: Andrew Cunningham

Nvidia’s GeForce RTX 5090 starts at $1,999 before you factor in upsells from the company’s partners or price increases driven by scalpers and/or genuine demand. It costs more than my entire gaming PC.

The new GPU is so expensive that you could build an entire well-specced gaming PC with Nvidia’s next-fastest GPU in it—the $999 RTX 5080, which we don’t have in hand yet—for the same money, or maybe even a little less with judicious component selection. It’s not the most expensive GPU that Nvidia has ever launched—2018’s $2,499 Titan RTX has it beat, and 2022’s RTX 3090 Ti also cost $2,000—but it’s safe to say it’s not really a GPU intended for the masses.

At least as far as gaming is concerned, the 5090 is the very definition of a halo product; it’s for people who demand the best and newest thing regardless of what it costs (the calculus is probably different for deep-pocketed people and companies who want to use them as some kind of generative AI accelerator). And on this front, at least, the 5090 is successful. It’s the newest and fastest GPU you can buy, and the competition is not particularly close. It’s also a showcase for DLSS Multi-Frame Generation, a new feature unique to the 50-series cards that Nvidia is leaning on heavily to make its new GPUs look better than they already are.

Founders Edition cards: Design and cooling

                    RTX 5090      RTX 4090      RTX 5080      RTX 4080 Super
CUDA cores          21,760        16,384        10,752        10,240
Boost clock         2,410 MHz     2,520 MHz     2,617 MHz     2,550 MHz
Memory bus width    512-bit       384-bit       256-bit       256-bit
Memory bandwidth    1,792 GB/s    1,008 GB/s    960 GB/s      736 GB/s
Memory size         32GB GDDR7    24GB GDDR6X   16GB GDDR7    16GB GDDR6X
TGP                 575 W         450 W         360 W         320 W

We won’t spend too long talking about the specific designs of Nvidia’s Founders Edition cards since many buyers will experience the Blackwell GPUs with cards from Nvidia’s partners instead (the cards we’ve seen so far mostly look like the expected fare: gargantuan triple-slot triple-fan coolers, with varying degrees of RGB). But it’s worth noting that Nvidia has addressed a couple of my functional gripes with the 4090/4080-series design.

The first was the sheer dimensions of each card—not an issue unique to Nvidia, but one that frequently caused problems for me as someone who tends toward ITX-based PCs and smaller builds. The 5090 and 5080 FE designs are the same length and height as the 4090 and 4080 FE designs, but they only take up two slots instead of three, which will make them an easier fit for many cases.

Nvidia has also tweaked the cards’ 12VHPWR connector, recessing it into the card and mounting it at a slight angle instead of having it sticking straight out of the top edge. The height of the 4090/4080 FE design made some cases hard to close up once you factored in the additional height of a 12VHPWR cable or Nvidia’s many-tentacled 8-pin-to-12VHPWR adapter. The angled connector still extends a bit beyond the top of the card, but it’s easier to tuck the cable away so you can put the side back on your case.

Finally, Nvidia has changed its cooler—whereas most OEM GPUs mount all their fans on the top of the GPU, Nvidia has historically placed one fan on each side of the card. In a standard ATX case with the GPU mounted parallel to the bottom of the case, this wasn’t a huge deal—there’s plenty of room for that air to circulate inside the case and to be expelled by whatever case fans you have installed.

But in “sandwich-style” ITX cases, where a riser cable wraps around so the GPU can be mounted parallel to the motherboard, the fan on the bottom side of the GPU was poorly placed. In many sandwich-style cases, the GPU fan will dump heat against the back of the motherboard, making it harder to keep the GPU cool and creating heat problems elsewhere besides. The new GPUs mount both fans on the top of the cards.

Nvidia’s Founders Edition cards have had heat issues in the past—most notably the 30-series GPUs—and that was my first question going in. A smaller cooler plus a dramatically higher peak power draw seems like a recipe for overheating.

Temperatures for the various cards we re-tested for this review. The 5090 FE is the toastiest of all of them, but it still has a safe operating temperature.

At least for the 5090, the smaller cooler does mean higher temperatures—around 10 to 12 degrees Celsius higher when running the same benchmarks as the RTX 4090 Founders Edition. And while temperatures of around 77 degrees aren’t hugely concerning, this is sort of a best-case scenario, with an adequately cooled testbed case with the side panel totally removed and ambient temperatures at around 21° or 22° Celsius. You’ll just want to make sure you have a good amount of airflow in your case if you buy one of these.

Testbed notes

A new high-end Nvidia GPU is a good reason to tweak our test bed and suite of games, and we’ve done both here. Mainly, we added a 1050 W Thermaltake Toughpower GF A3 power supply—Nvidia recommends at least 1000 W for the 5090, and this one has a native 12VHPWR connector for convenience. We’ve also swapped the Ryzen 7 7800X3D for a slightly faster Ryzen 7 9800X3D to reduce the odds that the CPU will bottleneck performance as we try to hit high frame rates.

As for the suite of games, we’ve removed a couple of older titles and added some with built-in benchmarks that will tax these GPUs a bit more, especially at 4K with all the settings turned up. Those games include the RT Overdrive preset in the perennially punishing Cyberpunk 2077 and Black Myth: Wukong in Cinematic mode, both games where even the RTX 4090 struggles to hit 60 fps without an assist from DLSS. We’ve also added Horizon Zero Dawn Remastered, a recent release that doesn’t include ray-tracing effects but does support most DLSS 3 and FSR 3 features (including FSR Frame Generation).

We’ve tried to strike a balance between games with ray-tracing effects and games without them, though most AAA games these days include ray tracing, and modern GPUs should be able to handle it well (best of luck to AMD with its upcoming RDNA 4 cards).

For the 5090, we’ve run all tests in 4K—if you don’t care about running games in 4K, even if you want super-high frame rates at 1440p or on some kind of ultrawide monitor, the 5090 is probably overkill. When we run upscaling tests, we use the newest DLSS version available for Nvidia cards, the newest FSR version available for AMD cards, and the newest XeSS version available for Intel cards (not relevant here, just stating for the record), and we use the “Quality” setting (at 4K, that equates to an actual rendering resolution of 1440p).
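For reference, upscaler quality presets are defined by a per-axis render scale; “Quality” mode conventionally uses a factor of about 0.667, which is how 4K output works out to a 1440p internal render resolution. A quick sketch (the 0.667 factor is the commonly documented value, an assumption on our part rather than something stated in this review):

```python
def render_resolution(out_w: int, out_h: int, scale: float = 2 / 3) -> tuple[int, int]:
    """Internal render resolution for an upscaler, given output size and per-axis scale.

    "Quality" presets conventionally use a ~0.667 per-axis scale factor
    (an assumption from common upscaler documentation, not this review).
    """
    return round(out_w * scale), round(out_h * scale)

print(render_resolution(3840, 2160))  # (2560, 1440): 4K "Quality" renders at 1440p
```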

Rendering performance: A lot faster, a lot more power-hungry

Before we talk about Frame Generation or “fake frames,” let’s compare apples to apples and just examine the 5090’s rendering performance.

The card mainly benefits from four things compared to the 4090: the updated Blackwell GPU architecture, a nearly 33 percent increase in the number of CUDA cores, an upgrade from GDDR6X to GDDR7, and a move from a 384-bit memory bus to a 512-bit bus. It also jumps from 24GB of RAM to 32GB, but games generally aren’t butting up against a 24GB limit yet, so the capacity increase by itself shouldn’t really change performance if all you’re focused on is gaming.

And for people who prioritize performance over all else, the 5090 is a big deal—it’s the first consumer graphics card from any company that is faster than a 4090, as Nvidia never spruced up the 4090 last year when it did its mid-generation Super refreshes of the 4080, 4070 Ti, and 4070.

Comparing natively rendered games at 4K, the 5090 is between 17 percent and 40 percent faster than the 4090, with most of the games we tested landing somewhere in the low to high 30 percent range. That’s an undeniably big bump, one that’s roughly commensurate with the increase in the number of CUDA cores. Tests run with DLSS enabled (both upscaling-only and with Frame Generation running in 2x mode) improve by roughly the same amount.

You could find things to be disappointed about if you went looking for them. That 30-something-percent performance increase comes with a 35 percent increase in power use in our testing under load with punishing 4K games—the 4090 tops out around 420 W, whereas the 5090 went all the way up to 573 W, with the 5090 coming closer to its 575 W TDP than the 4090 does to its theoretical 450 W maximum. The 50-series cards use the same TSMC 4N manufacturing process as the 40-series cards, and increasing the number of transistors without changing the process results in a chip that uses more power (though it should be said that capping frame rates, running at lower resolutions, or running less-demanding games can rein in that power use a bit).
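To put the efficiency point in numbers, here is a rough sketch using the measured figures above; the performance multiplier is the approximate mid-30s-percent average from our tests, not a single benchmark result:

```python
# Rough perf-per-watt comparison, using this review's measured numbers.
perf_gain = 1.33                        # ~30-something percent faster than a 4090
power_4090_w, power_5090_w = 420, 573   # measured draw under load, watts

power_gain = power_5090_w / power_4090_w      # ~1.36x
efficiency_vs_4090 = perf_gain / power_gain   # ~0.98x

print(f"power: {power_gain:.2f}x, perf/W: {efficiency_vs_4090:.2f}x")
# Performance per watt is essentially flat generation-over-generation.
```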

Power draw under load goes up by an amount roughly commensurate with performance. The 4090 was already power-hungry; the 5090 is dramatically more so. Credit: Andrew Cunningham

The 5090’s 30-something percent increase over the 4090 might also seem underwhelming if you recall that the 4090 was around 55 percent faster than the previous-generation 3090 Ti while consuming about the same amount of power. To be even faster than a 4090 is no small feat—AMD’s fastest GPU is more in line with Nvidia’s 4080 Super—but if you’re comparing the two cards using the exact same tests, the relative leap is less seismic.

That brings us to Nvidia’s answer for that problem: DLSS 4 and its Multi-Frame Generation feature.

DLSS 4 and Multi-Frame Generation

As a refresher, Nvidia’s DLSS Frame Generation feature, as introduced in the GeForce 40-series, takes DLSS upscaling one step further. The upscaling feature inserted interpolated pixels into a rendered image to make it look like a sharper, higher-resolution image without having to do all the work of rendering all those pixels. DLSS FG would interpolate an entire frame between rendered frames, boosting your FPS without dramatically boosting the amount of work your GPU was doing. If you used DLSS upscaling and FG at the same time, Nvidia could claim that seven out of eight pixels on your screen were generated by AI.

DLSS Multi-Frame Generation (hereafter MFG, for simplicity’s sake) does the same thing, but it can generate one to three interpolated frames for every rendered frame. The marketing numbers have gone up, too; now, 15 out of every 16 pixels on your screen can be generated by AI.
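The arithmetic behind those pixel-count claims is simple. Here is a minimal sketch, assuming DLSS Performance-mode upscaling (which renders one in four output pixels, the configuration Nvidia’s marketing math implies):

```python
def ai_pixel_fraction(rendered_pixel_share: float, generated_per_rendered: int) -> float:
    """Fraction of displayed pixels that are AI-generated.

    rendered_pixel_share: share of each output frame's pixels actually rendered
        (DLSS Performance-mode upscaling renders 1 in 4).
    generated_per_rendered: interpolated frames per rendered frame
        (1 for classic FG "2x", 3 for MFG "4x").
    """
    frames_total = 1 + generated_per_rendered
    rendered_fraction = rendered_pixel_share / frames_total
    return 1 - rendered_fraction

print(ai_pixel_fraction(1 / 4, 1))  # 0.875  -> 7 of every 8 pixels (DLSS FG)
print(ai_pixel_fraction(1 / 4, 3))  # 0.9375 -> 15 of every 16 pixels (MFG 4x)
```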

Nvidia might point to this and say that the 5090 is over twice as fast as the 4090, but that’s not really comparing apples to apples. Expect this issue to persist over the lifetime of the 50-series. Credit: Andrew Cunningham

Nvidia provided reviewers with a preview build of Cyberpunk 2077 with DLSS MFG enabled, which gives us an example of how those settings will be exposed to users. For 40-series cards that only support the regular DLSS FG, you won’t notice a difference in games that support MFG—Frame Generation is still just one toggle you can turn on or off. For 50-series cards that support MFG, you’ll be able to choose from among a few options, just as you currently can with other DLSS quality settings.

The “2x” mode is the old version of DLSS FG and is supported by both the 50-series cards and 40-series GPUs; it promises one generated frame for every rendered frame (two frames total, hence “2x”). The “3x” and “4x” modes are new to the 50-series and promise two and three generated frames (respectively) for every rendered frame. Like the original DLSS FG, MFG can be used in concert with normal DLSS upscaling, or it can be used independently.

One problem with the original DLSS FG was latency—user input was only being sampled at the natively rendered frame rate, meaning you could be looking at 60 frames per second on your display but only having your input polled 30 times per second. Another is image quality; as good as the DLSS algorithms can be at guessing and recreating what a natively rendered pixel would look like, you’ll inevitably see errors, particularly in fine details.

Both these problems contribute to the third problem with DLSS FG: Without a decent underlying frame rate, the lag you feel and the weird visual artifacts you notice will both be more pronounced. So DLSS FG can be useful for turning 120 fps into 240 fps, or even 60 fps into 120 fps. But it’s not as helpful if you’re trying to get from 20 or 30 fps up to a smooth 60 fps.
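A minimal sketch of why the base frame rate matters (illustrative numbers, not measurements): frame generation multiplies what you see, but input sampling, and thus how the game feels, stays tied to the rendered rate.

```python
def frame_generation_output(base_fps: float, mode: int) -> dict:
    """Displayed frame rate vs. input-sampling rate under frame generation.

    mode: 2 for classic FG ("2x"), 3 or 4 for Multi-Frame Generation.
    Input is only sampled on natively rendered frames, so feel tracks base_fps.
    """
    return {"displayed_fps": base_fps * mode, "input_sample_hz": base_fps}

print(frame_generation_output(120, 2))  # 240 fps shown; input still polled at 120 Hz
print(frame_generation_output(30, 4))   # 120 fps shown; still feels like a 30 fps game
```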

We’ll be taking a closer look at the DLSS upgrades in the next couple of weeks (including MFG and the new transformer model, which will supposedly increase upscaling quality and supports all RTX GPUs). But in our limited testing so far, the issues with DLSS MFG are basically the same as with the first version of Frame Generation, just slightly more pronounced. In the built-in Cyberpunk 2077 benchmark, the most visible issues are with some bits of barbed-wire fencing, which get smoother-looking and less detailed as you crank up the number of AI-generated frames. But the motion does look fluid and smooth, and the frame rate counts are admittedly impressive.

But as we noted in our 4090 review, the xx90 cards portray FG and MFG in the best light possible since the card is already capable of natively rendering such high frame rates. It’s on lower-end cards where the shortcomings of the technology become more pronounced. Nvidia might say that the upcoming RTX 5070 is “as fast as a 4090 for $549,” and it might be right in terms of the number of frames the card can put up on your screen every second. But responsiveness and visual fidelity on the 4090 will be better every time—AI is a good augmentation for rendered frames, but it’s iffy as a replacement for rendered frames.

A 4090, amped way up

Nvidia’s GeForce RTX 5090. Credit: Andrew Cunningham

The GeForce RTX 5090 is an impressive card—it’s the only consumer graphics card to be released in over two years that can outperform the RTX 4090. The main caveats are its sky-high power consumption and sky-high price; by itself, it costs as much as (and consumes as much power as) an entire mainstream gaming PC. The card is aimed at people who care about speed way more than they care about price, but it’s still worth putting it into context.

The main controversy, as with the 40-series, is how Nvidia talks about its Frame Generation-inflated performance numbers. Frame Generation and Multi-Frame Generation are tools in a toolbox—there will be games where they make things look great and run fast with minimal noticeable impact on visual quality or responsiveness, games where those impacts are more noticeable, and games that never add support for the features at all. (As well-supported as DLSS generally is in new releases, it is incumbent upon game developers to add it—and update it when Nvidia puts out a new version.)

But using those Multi-Frame Generation-inflated FPS numbers to make topline comparisons to last-generation graphics cards just feels disingenuous. No, an RTX 5070 will not be as fast as an RTX 4090 for just $549, because not all games support DLSS MFG, and not all games that do support it will run it well. Frame Generation still needs a good base frame rate to start with, and the slower your card is, the more issues you might notice.

Fuzzy marketing aside, Nvidia is still the undisputed leader in the GPU market, and the RTX 5090 extends that leadership for what will likely be another entire GPU generation, since both AMD and Intel are focusing their efforts on higher-volume, lower-cost cards right now. DLSS is still generally better than AMD’s FSR, and Nvidia does a good job of getting developers of new AAA game releases to support it. And if you’re buying this GPU to do some kind of rendering work or generative AI acceleration, Nvidia’s performance and software tools are still superior. The misleading performance claims are frustrating, but Nvidia still gains a lot of real advantages from being as dominant and entrenched as it is.

The good

  • Usually 30-something percent faster than an RTX 4090
  • Redesigned Founders Edition card is less unwieldy than the bricks that were the 4090/4080 design
  • Adequate cooling, despite the smaller card and higher power use
  • DLSS Multi-Frame Generation is an intriguing option if you’re trying to hit 240 or 360 fps on your high-refresh-rate gaming monitor

The bad

  • Much higher power consumption than the 4090, which already consumed more power than any other GPU on the market
  • Frame Generation is good at making a game that’s already running fast run faster; it’s not as good at bringing a slow game up to 60 Hz
  • Nvidia’s misleading marketing around Multi-Frame Generation is frustrating—and will likely be more frustrating for lower-end cards since they aren’t getting the same bumps to core count and memory interface that the 5090 gets

The ugly

  • You can buy a whole lot of PC for $2,000, and we wouldn’t bet on this GPU being easy to find at MSRP

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Trump announces $500B “Stargate” AI infrastructure project with AGI aims

Video of the Stargate announcement conference at the White House.

Despite optimism from the companies involved, as CNN reports, past presidential investment announcements have yielded mixed results. In 2017, Trump and Foxconn unveiled plans for a $10 billion Wisconsin electronics factory promising 13,000 jobs. The project was later scaled back to a $672 million investment with fewer than 1,500 positions. The facility now operates as a Microsoft AI data center.

The Stargate announcement wasn’t Trump’s only major AI move announced this week. It follows the newly inaugurated US president’s reversal of a 2023 Biden executive order on AI risk monitoring and regulation.

Altman speaks, Musk responds

On Tuesday, OpenAI CEO Sam Altman appeared at a White House press conference alongside President Trump, Oracle CEO Larry Ellison, and SoftBank CEO Masayoshi Son to announce Stargate.

Altman said he thinks Stargate represents “the most important project of this era,” allowing AGI to emerge in the United States. He believes that future AI technology could create hundreds of thousands of jobs. “We wouldn’t be able to do this without you, Mr. President,” Altman added.

Responding to off-camera questions from Trump about AI’s potential to spur scientific development, Altman said he believes AI will accelerate the discovery of cures for diseases like cancer and heart disease.

Screenshots of Elon Musk challenging the Stargate announcement on X.

Meanwhile on X, Trump ally and frequent Altman foe Elon Musk immediately attacked the Stargate plan, writing, “They don’t actually have the money,” and following up with a claim that we cannot yet substantiate, saying, “SoftBank has well under $10B secured. I have that on good authority.”

Musk’s criticism has complex implications given his very close ties to Trump, his history of litigating against OpenAI (which he co-founded and later left), and his own goals with his xAI company.

New GeForce 50-series GPUs: There’s the $1,999 5090, and there’s everything else


Nvidia leans heavily on DLSS 4 and AI-generated frames for speed comparisons.

Nvidia’s RTX 5070, one of four new desktop GPUs announced this week. Credit: Nvidia

Nvidia has good news and bad news for people building or buying gaming PCs.

The good news is that three of its four new RTX 50-series GPUs are the same price or slightly cheaper than the RTX 40-series GPUs they’re replacing. The RTX 5080 is $999, the same price as the RTX 4080 Super; the 5070 Ti and 5070 are launching for $749 and $549, each $50 less than the 4070 Ti Super and 4070 Super.

The bad news for people looking for the absolute fastest card they can get is that the company is charging $1,999 for its flagship RTX 5090 GPU, significantly more than the $1,599 MSRP of the RTX 4090. If you want Nvidia’s biggest and best, it will cost at least as much as four high-end game consoles or a pair of decently specced midrange gaming PCs.

Pricing for the first batch of Blackwell-based RTX 50-series GPUs. Credit: Nvidia

Nvidia also announced a new version of its upscaling algorithm, DLSS 4. As with DLSS 3 and the RTX 40-series, DLSS 4’s flagship feature will be exclusive to the 50-series. It’s called DLSS Multi Frame Generation, and as the name implies, it takes the Frame Generation feature from DLSS 3 and allows it to generate even more frames. It’s why Nvidia CEO Jensen Huang claimed that the $549 RTX 5070 performed like the $1,599 RTX 4090; it’s also why those claims are a bit misleading.

The rollout will begin with the RTX 5090 and 5080 on January 30. The 5070 Ti and 5070 will follow at some point in February. All cards except the 5070 Ti will come in Nvidia-designed Founders Editions as well as designs made by Nvidia’s partners; the 5070 Ti isn’t getting a Founders Edition.

The RTX 5090 and 5080

                    RTX 5090      RTX 4090      RTX 5080      RTX 4080 Super
CUDA Cores          21,760        16,384        10,752        10,240
Boost Clock         2,410 MHz     2,520 MHz     2,617 MHz     2,550 MHz
Memory Bus Width    512-bit       384-bit       256-bit       256-bit
Memory Bandwidth    1,792 GB/s    1,008 GB/s    960 GB/s      736 GB/s
Memory size         32GB GDDR7    24GB GDDR6X   16GB GDDR7    16GB GDDR6X
TGP                 575 W         450 W         360 W         320 W

The RTX 5090, based on Nvidia’s new Blackwell architecture, is a gigantic chip with 92 billion transistors in it. And while it is double the price of an RTX 5080, you also get double the GPU cores, double the RAM, and nearly double the memory bandwidth. Even more than the 4090, it’s being positioned head and shoulders above the rest of the GPUs in the family, and the 5080’s performance won’t come remotely close to it.

Although $1,999 is a lot to ask for a graphics card, if Nvidia can consistently make the RTX 5090 available at $2,000, it could still be an improvement over the pricing of the 4090, which regularly sold for well over $1,599 over the course of its lifetime, due in part to pandemic-fueled GPU shortages, cryptocurrency mining, and the generative AI boom. Companies and other entities buying them as AI accelerators may restrict the availability of the 5090, too, but Nvidia’s highest GPU tier has been well out of the price range of most consumers for a while now.

Despite the higher power budget—as predicted, it’s 125 W higher than the 4090’s 450 W, and Nvidia recommends a 1,000 W power supply or better—the physical size of the 5090 Founders Edition is considerably smaller than the 4090, which was large enough that it had trouble fitting into some computer cases. Thanks to a “high-density PCB” and redesigned cooling system, the 5090 Founders Edition is a dual-slot card that ought to fit into small-form-factor systems much more easily than the 4090. Of course, this won’t stop most third-party 5090 GPUs from being gigantic triple-fan monstrosities, but it is apparently possible to make a reasonably sized version of the card.

Moving on to the 5080, it looks like more of a mild update from last year’s RTX 4080 Super, with a few hundred more CUDA cores, more memory bandwidth (thanks to the use of GDDR7, since the two GPUs share the same 256-bit interface), and a slightly higher power budget of 360 W (compared to 320 W for the 4080 Super).

Having more cores and faster memory, in addition to whatever improvements and optimizations come with the Blackwell architecture, should help the 5080 easily beat the 4080 Super. But it’s an open question as to whether it will be able to beat the 4090, at least before you consider any DLSS-related frame rate increases. The 4090 has 52 percent more GPU cores, a wider memory bus, and 8GB more memory.
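Those gaps are easy to quantify from the spec table above; a quick sketch:

```python
# Spec deltas between the RTX 4090 and RTX 5080, taken from the table above.
cores_4090, cores_5080 = 16_384, 10_752
mem_4090_gb, mem_5080_gb = 24, 16

print(f"4090 core advantage: {cores_4090 / cores_5080 - 1:.0%}")  # ~52%
print(f"4090 memory advantage: {mem_4090_gb - mem_5080_gb}GB more, on a 384-bit vs 256-bit bus")
```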

5070 Ti and 5070

                    RTX 5070 Ti   RTX 4070 Ti Super   RTX 5070      RTX 4070 Super
CUDA Cores          8,960         8,448               6,144         7,168
Boost Clock         2,452 MHz     2,610 MHz           2,512 MHz     2,475 MHz
Memory Bus Width    256-bit       256-bit             192-bit       192-bit
Memory Bandwidth    896 GB/s      672 GB/s            672 GB/s      504 GB/s
Memory size         16GB GDDR7    16GB GDDR6X         12GB GDDR7    12GB GDDR6X
TGP                 300 W         285 W               250 W         220 W

At $749 and $549, the 5070 Ti and 5070 are slightly more within reach for someone who’s trying to spend less than $2,000 on a new gaming PC. Both cards hew relatively closely to the specs of the 4070 Ti Super and 4070 Super, both of which are already solid 1440p and 4K graphics cards for many titles.

Like the 5080, the 5070 Ti includes a few hundred more CUDA cores, more memory bandwidth, and slightly higher power requirements compared to the 4070 Ti Super. That the card is $50 less than the 4070 Ti Super was at launch is a nice bonus—if it can come close to or beat the RTX 4080 for $250 less, it could be an appealing high-end option.

The RTX 5070 is alone in having fewer CUDA cores than its immediate predecessor—6,144, down from 7,168. It is an upgrade from the original 4070, which had 5,888 CUDA cores, and GDDR7 and slightly faster clock speeds may still help it outrun the 4070 Super; like the other 50-series cards, it also comes with a higher power budget. But right now this card is looking like the closest thing to a lateral move in the lineup, at least before you consider the additional frame-generation capabilities of DLSS 4.

DLSS 4 and fudging the numbers

Many of Nvidia’s most ostentatious performance claims—including the one that the RTX 5070 is as fast as a 4090—factor in DLSS 4’s additional AI-generated frames. Credit: Nvidia

When launching new 40-series cards over the last two years, it was common for Nvidia to publish a couple of different performance comparisons to last-gen cards: one with DLSS turned off and one with DLSS and the 40-series-exclusive Frame Generation feature turned on. Nvidia would then lean on the DLSS-enabled numbers when making broad proclamations about a GPU’s performance, as it does in its official press release when it says the 5090 is twice as fast as the 4090, or as Huang did during his CES keynote when he claimed that an RTX 5070 offered RTX 4090 performance for $549.

DLSS Frame Generation is an AI feature that builds on what DLSS is already doing. Where DLSS uses AI to fill in gaps and make a lower-resolution image look like a higher-resolution image, DLSS Frame Generation creates entirely new frames and inserts them in between the frames that your GPU is actually rendering.

DLSS 4 now generates up to three frames for every frame the GPU is actually rendering. Used in concert with DLSS image upscaling, Nvidia says that “15 out of every 16 pixels” you see on your screen are being generated by its AI models. Credit: Nvidia

The RTX 50-series one-ups the 40-series with DLSS 4, another new revision that’s exclusive to its just-launched GPUs: DLSS Multi Frame Generation. Instead of generating one extra frame for every traditionally rendered frame, DLSS 4 generates “up to three additional frames” to slide in between the ones your graphics card is actually rendering—based on Nvidia’s slides, it looks like users ought to be able to control how many extra frames are being generated, just as they can control the quality settings for DLSS upscaling. Nvidia is leaning on the Blackwell architecture’s faster Tensor Cores, which it says are up to 2.5 times faster than the Tensor Cores in the RTX 40-series, to do the AI processing necessary to upscale rendered frames and to generate new ones.

Nvidia’s performance comparisons aren’t indefensible; with DLSS FG enabled, the cards can put out a lot of frames per second. It’s just dependent on game support (Nvidia says that 75 titles will support it at launch), and going off of our experience with the original iteration of Frame Generation, there will likely be scenarios where image quality is noticeably worse or just “off-looking” compared to actual rendered frames. DLSS FG also needed a solid base frame rate to get the best results, and it remains to be seen whether the same holds for Multi-FG.

Enhanced versions of older DLSS features can benefit all RTX cards, including the 20-, 30-, and 40-series. Multi-Frame Generation is restricted to the 50-series, though. Credit: Nvidia

Though the practice of restricting the biggest DLSS upgrades to all-new hardware is a bit frustrating, Nvidia did announce that it’s releasing a new transformer model for the DLSS Ray Reconstruction, Super Resolution, and Anti-Aliasing features. These are DLSS features that are available on all RTX GPUs going all the way back to the RTX 20-series, and games that are upgraded to use the newer models should benefit from improved upscaling quality even on older GPUs.

GeForce 50-series: Also for laptops!

Nvidia’s projected pricing for laptops with each of its new mobile GPUs. Credit: Nvidia

Nvidia’s laptop GPU announcements sometimes trail the desktop announcements by a few weeks or months. But the company has already announced mobile versions of the 5090, 5080, 5070 Ti, and 5070 that Nvidia says will begin shipping in laptops priced between $1,299 and $2,899 when they launch in March.

All of these GPUs share names, the Blackwell architecture, and DLSS 4 support with their desktop counterparts, but per usual, they’re significantly cut down to fit on a laptop motherboard and within a laptop’s cooling capacity. The mobile version of the 5090 includes 10,496 GPU cores, less than half the number of the desktop version, and just 24GB of GDDR7 memory on a 256-bit interface instead of 32GB on a 512-bit interface. But it can also operate with a power budget between 95 and 150 W, a fraction of what the desktop 5090 needs.

                    RTX 5090 (mobile)   RTX 5080 (mobile)   RTX 5070 Ti (mobile)   RTX 5070 (mobile)
CUDA Cores          10,496              7,680               5,888                  4,608
Memory Bus Width    256-bit             256-bit             192-bit                128-bit
Memory size         24GB GDDR7          16GB GDDR7          12GB GDDR7             8GB GDDR7
TGP                 95-150 W            80-150 W            60-115 W               50-100 W

The other three GPUs are mostly cut down in similar ways, and all of them have fewer GPU cores and lower power requirements than their desktop counterparts. The 5070 GPUs both have less RAM and narrowed memory buses, too, but the mobile RTX 5080 at least comes closer to its desktop iteration, with the same 256-bit bus width and 16GB of RAM.

Rumors say next-gen RTX 50 GPUs will come with big jumps in power requirements

Nvidia is reportedly gearing up to launch the first few cards in its RTX 50-series at CES next week, including an RTX 5090, RTX 5080, RTX 5070 Ti, and RTX 5070. The 5090 will be of particular interest to performance-obsessed, money-is-no-object PC gaming fanatics since it’s the first new GPU in over two years that can beat the performance of 2022’s RTX 4090.

But boosted performance and slower advancements in chip manufacturing technology mean that the 5090’s maximum power draw will far outstrip the 4090’s, according to leakers. VideoCardz reports that the 5090’s thermal design power (TDP) will be set at 575 W, up from 450 W for the already power-hungry RTX 4090. The RTX 5080’s TDP is also increasing to 360 W, up from 320 W for the RTX 4080 Super.

That also puts the RTX 5090 close to the maximum power draw available over a single 12VHPWR connector, which is capable of delivering up to 600 W of power (though once you include the 75 W available via the PCI Express slot on your motherboard, the actual maximum possible power draw for a GPU with a single 12VHPWR connector is a slightly higher 675 W).
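A quick sanity check on those limits (the connector and slot figures come from the respective specs; the 575 W TDP is the leaked number):

```python
# Headroom check: leaked 5090 TDP vs. what one 12VHPWR connector plus
# the PCIe slot can deliver.
connector_max_w = 600  # single 12VHPWR/12V-2x6 cable, per spec
slot_max_w = 75        # PCI Express x16 slot, per spec
tdp_5090_w = 575       # leaked thermal design power

total_available_w = connector_max_w + slot_max_w
print(f"total available: {total_available_w} W")                 # 675 W
print(f"headroom over TDP: {total_available_w - tdp_5090_w} W")  # 100 W
```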

Higher peak power consumption doesn’t necessarily mean that these cards will always draw more power during actual gaming than their 40-series counterparts. And their performance could be good enough that they could still be very efficient cards in terms of performance per watt.

But if you’re considering an upgrade to an RTX 5090 and these power specs are accurate, you may need to consider an upgraded power supply along with your new graphics card. Nvidia recommends at least an 850 W power supply for the RTX 4090 to accommodate what the GPU needs while leaving enough power left over for the rest of the system. An additional 125 W bump suggests that Nvidia will recommend a 1,000 W power supply as the minimum for the 5090.

We’ll probably know more about Nvidia’s next-gen cards after its CES keynote, currently scheduled for 9:30 pm Eastern/6:30 pm Pacific on Monday, January 6.

Nvidia partners leak next-gen RTX 50-series GPUs, including a 32GB 5090

Rumors have suggested that Nvidia will be taking the wraps off of some next-generation RTX 50-series graphics cards at CES in January. And as we get closer to that date, Nvidia’s partners and some of the PC makers have begun to inadvertently leak details of the cards.

According to recent leaks from both Zotac and Acer, it looks like Nvidia is planning to announce four new GPUs next month, all at the high end of its lineup: The RTX 5090, RTX 5080, RTX 5070 Ti, and RTX 5070 were all briefly listed on Zotac’s website, as spotted by VideoCardz. There’s also an RTX 5090D variant for the Chinese market, which will presumably have its specs tweaked to conform with current US export restrictions on high-performance GPUs.

Though the website leak didn’t confirm many specs, it did list the RTX 5090 as including 32GB of GDDR7, an upgrade from the 4090’s 24GB of GDDR6X. An Acer spec sheet for new Predator Orion desktops also lists 32GB of GDDR7 for the 5090, as well as 16GB of GDDR7 for the RTX 5080. This is the same amount of RAM included with the RTX 4080 and 4080 Super.

The 5090 will be a big deal when it launches because no graphics card released since October 2022 has come close to beating the 4090’s performance. Nvidia’s early 2024 Super refresh for some 40-series cards didn’t include a 4090 Super, and AMD’s flagship RX 7900 XTX card is more comfortable competing with the likes of the 4080 and 4080 Super. The 5090 isn’t a card that most people are going to buy, but for the performance-obsessed, it’s the first high-end performance upgrade the GPU market has seen in more than two years.

Nvidia’s new AI audio model can synthesize sounds that have never existed

At this point, anyone who has been following AI research is long familiar with generative models that can synthesize speech or melodic music from nothing but text prompting. Nvidia’s newly revealed “Fugatto” model looks to go a step further, using new synthetic training methods and inference-level combination techniques to “transform any mix of music, voices, and sounds,” including the synthesis of sounds that have never existed.

While Fugatto isn’t available for public testing yet, a sample-filled website showcases how Fugatto can be used to dial a number of distinct audio traits and descriptions up or down, resulting in everything from the sound of saxophones barking to people speaking underwater to ambulance sirens singing in a kind of choir. While the results on display can be a bit hit or miss, the vast array of capabilities on display here helps support Nvidia’s description of Fugatto as “a Swiss Army knife for sound.”

You’re only as good as your data

In an explanatory research paper, over a dozen Nvidia researchers explain the difficulty in crafting a training dataset that can “reveal meaningful relationships between audio and language.” While standard language models can often infer how to handle various instructions from the text-based data itself, it can be hard to generalize descriptions and traits from audio without more explicit guidance.

To that end, the researchers start by using an LLM to generate a Python script that can create a large number of template-based and free-form instructions describing different audio “personas” (e.g., “standard, young-crowd, thirty-somethings, professional”). They then generate a set of both absolute (e.g., “synthesize a happy voice”) and relative (e.g., “increase the happiness of this voice”) instructions that can be applied to those personas.
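Nvidia hasn’t published this pipeline’s code, but the persona/instruction scheme the paper describes might look something like the following minimal sketch. The persona names come from the paper’s example; the templates and function names are hypothetical illustrations:

```python
import itertools

# Hypothetical sketch of the template-based instruction generation the paper
# describes; persona names come from the paper's example, but these templates
# and function names are illustrative, not Nvidia's actual pipeline.
personas = ["standard", "young-crowd", "thirty-somethings", "professional"]
emotions = {"happy": "happiness", "sad": "sadness", "angry": "anger"}

def absolute_instructions() -> list[str]:
    # Absolute instructions, e.g., "synthesize a happy voice"
    return [f"synthesize a {adj} voice in a {persona} style"
            for adj, persona in itertools.product(emotions, personas)]

def relative_instructions() -> list[str]:
    # Relative instructions, e.g., "increase the happiness of this voice"
    return [f"increase the {noun} of this voice" for noun in emotions.values()]

print(absolute_instructions()[:2])
print(relative_instructions())
```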

The wide array of open source audio datasets used as the basis for Fugatto generally don’t have these kinds of trait measurements embedded in them by default. But the researchers make use of existing audio understanding models to create “synthetic captions” for their training clips based on their prompts, creating natural language descriptions that can automatically quantify traits such as gender, emotion, and speech quality. Audio processing tools are also used to describe and quantify training clips on a more acoustic level (e.g., “fundamental frequency variance” or “reverb”).

Facebook, Nvidia push SCOTUS to limit “nuisance” investor suits after scandals


Facebook, Nvidia ask SCOTUS to narrow legal paths to retrieve investor losses.

The Supreme Court will soon weigh two cases that could potentially make it harder for misled investors to sue Big Tech companies after major scandals.

One case involves one of the largest tech scandals of all time, the Facebook-Cambridge Analytica data breach. In 2019, Facebook agreed to pay “more than $5 billion in civil penalties to settle charges by the Federal Trade Commission (FTC) and the Securities and Exchange Commission (SEC) that it had misled its users and investors over the privacy and security of user data on its platform,” a Supreme Court filing said.

The other case involves an allegation that Nvidia intentionally hid how much of its 2017–2018 GPU demand was due to a volatile cryptocurrency boom and not Nvidia’s core gaming business—allegedly misleading investors ahead of a crypto crash. After the bust, Nvidia suddenly had to slash half a billion dollars from its earnings projection, and market experts later estimated that the firm had understated its crypto-related revenue by more than a billion dollars. In 2022, Nvidia paid a $5.5 million SEC penalty over the inadequate disclosures that one SEC chief said “deprived investors of critical information to evaluate the company’s business in a key market.”

Investors, however, have not yet settled their own legal challenges. In both cases, the suing investors convinced the 9th Circuit that the companies had misled investors. But now, the tech companies have appealed to the Supreme Court, hoping to reverse those rulings.

In case documents, each company claimed that its investors had not satisfied the high legal bars, which Nvidia argued Congress designed to prevent “frivolous” or “nuisance” lawsuits from going on “fishing expeditions” to claim securities “fraud by hindsight.” Both warned that a SCOTUS decision upholding the 9th Circuit rulings risked flooding courts with frivolous suits, with Nvidia cautioning that such lawsuits can be “used to injure the entire US economy.”

The Supreme Court will hear arguments in the Facebook case on Wednesday, November 6, then the Nvidia case on November 13.

SCOTUS may be persuaded by tech companies still stuck coping with the aftermath of scandals. A former SEC lawyer, Andrew Feller, told Reuters that the Supreme Court’s conservative majority may continue its “recent track record of handing down business-friendly decisions that narrowed the authority of federal regulators” in these cases. Both cases give justices opportunities to “rein in the power of private plaintiffs to enforce federal rules aimed at punishing corporate misconduct,” Reuters reported.

Facebook defends describing risk as hypothetical

The Facebook case centers on an SEC disclosure where Facebook said that its business may be harmed by a data breach, posing that as a hypothetical, without mentioning the ongoing Cambridge Analytica data breach. Specifically, Facebook wrote, “[a]ny failure to prevent or mitigate . . . improper access to or disclosure of our data or user data . . . could result in the loss or misuse of such data, which could harm our business and reputation and diminish our competitive position.”

Investors felt misled, accusing Facebook of hiding the breach by presenting the risk only as a hypothetical, which implied that no breach had ever occurred in the past and certainly did not disclose the present risk.

However, in a SCOTUS filing, Facebook insisted that “no reasonable investor would interpret a risk disclosure using probabilistic, forward-looking language as impliedly representing that the specified triggering event had never occurred in the past.”

Facebook is now arguing that SCOTUS agreeing that the company should have disclosed the major data breach “would result in a regime under which companies would be required to disclose every previous material incident they have experienced—effectively creating a sweeping regime of omissions liability.”

According to Facebook, news broke about the Cambridge Analytica data breach in 2015, and its business wasn’t immediately harmed. Following that logic, the social media company hopes that SCOTUS will agree that Facebook was only required to disclose the data breach in its SEC filing if Facebook knew its business would likely be harmed from the ongoing breach.

By affirming the 9th Circuit ruling, Facebook alleged, SCOTUS would be “vastly expanding the circumstances in which risk disclosures are deemed false or misleading,” exposing to legal challenges “a wide range of previously immune forward-looking statements—revenue projections, future business plans or objectives, and the like.”

But investors suing argue that Facebook is still being misleading about the data scandal in its court filings.

“The only reason Facebook has ever given to explain why the misappropriation risked no harm was that the event was allegedly disclosed to the public in 2015 and no one cared,” investors’ SCOTUS brief said. But in 2015, Cambridge Analytica denied a report exposing a data breach tied to a Ted Cruz campaign, and a Facebook investigation concluded that no damage had been done.

“Facebook actively misled the public about its investigation, ‘represent[ing] that no misconduct had been discovered,'” investors alleged, and “Facebook’s deception extended to its public filings with the SEC.”

According to investors, the real damage was done when the true extent of the Cambridge Analytica scandal was exposed in 2018. That caused substantial revenue losses that Facebook likely understood it was risking while allegedly leaving investors blind to those risks for years.

Investors argue that disclosure should not be required of every data breach that hits Facebook, whether it harms its business or not, but that the Cambridge Analytica data breach was significant and should have been disclosed as a material risk. The 9th Circuit agreed, holding that “publicly treating such a material adverse event as a merely hypothetical prospect can be misleading even if the event has not yet produced follow-on business harm because the company has kept the truth from the public.”

They further argued that requiring so-called overdisclosure wouldn’t trigger unwarranted litigation, as Facebook suggests, because Congress has always “given considerable attention to concerns over abusive private litigation.”

If Facebook wins, investors alleged, SCOTUS risks giving any tech company “a license to intentionally mislead investors about the occurrence of hugely material events by describing those events as purely hypothetical prospects.” Siding with Facebook would allegedly give “companies an incentive to stuff their annual reports with boilerplate, generic warnings that reveal little about the company’s actual business and to cover up events that could give rise to corporate scandals, as Facebook did here.”

Facebook argued that if the SEC is concerned about specific disclosures connected to the data breach, “the SEC can invoke the rulemaking process to impose” a requirement that companies must disclose all “past material adverse events.”

Nvidia disputes expert’s crypto data

While the Facebook case involved a bigger scandal, the Nvidia case could have bigger legal implications if Nvidia wins.

In the Nvidia case, investors argued that Nvidia CEO Jensen Huang made public statements that allegedly misled investors by downplaying how much of the demand for GPUs was tied to volatile crypto markets. To plead their case, investors relied on statements from Nvidia employees, internal documents like meeting slides, industry research, and an expert opinion crunching general market numbers and estimating that Nvidia “underreported its crypto revenues by $1.126 billion.”

Nvidia claimed it’s far more plausible that the company simply made an “honest miscalculation” while navigating a complex emerging market.

To defend against the suit, Nvidia is arguing that the Private Securities Litigation Reform Act (PSLRA) imposes “special burdens on plaintiffs seeking to bring federal securities fraud class actions” through “heightened pleading requirements” to deter frivolous lawsuits arguing fraud by hindsight.

According to Nvidia, the PSLRA requires investors to allege particular facts based on particular contents of internal Nvidia documents, which goes beyond relying on an expert opinion. The tech company has urged SCOTUS to find that the 9th Circuit “significantly erode[d]” the PSLRA requirements by allowing plaintiffs to “simply” hire “an expert who manufactured data to fit their allegations.”

“They hired an expert to create data and then filed a class action alleging that Nvidia and its CEO committed securities fraud by failing to disclose the data invented by Plaintiffs’ expert,” Nvidia argued.

This allegedly “eviscerates the guardrails that Congress erected to protect the public from abusive securities litigation” and creates a “dangerous” and “easy-to-replicate ‘roadmap’ for plaintiffs to sidestep the PSLRA in this recurring context.”

“Far from serving Congress’s goal of guarding against fishing expeditions by vexatious litigants, the Ninth Circuit’s opinion declares it open season so long as a plaintiff has funding to hire an expert,” Nvidia alleged.

Investors are hoping SCOTUS will uphold the 9th Circuit’s judgment. Instead of seeing their suit as frivolous, they argued that the SEC fine over the same misconduct “undermines any suggestion that this is the type of frivolous suit that the PSLRA was meant to screen out.”

They’ve disputed Nvidia’s claim that they relied solely on a hired expert, arguing that each fact was corroborated by employee witnesses and third-party reports.

If Nvidia wins, investors warned, the SCOTUS decision would risk harming a wide range of private securities litigation that Congress has found “‘is an indispensable tool’ for ‘defrauded investors’ to ‘recover their losses without having to rely upon government action.'”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

US suspects TSMC helped Huawei skirt export controls, report says

In April, TSMC was provided with $6.6 billion in direct CHIPS Act funding to “support TSMC’s investment of more than $65 billion in three greenfield leading-edge fabs in Phoenix, Arizona, which will manufacture the world’s most advanced semiconductors,” the Department of Commerce said.

These investments are key to the Biden-Harris administration’s mission of strengthening “economic and national security by providing a reliable domestic supply of the chips that will underpin the future economy, powering the AI boom and other fast-growing industries like consumer electronics, automotive, Internet of Things, and high-performance computing,” the department noted. And in particular, the funding will help America “maintain our competitive edge” in artificial intelligence, the department said.

It likely wouldn’t make sense to prop TSMC up to help the US “onshore the critical hardware manufacturing capabilities that underpin AI’s deep language learning algorithms and inferencing techniques,” only to then limit access to US-made tech. TSMC’s Arizona fabs are supposed to support companies like Apple, Nvidia, and Qualcomm and enable them to “compete effectively,” the Department of Commerce said.

Currently, it’s unclear where the US probe into TSMC will go or whether a damaging finding could potentially impact TSMC’s CHIPS funding.

Last fall, though, the Department of Commerce published a final rule designed to “prevent CHIPS funds from being used to directly or indirectly benefit foreign countries of concern,” such as China.

If the US suspected that TSMC was aiding Huawei’s AI chip manufacturing, the company could be perceived as avoiding CHIPS guardrails prohibiting TSMC from “knowingly engaging in any joint research or technology licensing effort with a foreign entity of concern that relates to a technology or product that raises national security concerns.”

Violating this “technology clawback” provision of the final rule risks “the full amount” of CHIPS Act funding being “recovered” by the Department of Commerce. That outcome seems unlikely, though, given that TSMC has been awarded more funding than any other recipient apart from Intel.

The Department of Commerce declined Ars’ request to comment on whether TSMC’s CHIPS Act funding could be impacted by the reported probe.
