NVIDIA


US warns companies around the world to stay away from Huawei chips

President Donald Trump’s administration has taken a tougher stance on Chinese technology advances, warning companies around the world that using artificial intelligence chips made by Huawei could trigger criminal penalties for violating US export controls.

The commerce department issued guidance to clarify that Huawei’s Ascend processors were subject to export controls because they almost certainly contained, or were made with, US technology.

Its Bureau of Industry and Security, which oversees export controls, said on Tuesday it was taking a more stringent approach to foreign AI chips, including “issuing guidance that using Huawei Ascend chips anywhere in the world violates US export controls.”

But people familiar with the matter stressed that the bureau had not issued a new rule; rather, it was making clear to companies that Huawei’s chips were likely produced in violation of a measure that requires hard-to-get licenses to export US technology to the Chinese company.

“The guidance is not a new control, but rather a public confirmation of an interpretation that even the mere use anywhere by anyone of a Huawei-designed advanced computing [integrated circuit] would violate export control rules,” said Kevin Wolf, a veteran export control lawyer at Akin Gump.

The bureau said three Huawei Ascend chips—the 910B, 910C, and 910D—were subject to the regulations, noting that such chips are likely to have been “designed with certain US software or technology or produced with semiconductor manufacturing equipment that is the direct product of certain US-origin software or technology, or both.”

The guidance comes as the US has become increasingly concerned at the speed at which Huawei has developed advanced chips and other AI hardware.

Huawei has begun delivering AI chip “clusters” to clients in China that it claims outperform leading US AI chipmaker Nvidia’s comparable product on key metrics such as total compute and memory. The system relies on a large number of 910C chips, which individually fall short of Nvidia’s most advanced offering but collectively deliver superior performance to a rival Nvidia cluster product.


Trump admin to roll back Biden’s AI chip restrictions

The changing face of chip export controls

The Biden-era chip restriction framework, which we covered in January, established a three-tiered system for regulating AI chip exports. The first tier included 17 countries, plus Taiwan, that could receive unlimited advanced chips. A second tier of roughly 120 countries faced caps on the number of chips they could import. The administration entirely blocked the third tier, which included China, Russia, Iran, and North Korea, from accessing the chips.

Commerce Department officials now say they “didn’t like the tiered system” and considered it “unenforceable,” according to Reuters. While no timeline exists for the new rule, a department spokeswoman indicated that officials are still debating the best approach to replace it. The Biden rule was set to take effect on May 15.

Reports suggest the Trump administration might discard the tiered approach in favor of a global licensing system with government-to-government agreements. This could involve direct negotiations with nations like the United Arab Emirates or Saudi Arabia rather than applying broad regional restrictions.


Nvidia GeForce xx60 series is PC gaming’s default GPU, and a new one is out May 19

Nvidia will release the GeForce RTX 5060 on May 19 starting at $299, the company announced via press release today. The new card, a successor to popular past GPUs like the GTX 1060 and RTX 3060, will bring Nvidia’s DLSS 4 and Multi Frame Generation technology to budget-to-mainstream gaming builds—at least, it would if every single GPU launched by any company at any price weren’t instantly selling out these days.

Nvidia announced a May release for the 5060 last month when it released the RTX 5060 Ti for $379 (8GB) and $429 (16GB). Prices for that card so far haven’t been as inflated as they have been for the RTX 5070 on up, but the cheapest ones you can currently get are still between $50 and $100 over that MSRP. Unless Nvidia and its partners have made dramatically more RTX 5060 cards than they’ve made of any other model so far, expect this card to carry a similar pricing premium for a while.

| | RTX 5060 Ti | RTX 4060 Ti | RTX 5060 | RTX 4060 | RTX 5050 (leaked) | RTX 3050 |
| CUDA Cores | 4,608 | 4,352 | 3,840 | 3,072 | 2,560 | 2,560 |
| Boost Clock | 2,572 MHz | 2,535 MHz | 2,497 MHz | 2,460 MHz | Unknown | 1,777 MHz |
| Memory Bus Width | 128-bit | 128-bit | 128-bit | 128-bit | 128-bit | 128-bit |
| Memory bandwidth | 448GB/s | 288GB/s | 448GB/s | 272GB/s | Unknown | 224GB/s |
| Memory size | 8GB or 16GB GDDR7 | 8GB or 16GB GDDR6 | 8GB GDDR7 | 8GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 |
| TGP | 180 W | 160 W | 145 W | 115 W | 130 W | 130 W |

Compared to the RTX 4060, the RTX 5060 adds a few hundred extra CUDA cores and gets a big memory bandwidth increase thanks to the move from GDDR6 to GDDR7. But its utility at higher resolutions will continue to be limited by its 8GB of RAM, which is already becoming a problem for a handful of high-end games at 1440p and 4K.
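
That bandwidth jump follows directly from the memory’s per-pin data rate, since the bus stays 128 bits wide. Here is a minimal sketch of the arithmetic, assuming per-pin rates of 28Gbps for the 5060’s GDDR7 and 17Gbps for the 4060’s GDDR6 (rates inferred to match the table above rather than quoted from Nvidia):

```python
# Rough sketch: peak memory bandwidth from bus width and per-pin data rate.
# The per-pin rates (28 Gbps GDDR7, 17 Gbps GDDR6) are assumptions chosen to
# match the table's 448GB/s and 272GB/s figures, not official spec quotes.

def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps_per_pin / 8  # bits -> bytes

print(memory_bandwidth_gbs(128, 28))  # RTX 5060 (GDDR7): 448.0 GB/s
print(memory_bandwidth_gbs(128, 17))  # RTX 4060 (GDDR6): 272.0 GB/s
```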

Regardless of its performance, the RTX 5060 will likely become a popular mainstream graphics card, just like its predecessors. Of the Steam Hardware Survey’s top 10 GPUs, three are RTX xx60-series desktop GPUs (the 3060, 4060, and 2060); the laptop versions of the 4060 and 3060 are two of the others. If supply of the RTX 5060 is adequate and pricing isn’t out of control, we’d expect it to shoot up these charts pretty quickly over the next few months.


Trump can’t keep China from getting AI chips, TSMC suggests

“Despite TSMC’s best efforts to comply with all relevant export control and sanctions laws and regulations, there is no assurance that its business activities will not be found incompliant with export control laws and regulations,” TSMC said.

Further, “if TSMC or TSMC’s business partners fail to obtain appropriate import, export or re-export licenses or permits or are found to have violated applicable export control or sanctions laws, TSMC may also be adversely affected, through reputational harm as well as other negative consequences, including government investigations and penalties resulting from relevant legal proceedings,” TSMC warned.

Trump’s tariffs may end TSMC’s “tariff-proof” era

TSMC is thriving despite years of tariffs and export controls, its report said, with at least one analyst suggesting that, so far, the company appears “somewhat tariff-proof.” However, all of that could be changing fast, as “US President Donald Trump announced in 2025 an intention to impose more expansive tariffs on imports into the United States,” TSMC said.

“Any tariffs imposed on imports of semiconductors and products incorporating chips into the United States may result in increased costs for purchasing such products, which may, in turn, lead to decreased demand for TSMC’s products and services and adversely affect its business and future growth,” TSMC said.

And if TSMC’s business is rattled by escalations in the US-China trade war, TSMC warned, that risks disrupting the entire global semiconductor supply chain.

Trump’s semiconductor tariff plans remain uncertain. About a week ago, Trump claimed the rates would be unveiled “over the next week,” Reuters reported, which means they could be announced any day now.


Razer built a game-streaming app on top of Moonlight, and it’s not too bad

I intentionally touched as few settings as I could on each device (minus a curious poke or two at the “Optimize” option), and the experience was fairly streamlined. I didn’t have to set resolutions or guess at a data-streaming rate; Razer defaults to 30Mbps, which generally provides rock-solid 1080p and pretty smooth 1440p-ish resolutions. My main complaints were the missing tricks I had picked up in Moonlight, like holding the start/menu button to activate a temporary mouse cursor or hitting a button combination to exit out of games.
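
For a rough sense of why the 30Mbps default holds up, you can spread the bitrate across the pixels the encoder has to cover each second. A minimal back-of-the-envelope sketch, assuming a 60 fps stream (the frame rate, and the idea that a modern hardware encoder looks good above roughly 0.1 to 0.2 bits per pixel, are illustrative assumptions rather than anything Razer documents):

```python
# Back-of-the-envelope: bits per pixel left by a 30Mbps stream at 60 fps.
# The 60 fps figure is an assumption for illustration, not a Razer spec.

def bits_per_pixel(bitrate_mbps: float, width: int, height: int, fps: int) -> float:
    return bitrate_mbps * 1_000_000 / (width * height * fps)

for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440)}.items():
    print(f"{name}: {bits_per_pixel(30, w, h, 60):.2f} bits per pixel")
# 1080p: ~0.24 bits/pixel, 1440p: ~0.14 -- consistent with "rock-solid 1080p
# and pretty smooth 1440p-ish" at the 30Mbps default.
```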

Razer’s app is not limited to Steam games like Steam Link or Xbox/Game Pass titles like Remote Play and can work with pretty much any game you have installed. It is, however, limited to Windows and the major mobile platforms, leaving out Macs, Apple TVs, Linux, Steam Deck and other handhelds, Raspberry Pi setups, and so on. Still, for what it does, it works pretty well, and its interface, while Razer-green and a bit showy, was easier to navigate than Moonlight. I did not, for example, have to look up the launching executables and runtime options for certain games to make them launch directly from my mobile device.

Streaming-wise, I noticed no particular differences from the Moonlight experience, which one might expect, given the shared codebase. The default choice of streaming at my iPad’s native screen resolution and refresh rate saved me the headaches of figuring out the right balance of black box cut-offs and resolution that I would typically go through with Steam Link or sometimes Moonlight.


Nvidia confirms the Switch 2 supports DLSS, G-Sync, and ray tracing

In the wake of the Switch 2 reveal, neither Nintendo nor Nvidia has gone into any detail at all about the exact chip inside the upcoming handheld—technically, we are still not sure what Arm CPU architecture or what GPU architecture it uses, how much RAM we can expect it to have, how fast that memory will be, or exactly how many graphics cores we’re looking at.

But interviews with Nintendo executives and a blog post from Nvidia did at least confirm several of the new chip’s capabilities. The “custom Nvidia processor” has a GPU “with dedicated [Ray-Tracing] Cores and Tensor Cores for stunning visuals and AI-driven enhancements,” writes Nvidia Software Engineering VP Muni Anda.

This means that, as rumored, the Switch 2 will support Nvidia’s Deep Learning Super Sampling (DLSS) upscaling technology, which upscales a lower-resolution image into a higher-resolution one with less of a performance impact than native rendering and less loss of quality than traditional upscaling methods. For Switch 2 games that can render at 4K or at 1080p and 120 FPS, DLSS will likely be what makes that possible.
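
Most of the performance win from DLSS-style upscaling comes simply from shading fewer pixels before reconstruction. A minimal sketch of that arithmetic, using a hypothetical 1080p internal render resolution for a 4K output (actual Switch 2 render resolutions are not confirmed here):

```python
# Illustration: shading cost scales roughly with rendered pixel count, so
# rendering internally at 1080p and upscaling to 4K shades ~1/4 of the pixels.
# The 1080p internal resolution is an assumed example, not a confirmed mode.

def pixels(width: int, height: int) -> int:
    return width * height

native_4k = pixels(3840, 2160)
internal_1080p = pixels(1920, 1080)
print(internal_1080p / native_4k)  # 0.25 -> DLSS reconstructs the remaining detail
```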

The other major Nvidia technology supported by the new Switch is G-Sync, which prevents screen tearing when games are running at variable frame rates. Nvidia notes that G-Sync is only supported in handheld mode and not in docked mode, which could be a limitation of the Switch dock’s HDMI port.


No cloud needed: Nvidia creates gaming-centric AI chatbot that runs on your GPU

Nvidia has seen its fortunes soar in recent years as its AI-accelerating GPUs have become worth their weight in gold. Most people use their Nvidia GPUs for games, but why not use them for both gaming and AI? Nvidia’s newly released, experimental G-Assist AI runs locally on your GPU alongside your games, helping you optimize your PC and get the most out of those games. It can do some neat things, but Nvidia isn’t kidding when it says this tool is experimental.

G-Assist is available in the Nvidia desktop app, and it consists of a floating overlay window. After invoking the overlay, you can either type or speak to G-Assist to check system stats or make tweaks to your settings. You can ask basic questions like, “How does DLSS Frame Generation work?” but it also has control over some system-level settings.

By calling up G-Assist, you can get a rundown of how your system is running, including custom data charts created on the fly by G-Assist. You can also ask the AI to tweak your machine, for example, optimizing the settings for a particular game or toggling on or off a setting. G-Assist can even overclock your GPU if you so choose, complete with a graph of expected performance gains.

Nvidia on G-Assist.

Nvidia demoed G-Assist last year with some impressive features tied to the active game. That version of G-Assist could see what you were doing and offer suggestions about how to reach your next objective. The game integration is sadly quite limited in the public version, supporting just a few games, like Ark: Survival Evolved.

There is, however, support for a number of third-party plug-ins that give G-Assist control over Logitech G, Corsair, MSI, and Nanoleaf peripherals. So, for instance, G-Assist could talk to your MSI motherboard to control your thermal profile or ping Logitech G to change your LED settings.


Nvidia announces DGX desktop “personal AI supercomputers”

During Tuesday’s Nvidia GTC keynote, CEO Jensen Huang unveiled two “personal AI supercomputers” called DGX Spark and DGX Station, both powered by the Grace Blackwell platform. In a way, they are a new type of AI PC architecture specifically built for running neural networks, and five major PC manufacturers will build the supercomputers.

These desktop systems, first previewed as “Project DIGITS” in January, aim to bring AI capabilities to developers, researchers, and data scientists who need to prototype, fine-tune, and run large AI models locally. DGX systems can serve as standalone desktop AI labs or “bridge systems” that allow AI developers to move their models from desktops to DGX Cloud or any AI cloud infrastructure with few code changes.

Huang explained the rationale behind these new products in a news release, saying, “AI has transformed every layer of the computing stack. It stands to reason a new class of computers would emerge—designed for AI-native developers and to run AI-native applications.”

The smaller DGX Spark features the GB10 Grace Blackwell Superchip with Blackwell GPU and fifth-generation Tensor Cores, delivering up to 1,000 trillion operations per second for AI.

Meanwhile, the more powerful DGX Station includes the GB300 Grace Blackwell Ultra Desktop Superchip with 784GB of coherent memory and the ConnectX-8 SuperNIC supporting networking speeds up to 800Gb/s.

The DGX architecture serves as a reference design that other manufacturers can build and sell. Asus, Dell, HP, and Lenovo will develop and sell both DGX systems, with DGX Spark reservations opening today and DGX Station expected later in 2025. Additional manufacturing partners for the DGX Station include BOXX, Lambda, and Supermicro, with systems expected to be available later this year.

Since the systems will be manufactured by different companies, Nvidia did not mention pricing for the units. However, in January, Nvidia mentioned that the base-level configuration for a DGX Spark-like computer would retail for around $3,000.


Nvidia announces “Rubin Ultra” and “Feynman” AI chips for 2027 and 2028

On Tuesday at Nvidia’s GTC 2025 conference in San Jose, California, CEO Jensen Huang revealed several new AI-accelerating GPUs the company plans to release over the coming months and years. He also revealed more specifications about previously announced chips.

The centerpiece announcement was Vera Rubin, first teased at Computex 2024 and now scheduled for release in the second half of 2026. This GPU, named after a famous astronomer, will feature tens of terabytes of memory and comes with a custom Nvidia-designed CPU called Vera.

According to Nvidia, Vera Rubin will deliver significant performance improvements over its predecessor, Grace Blackwell, particularly for AI training and inference.

Specifications for Vera Rubin, presented by Jensen Huang during his GTC 2025 keynote.


Vera Rubin features two GPU dies packaged together that deliver 50 petaflops of FP4 inference performance per chip. When configured in a full NVL144 rack, the system delivers 3.6 exaflops of FP4 inference compute—3.3 times more than Blackwell Ultra’s 1.1 exaflops in a similar rack configuration.

The Vera CPU features 88 custom ARM cores with 176 threads connected to Rubin GPUs via a high-speed 1.8 TB/s NVLink interface.

Huang also announced Rubin Ultra, which will follow in the second half of 2027. Rubin Ultra will use the NVL576 rack configuration and feature individual GPUs made up of four reticle-sized dies, delivering 100 petaflops of FP4 performance per chip (FP4 is a 4-bit floating-point format used for representing and processing numbers within AI models).

At the rack level, Rubin Ultra will provide 15 exaflops of FP4 inference compute and 5 exaflops of FP8 training performance—about four times more powerful than the Rubin NVL144 configuration. Each Rubin Ultra GPU will include 1TB of HBM4e memory, with the complete rack containing 365TB of fast memory.
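
As a quick sanity check, the rack-level scaling factors quoted above follow directly from the exaflops figures in the text; a minimal sketch using only those numbers:

```python
# Cross-checking the scaling factors quoted above, using only the FP4 inference
# figures given in the text (all values in exaflops).
blackwell_ultra_rack = 1.1   # Blackwell Ultra in a similar rack configuration
vera_rubin_nvl144 = 3.6      # Vera Rubin NVL144 rack
rubin_ultra_nvl576 = 15.0    # Rubin Ultra NVL576 rack

print(vera_rubin_nvl144 / blackwell_ultra_rack)  # ~3.3x, matching "3.3 times more"
print(rubin_ultra_nvl576 / vera_rubin_nvl144)    # ~4.2x, matching "about four times"
```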


Leaked GeForce RTX 5060 and 5050 specs suggest Nvidia will keep playing it safe

Nvidia has launched all of the GeForce RTX 50-series GPUs that it announced at CES, at least technically—whether you’re buying from Nvidia, AMD, or Intel, it’s nearly impossible to find any of these new cards at their advertised prices right now.

But hope springs eternal, and newly leaked specs for GeForce RTX 5060 and 5050-series cards suggest that Nvidia may be announcing these lower-end cards soon. These kinds of cards are rarely exciting, but Steam Hardware Survey data shows that these xx60 and xx50 cards are what the overwhelming majority of PC gamers are putting in their systems.

The specs, posted by a reliable leaker named Kopite and reported by Tom’s Hardware and others, suggest a refresh that’s in line with what Nvidia has done with most of the 50-series so far. Along with a move to the next-generation Blackwell architecture, the 5060 GPUs each come with a small increase to the number of CUDA cores, a jump from GDDR6 to GDDR7, and an increase in power consumption, but no changes to the amount of memory or the width of the memory bus. The 8GB versions, in particular, will probably continue to be marketed primarily as 1080p cards.

| | RTX 5060 Ti (leaked) | RTX 4060 Ti | RTX 5060 (leaked) | RTX 4060 | RTX 5050 (leaked) | RTX 3050 |
| CUDA Cores | 4,608 | 4,352 | 3,840 | 3,072 | 2,560 | 2,560 |
| Boost Clock | Unknown | 2,535 MHz | Unknown | 2,460 MHz | Unknown | 1,777 MHz |
| Memory Bus Width | 128-bit | 128-bit | 128-bit | 128-bit | 128-bit | 128-bit |
| Memory bandwidth | Unknown | 288 GB/s | Unknown | 272 GB/s | Unknown | 224 GB/s |
| Memory size | 8GB or 16GB GDDR7 | 8GB or 16GB GDDR6 | 8GB GDDR7 | 8GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 |
| TGP | 180 W | 160 W | 150 W | 115 W | 130 W | 130 W |

As with the 4060 Ti, the 5060 Ti is said to come in two versions, one with 8GB of RAM and one with 16GB. One of the 4060 Ti’s problems was that its relatively narrow 128-bit memory bus limited its performance at 1440p and 4K resolutions even with 16GB of RAM—the bandwidth increase from GDDR7 could help with this, but we’ll need to test to see for sure.


AMD Radeon RX 9070 and 9070 XT review: RDNA 4 fixes a lot of AMD’s problems


For $549 and $599, AMD comes close to knocking out Nvidia’s GeForce RTX 5070.

AMD’s Radeon RX 9070 and 9070 XT are its first cards based on the RDNA 4 GPU architecture. Credit: Andrew Cunningham


AMD is a company that knows a thing or two about capitalizing on a competitor’s weaknesses. The company got through its early-2010s nadir partially because its Ryzen CPUs struck just as Intel’s current manufacturing woes began to set in, first with somewhat-worse CPUs that were great value for the money and later with CPUs that were better than anything Intel could offer.

Nvidia’s untrammeled dominance of the consumer graphics card market should also be an opportunity for AMD. Nvidia’s GeForce RTX 50-series graphics cards have given buyers very little to get excited about, with an unreachably expensive high-end 5090 refresh and modest-at-best gains from 5080 and 5070-series cards that are also pretty expensive by historical standards, when you can buy them at all. Tech YouTubers—both the people making the videos and the people leaving comments underneath them—have been almost uniformly unkind to the 50 series, hinting at consumer frustrations and pent-up demand for competitive products from other companies.

Enter AMD’s Radeon RX 9070 XT and RX 9070 graphics cards. These are aimed right at the middle of the current GPU market at the intersection of high sales volume and decent profit margins. They promise good 1440p and entry-level 4K gaming performance and improved power efficiency compared to previous-generation cards, with fixes for long-time shortcomings (ray-tracing performance, video encoding, and upscaling quality) that should, in theory, make them more tempting for people looking to ditch Nvidia.


RX 9070 and 9070 XT specs and speeds

| | RX 9070 XT | RX 9070 | RX 7900 XTX | RX 7900 XT | RX 7900 GRE | RX 7800 XT |
| Compute units (Stream processors) | 64 RDNA4 (4,096) | 56 RDNA4 (3,584) | 96 RDNA3 (6,144) | 84 RDNA3 (5,376) | 80 RDNA3 (5,120) | 60 RDNA3 (3,840) |
| Boost Clock | 2,970 MHz | 2,520 MHz | 2,498 MHz | 2,400 MHz | 2,245 MHz | 2,430 MHz |
| Memory Bus Width | 256-bit | 256-bit | 384-bit | 320-bit | 256-bit | 256-bit |
| Memory Bandwidth | 650GB/s | 650GB/s | 960GB/s | 800GB/s | 576GB/s | 624GB/s |
| Memory size | 16GB GDDR6 | 16GB GDDR6 | 24GB GDDR6 | 20GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 |
| Total board power (TBP) | 304 W | 220 W | 355 W | 315 W | 260 W | 263 W |

AMD’s high-level performance promise for the RDNA 4 architecture revolves around big increases in performance per compute unit (CU). An RDNA 4 CU, AMD says, is nearly twice as fast in rasterized performance as RDNA 2 (that is, rendering without ray-tracing effects enabled) and nearly 2.5 times as fast as RDNA 2 in games with ray-tracing effects enabled. Performance for at least some machine learning workloads also goes way up—twice as fast as RDNA 3 and four times as fast as RDNA 2.

We’ll see this in more detail when we start comparing performance, but AMD seems to have accomplished this goal. Despite having 64 or 56 compute units (for the 9070 XT and 9070, respectively), the cards’ performance often competes with AMD’s last-generation flagships, the RX 7900 XTX and 7900 XT. Those cards came with 96 and 84 compute units, respectively. The 9070 cards are specced a lot more like last generation’s RX 7800 XT—including the 16GB of GDDR6 on a 256-bit memory bus, as AMD still isn’t using GDDR6X or GDDR7—but they’re much faster than the 7800 XT was.
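
One way to see how much work those per-CU gains are doing is a rough paper-throughput comparison built from the spec table above, multiplying stream processors by boost clock. This is only a relative estimate; it assumes the per-ALU throughput factor is the same across RDNA 3 and RDNA 4, so that factor cancels out of the ratios:

```python
# Relative "paper" shader throughput: stream processors x boost clock (GHz),
# normalized to the RX 7800 XT. The per-ALU FLOPs-per-clock factor is assumed
# constant across architectures and cancels out, so only ratios are meaningful.

cards = {
    "RX 9070 XT":  (4096, 2.970),
    "RX 9070":     (3584, 2.520),
    "RX 7900 XTX": (6144, 2.498),
    "RX 7900 XT":  (5376, 2.400),
    "RX 7800 XT":  (3840, 2.430),
}

baseline = cards["RX 7800 XT"][0] * cards["RX 7800 XT"][1]
for name, (shaders, clock_ghz) in cards.items():
    print(f"{name}: {shaders * clock_ghz / baseline:.2f}x the RX 7800 XT")
# The 9070 XT lands around 1.3x the 7800 XT and well below the 7900 XTX (~1.6x)
# on paper, so matching the old flagships in practice points to the per-CU
# architectural gains described above.
```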

AMD has dramatically increased the performance per compute unit for RDNA 4. Credit: AMD

The 9070 series also uses a new 4 nm manufacturing process from TSMC, an upgrade from the 7000 series’ 5 nm process (and the 6 nm process used for the separate memory controller dies in higher-end RX 7000-series models that used chiplets). AMD’s GPUs are normally a bit less efficient than Nvidia’s, but the architectural improvements and the new manufacturing process allow AMD to do some important catch-up.

Both of the 9070 models we tested were ASRock Steel Legend models, and the 9070 and 9070 XT had identical designs—we’ll probably see a lot of this from AMD’s partners since the GPU dies and the 16GB RAM allotments are the same for both models. Both use two 8-pin power connectors; AMD says partners are free to use the 12-pin power connector if they want, but given Nvidia’s ongoing issues with it, most cards will likely stick with the reliable 8-pin connectors.

AMD doesn’t appear to be making and selling reference designs for the 9070 series the way it did for some RX 7000 and 6000-series GPUs or the way Nvidia does with its Founders Edition cards. From what we’ve seen, 2 or 2.5-slot, triple-fan designs will be the norm, the way they are for most midrange GPUs these days.

Testbed notes

We used the same GPU testbed for the Radeon RX 9070 series as we have for our GeForce RTX 50-series reviews.

An AMD Ryzen 7 9800X3D ensures that our graphics cards will be CPU-limited as little as possible. An ample 1050 W power supply, 32GB of DDR5-6000, and an AMD X670E motherboard with the latest BIOS installed round out the hardware. On the software side, we use an up-to-date installation of Windows 11 24H2 and recent GPU drivers for older cards, ensuring that our tests reflect whatever optimizations Microsoft, AMD, Nvidia, and game developers have made since the last generation of GPUs launched.

We have numbers for all of Nvidia’s RTX 50-series GPUs so far, plus most of the 40-series cards, most of AMD’s RX 7000-series cards, and a handful of older GPUs from the RTX 30-series and RX 6000 series. We’ll focus on comparing the 9070 XT and 9070 to other 1440p-to-4K graphics cards since those are the resolutions AMD is aiming at.

Performance

At $549 and $599, the 9070 series is priced to match Nvidia’s $549 RTX 5070 and undercut the $749 RTX 5070 Ti. So we’ll focus on comparing the 9070 series to those cards, plus the top tier of GPUs from the outgoing RX 7000-series.

Some 4K rasterized benchmarks.

Starting at the top with rasterized benchmarks with no ray-tracing effects, the 9070 XT does a good job of standing up to Nvidia’s RTX 5070 Ti, coming within a few frames per second of its performance in all the games we tested (and scoring very similarly in the 3DMark Time Spy Extreme benchmark).

Both cards are considerably faster than the RTX 5070—between 15 and 28 percent for the 9070 XT and between 5 and 13 percent for the regular 9070 (our 5070 scored weirdly low in Horizon Zero Dawn Remastered, so we’d treat those numbers as outliers for now). Both 9070 cards also stack up well next to the RX 7000 series here—the 9070 can usually just about match the performance of the 7900 XT, and the 9070 XT usually beats it by a little. Both cards thoroughly outrun the old RX 7900 GRE, which was AMD’s $549 GPU offering just a year ago.

The 7900 XT does have 20GB of RAM instead of 16GB, which might help its performance in some edge cases. But 16GB is still perfectly generous for a 1440p-to-4K graphics card—the 5070 only offers 12GB, which could end up limiting its performance in some games as RAM requirements continue to rise.

On ray-tracing improvements

Nvidia got a jump on AMD when it introduced hardware-accelerated ray-tracing in the RTX 20-series in 2018. And while these effects were only supported in a few games at the time, many modern games offer at least some kind of ray-traced lighting effects.

AMD caught up a little when it began shipping its own ray-tracing support in the RDNA2 architecture in late 2020, but the issue since then has always been that AMD cards have taken a larger performance hit than GeForce GPUs when these effects are turned on. RDNA3 promised improvements, but our tests still generally showed the same deficit as before.

So we’re looking for two things with RDNA4’s ray-tracing performance. First, we want the numbers to be higher than they were for comparably priced RX 7000-series GPUs, the same thing we look for in non-ray-traced (or rasterized) rendering performance. Second, we want the size of the performance hit to go down. To pick an example: the RX 7900 GRE could compete with Nvidia’s RTX 4070 Ti Super in games without ray tracing, but it was closer to a non-Super RTX 4070 in ray-traced games. That gap has helped keep AMD’s cards from being across-the-board competitive with Nvidia’s—is that any different now?

Benchmarks for games with ray-tracing effects enabled. Both AMD cards generally keep pace with the 5070 in these tests thanks to RDNA 4’s improvements.

The picture our tests paint is mixed but tentatively positive. The 9070 series and RDNA4 post solid improvements in the Cyberpunk 2077 benchmarks, substantially closing the performance gap with Nvidia. In games where AMD’s cards performed well enough before—here represented by Returnal—performance goes up, but roughly proportionately with rasterized performance. And both 9070 cards still punch below their weight in Black Myth: Wukong, falling substantially behind the 5070 under the punishing Cinematic graphics preset.

So the benefits you see, as with any GPU update, will depend a bit on the game you’re playing. There’s also a possibility that game optimizations and driver updates made with RDNA4 in mind could boost performance further. We can’t say that AMD has caught all the way up to Nvidia here—in ray-traced games, the 9070 and 9070 XT are both closer to the GeForce RTX 5070 than the 5070 Ti, despite staying closer to the 5070 Ti in rasterized tests—but there is real, measurable improvement, which is what we were looking for.

Power usage

The 9070 series’ performance increases are particularly impressive when you look at the power-consumption numbers. The 9070 comes close to the 7900 XT’s performance but uses 90 W less power under load. It beats the RTX 5070 most of the time but uses around 30 W less power.

The 9070 XT is a little less impressive on this front—AMD has set clock speeds pretty high, and this can increase power use disproportionately. The 9070 XT is usually 10 or 15 percent faster than the 9070 but uses 38 percent more power. The XT’s power consumption is similar to the RTX 5070 Ti’s (a GPU it often matches) and the 7900 XT’s (a GPU it always beats), so it’s not too egregious, but it’s not as standout as the 9070’s.
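
A quick performance-per-watt calculation from the figures above makes that trade-off concrete; the midpoint of the quoted performance uplift is an assumption used for illustration:

```python
# If the 9070 XT is ~10-15 percent faster than the 9070 while drawing 38 percent
# more power, its performance per watt is noticeably worse. The midpoint uplift
# is an assumption for illustration.

perf_uplift = 1.125   # assumed midpoint of "10 or 15 percent faster"
power_uplift = 1.38   # "38 percent more power" (304 W vs. 220 W TBP)

print(perf_uplift / power_uplift)  # ~0.82 -> roughly 18% worse perf-per-watt
# than the plain RX 9070, which is why the 9070 looks like the efficiency pick.
```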

AMD gives 9070 owners a couple of new toggles for power limits, though, which we’ll talk about in the next section.

Experimenting with “Total Board Power”

We don’t normally dabble much with overclocking when we review CPUs or GPUs—we’re happy to leave that to folks at other outlets. But when we review CPUs, we do usually test them with multiple power limits in place. Playing with power limits is easier (and occasionally safer) than actually overclocking, and it often comes with large gains to either performance (a chip that performs much better when given more power to work with) or efficiency (a chip that can run at nearly full speed without using as much power).

Initially, I experimented with the RX 9070’s power limits by accident. AMD sent me one version of the 9070 but exchanged it because of a minor problem the OEM identified with some units early in the production run. I had, of course, already run most of our tests on it, but that’s the way these things go sometimes.

By bumping the regular RX 9070’s TBP up just a bit, you can nudge it closer to 9070 XT-level performance.

The replacement RX 9070 card, an ASRock Steel Legend model, was performing significantly better in our tests, sometimes nearly closing the gap between the 9070 and the XT. It wasn’t until I tested power consumption that I discovered the explanation—by default, it was using a 245 W power limit rather than the AMD-defined 220 W limit. Usually, these kinds of factory tweaks don’t make much of a difference, but for the 9070, this power bump gave it a nice performance boost while still keeping it close to the 250 W power limit of the GeForce RTX 5070.

The 90-series cards we tested both add some power presets to AMD’s Adrenalin app in the Performance tab under Tuning. These replace and/or complement some of the automated overclocking and undervolting buttons that exist here for older Radeon cards. Clicking Favor Efficiency or Favor Performance can ratchet the card’s Total Board Power (TBP) up or down, limiting performance so that the card runs cooler and quieter or allowing the card to consume more power so it can run a bit faster.

The 9070 cards get slightly different performance tuning options in the Adrenalin software. These buttons mostly change the card’s Total Board Power (TBP), making it simple to either improve efficiency or boost performance a bit. Credit: Andrew Cunningham

For this particular ASRock 9070 card, the default TBP is set to 245 W. Selecting “Favor Efficiency” sets it to the default 220 W. You can double-check these values using an app like HWInfo, which displays both the current TBP and the maximum TBP in its Sensors Status window. Clicking the Custom button in the Adrenalin software gives you access to a Power Tuning slider, which for our card allowed us to ratchet the TBP up by up to 10 percent or down by as much as 30 percent.
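
For this card’s numbers, that slider works out to the range below; a minimal sketch of the arithmetic:

```python
# TBP range available on this particular ASRock RX 9070 via the Power Tuning
# slider, given its 245 W default and the +10%/-30% limits described above.

default_tbp_w = 245
max_tbp_w = default_tbp_w * 1.10  # slider maxed out
min_tbp_w = default_tbp_w * 0.70  # slider at -30%
print(f"{min_tbp_w:.0f} W to {max_tbp_w:.0f} W")  # roughly 172 W to 270 W
```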

This is all the firsthand testing we did with the power limits of the 9070 series, though I would assume that adding a bit more power also adds more overclocking headroom (bumping up the power limits is common for GPU overclockers no matter who makes your card). AMD says that some of its partners will ship 9070 XT models set to a roughly 340 W power limit out of the box but acknowledges that “you start seeing diminishing returns as you approach the top of that [power efficiency] curve.”

But it’s worth noting that the driver has another automated set-it-and-forget-it power setting you can easily use to find your preferred balance of performance and power efficiency.

A quick look at FSR4 performance

There’s a toggle in the driver for enabling FSR 4 in FSR 3.1-supporting games. Credit: Andrew Cunningham

One of AMD’s headlining improvements to the RX 90-series is the introduction of FSR 4, a new version of its FidelityFX Super Resolution upscaling algorithm. Like Nvidia’s DLSS and Intel’s XeSS, FSR 4 can take advantage of RDNA 4’s machine learning processing power to do hardware-backed upscaling instead of taking a hardware-agnostic approach as the older FSR versions did. AMD says this will improve upscaling quality, but it also means FSR4 will only work on RDNA 4 GPUs.

The good news is that FSR 3.1 and FSR 4 are forward- and backward-compatible. Games that have already added FSR 3.1 support can automatically take advantage of FSR 4, and games that support FSR 4 on the 90-series can just run FSR 3.1 on older and non-AMD GPUs.

FSR 4 comes with a small performance hit compared to FSR 3.1 at the same settings, but better overall quality can let you drop to a faster preset like Balanced or Performance and end up with more frames-per-second overall. Credit: Andrew Cunningham

The only game in our current test suite to be compatible with FSR 4 is Horizon Zero Dawn Remastered, and we tested its performance using both FSR 3.1 and FSR 4. In general, we found that FSR 4 improved visual quality at the cost of just a few frames per second when run at the same settings—not unlike using Nvidia’s recently released “transformer model” for DLSS upscaling.

Many games will let you choose which version of FSR you want to use. But for FSR 3.1 games that don’t have a built-in FSR 4 option, there’s a toggle in AMD’s Adrenalin driver you can hit to switch to the better upscaling algorithm.

Even if they come with a performance hit, new upscaling algorithms can still improve performance by making the lower-resolution presets look better. We run all of our testing in “Quality” mode, which generally renders at two-thirds of native resolution and scales up. But if FSR 4 running in Balanced or Performance mode looks the same to your eyes as FSR 3.1 running in Quality mode, you can still end up with a net performance improvement in the end.
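
In pixel terms, here is roughly what those presets mean for a 4K output. The two-thirds scale for Quality mode comes from the paragraph above; the Balanced and Performance factors are assumed typical values rather than figures confirmed here:

```python
# Render resolution and pixel share for upscaler presets at a 4K output.
# Quality's 2/3 scale is from the text; the Balanced and Performance scales
# are assumed typical values, not AMD-confirmed numbers.

presets = {"Quality": 2 / 3, "Balanced (assumed)": 0.58, "Performance (assumed)": 0.5}
out_w, out_h = 3840, 2160

for name, scale in presets.items():
    w, h = int(out_w * scale), int(out_h * scale)
    share = (w * h) / (out_w * out_h)
    print(f"{name}: renders {w}x{h} (~{share:.0%} of the output pixels)")
# Quality: 2560x1440 (~44%); Balanced and Performance shade even fewer pixels,
# which is where the net frame-rate win comes from if image quality holds up.
```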

RX 9070 or 9070 XT?

Just $50 separates the advertised price of the 9070 from that of the 9070 XT, something both Nvidia and AMD have done in the past that I find a bit annoying. If you have $549 to spend on a graphics card, you can almost certainly scrape together $599 for a graphics card. All else being equal, I’d tell most people trying to choose one of these to just spring for the 9070 XT.

That said, availability and retail pricing for these might be all over the place. If your choices are a regular RX 9070 or nothing, or an RX 9070 at $549 and an RX 9070 XT at any price higher than $599, I would just grab a 9070 and not sweat it too much. The two cards aren’t that far apart in performance, especially if you bump the 9070’s TBP up a little bit, and games that are playable on one will be playable at similar settings on the other.

Pretty close to great

If you’re building a 1440p or 4K gaming box, the 9070 series might be the ones to beat right now. Credit: Andrew Cunningham

We’ve got plenty of objective data in here, so I don’t mind saying that I came into this review kind of wanting to like the 9070 and 9070 XT. Nvidia’s 50-series cards have mostly upheld the status quo, and for the last couple of years, the status quo has been sustained high prices and very modest generational upgrades. And who doesn’t like an underdog story?

I think our test results mostly justify my priors. The RX 9070 and 9070 XT are very competitive graphics cards, helped along by a particularly mediocre RTX 5070 refresh from Nvidia. In non-ray-traced games, both cards wipe the floor with the 5070 and come close to competing with the $749 RTX 5070 Ti. In games and synthetic benchmarks with ray-tracing effects on, both cards can usually match or slightly beat the similarly priced 5070, partially (if not entirely) addressing AMD’s longstanding performance deficit here. Neither card comes close to the 5070 Ti in these games, but they’re also not priced like a 5070 Ti.

Just as impressively, the Radeon cards compete with the GeForce cards while consuming similar amounts of power. At stock settings, the RX 9070 uses roughly the same amount of power under load as a 4070 Super but with better performance. The 9070 XT uses about as much power as a 5070 Ti, with similar performance before you turn ray-tracing on. Power efficiency was a small but consistent drawback for the RX 7000 series compared to GeForce cards, and the 9070 cards mostly erase that disadvantage. AMD is also less stingy with the RAM, giving you 16GB for the price Nvidia charges for 12GB.

Some of the old caveats still apply. Radeons take a bigger performance hit, proportionally, than GeForce cards when ray-tracing effects are enabled. DLSS already looks pretty good and is widely supported, while FSR 3.1/FSR 4 adoption is still relatively low. Nvidia has a nearly monopolistic grip on the dedicated GPU market, which means many apps, AI workloads, and games support its GPUs best/first/exclusively. AMD is always playing catch-up to Nvidia in some respect, and Nvidia keeps progressing quickly enough that it feels like AMD never quite has the opportunity to close the gap.

AMD also doesn’t have an answer for DLSS Multi-Frame Generation. The benefits of that technology are fairly narrow, and you already get most of those benefits with single-frame generation. But it’s still a thing that Nvidia does that AMD doesn’t.

Overall, the RX 9070 cards are both awfully tempting competitors to the GeForce RTX 5070—and occasionally even the 5070 Ti. They’re great at 1440p and decent at 4K. Sure, I’d like to see them priced another $50 or $100 cheaper to well and truly undercut the 5070 and bring 1440p-to-4K performance to a sub-$500 graphics card. It would be nice to see AMD undercut Nvidia’s GPUs as ruthlessly as it undercut Intel’s CPUs nearly a decade ago. But these RDNA4 GPUs have way fewer downsides than previous-generation cards, and they come at a moment of relative weakness for Nvidia. We’ll see if the sales follow.

The good

  • Great 1440p performance and solid 4K performance
  • 16GB of RAM
  • Decisively beats Nvidia’s RTX 5070, including in most ray-traced games
  • RX 9070 XT is competitive with RTX 5070 Ti in non-ray-traced games for less money
  • Both cards match or beat the RX 7900 XT, AMD’s second-fastest card from the last generation
  • Decent power efficiency for the 9070 XT and great power efficiency for the 9070
  • Automated options for tuning overall power use to prioritize either efficiency or performance
  • Reliable 8-pin power connectors available in many cards

The bad

  • Nvidia’s ray-tracing performance is still usually better
  • At $549 and $599, pricing matches but doesn’t undercut the RTX 5070
  • FSR 4 isn’t as widely supported as DLSS and may not be for a while

The ugly

  • Playing the “can you actually buy these for AMD’s advertised prices” game

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.


China aims to recruit top US scientists as Trump tries to kill the CHIPS Act


Tech innovation in the US is likely to stall if Trump ends the CHIPS Act.

On Tuesday, Donald Trump finally made it clear to Congress that he wants to kill the CHIPS and Science Act—a $280 billion bipartisan law Joe Biden signed in 2022 to bring more semiconductor manufacturing into the US and put the country at the forefront of research and innovation.

Trump has long expressed frustration with the high cost of the CHIPS Act, telling Congress on Tuesday that it’s a “horrible, horrible thing” to “give hundreds of billions of dollars” in subsidies to companies that he claimed “take our money” and “don’t spend it,” Reuters reported.

“You should get rid of the CHIPS Act, and whatever is left over, Mr. Speaker, you should use it to reduce debt,” Trump said.

Instead, Trump may shift the US from incentivizing chip manufacturing to punishing firms dependent on imports, threatening a 25 percent tariff on all semiconductor imports that could kick in as soon as April 2, CNBC reported.

The CHIPS Act was supposed to be Biden’s legacy, and because he made it a priority, much of the $52.7 billion in subsidies that Trump is criticizing has already been finalized. In 2022, Biden approved $39 billion in subsidies for semiconductor firms, and in his last weeks in office, he finalized more than $33 billion in awards, Reuters noted.

Among the awardees are leading semiconductor firms, including the Taiwan Semiconductor Manufacturing Co. (TSMC), Micron, Intel, Nvidia, and Samsung Electronics. Although Trump claims the CHIPS Act is one-sided and only serves to benefit firms, according to the Semiconductor Industry Association, the law sparked $450 billion in private investments increasing semiconductor production across 28 states by mid-2024.

With the CHIPS Act officially in Trump’s crosshairs, innovation appears likely to stall the longer that lawmakers remain unsettled on whether the law stays or goes. Some officials worried that Trump might interfere with Biden’s binding agreements with leading firms already holding up their end of the bargain, Reuters reported. For example, Micron plans to invest $100 billion in New York, and TSMC just committed to spending the same over the next four years to expand construction of US chip fabs, which is already well underway.

So far, Commerce Secretary Howard Lutnick has only indicated that he will review the finalized awards, noting that the US wouldn’t be giving TSMC any new awards, Reuters reported.

But the CHIPS Act does much more than provide subsidies to lure leading semiconductor companies into the US. For the first time in decades, the law created a new arm of the National Science Foundation (NSF)—the Directorate of Technology, Innovation, and Partnerships (TIP)—which functions unlike any other part of NSF and now appears existentially threatened.

Designed to take the country’s boldest ideas from basic research to real-world applications as quickly as possible and keep the US competitive, TIP helps advance all NSF research and was supposed to ensure US leadership in breakthrough technologies, including AI, 6G communications, biotech, quantum computing, and advanced manufacturing.

Biden allocated $20 billion through the CHIPS Act to launch TIP and accelerate technology development not just at top firms but also in small research settings across the US. But as soon as the Department of Government Efficiency (DOGE) started making cuts at NSF this year, TIP got hit the hardest. TIP was seemingly targeted not because DOGE deemed it the least consequential but simply because it was the youngest directorate at NSF and had the most workers in transition when Trump took office and DOGE abruptly announced it was terminating all “probationary” federal workers.

It took years to get TIP ready to flip the switch to accelerate tech innovation in the US. Without it, Trump risks setting the US back at a time when competitors like China are racing ahead and wooing US scientists who suddenly may not know if or when their funding is coming, NSF workers and industry groups told Ars.

Without TIP, NSF slows down

Last month, DOGE absolutely scrambled the NSF by forcing arbitrary cuts of so-called probationary employees—mostly young scientists, some of whom were in transition due to promotions. All those cuts were deemed illegal and finally reversed Monday by court order after weeks of internal chaos reportedly stalling or threatening to delay some of the highest-priority research in the US.

“The Office of Personnel Management does not have any authority whatsoever under any statute in the history of the universe to hire and fire employees at another agency,” US District Judge William Alsup said, calling probationary employees the “life blood” of government agencies.

Ars granted NSF workers anonymity to discuss how cuts were impacting research. At TIP, a federal worker told Ars that one of the probationary cuts in particular threatened to do the most damage.

Because TIP is so new, only one worker was trained to code the automated tracking forms that helped decision-makers balance budgets and approve funding for projects across NSF in real time. Ars’ source likened it to holding the only key to the vault of NSF funding. And because TIP is so different from other NSF branches—hiring experts never pulled into NSF before and requiring customized resources to coordinate projects across all NSF fields of research—the insider suggested another government worker couldn’t easily be substituted. It could take as long as two years to hire and train a replacement on TIP’s unique tracking system, the source said, and in the meantime TIP’s (and possibly all of NSF’s) efficiency is likely to be strained.

TIP has never been fully functional, the TIP insider confirmed, and could be choked off right as it starts helping to move the needle on US innovation. “Imagine where we are in two years and where China is in two years in quantum computing, semiconductors, or AI,” the TIP insider warned, pointing to China’s surprisingly advanced AI model, DeepSeek, as an indicator of how quickly tech leadership in global markets can change.

On Monday, NSF emailed all workers to confirm that all probationary workers would be reinstated “right away.” But the damage may already be done as it’s unclear how many workers plan to return. When TIP lost the coder—who was seemingly fired for a technicality while transitioning to a different payscale—NSF workers rushed to recommend the coder on LinkedIn, hoping to help the coder quickly secure another opportunity in industry or academia.

Ars could not reach the coder to confirm whether a return to TIP is in the cards. But Ars’ source at TIP and another NSF worker granted anonymity said that probationary workers may be hesitant to return because they are likely to be hit in any official reductions in force (RIFs) in the future.

“RIFs done the legal way are likely coming down the pipe, so these staff are not coming back to a place of security,” the NSF worker said. “The trust is broken. Even for those that choose to return, they’d be wise to be seeking other opportunities.”

And even losing the TIP coder for a couple of weeks likely slows NSF down at a time when the US seemingly can’t afford to lose a single day.

“We’re going to get murdered” if China sets the standard on 6G or AI, the TIP worker fears.

Rivals and allies wooing top US scientists

On Monday, six research and scientific associations, which described themselves as “leading organizations representing more than 305,000 people in computing, information technology, and technical innovation across US industry, academia, and government,” wrote to Congress demanding protections for the US research enterprise.

The groups warned that funding freezes and worker cuts at NSF—and at other agencies, including the Department of Energy, the National Institute of Standards & Technology, the National Aeronautics and Space Administration, and the National Institutes of Health—“have caused disruption and uncertainty” and threaten “long-lasting negative consequences for our competitiveness, national security, and economic prosperity.”

Deeming America’s technology leadership at risk, the groups pointed out that “in computing alone, a federal investment in research of just over $10 billion annually across 24 agencies and offices underpins a technology sector that contributes more than $2 trillion to the US GDP each year.” Cutting US investment “would be a costly mistake, far outweighing any short-term savings,” the groups warned.

In a separate statement, the Computing Research Association (CRA) called NSF cuts, in particular, a “deeply troubling, self-inflicted setback to US leadership in computing research” that appeared “penny-wise and pound-foolish.”

“NSF is one of the most efficient federal agencies, operating with less than 9 percent overhead costs,” CRA said. “These arbitrary terminations are not justified by performance metrics or efficiency concerns; rather, they represent a drastic and unnecessary weakening of the US research enterprise.”

Many NSF workers are afraid to speak up, the TIP worker told Ars, and industry seems similarly tight-lipped as confusion remains. Only one of the organizations urging Congress to intervene agreed to talk to Ars about the NSF cuts and the significance of TIP. Kathryn Kelley, the executive director of the Coalition for Academic Scientific Computation, confirmed that while members are more aligned with NSF’s Directorate for Computer and Information Science and Engineering and the Office of Advanced Cyberinfrastructure, her group agrees that all NSF cuts are “deeply” concerning.

“We agree that the uncertainty and erosion of trust within the NSF workforce could have long-lasting effects on the agency’s ability to attract and retain top talent, particularly in such specialized areas,” Kelley told Ars. “This situation underscores the need for continued investment in a stable, well-supported workforce to maintain the US’s leadership in science and innovation.”

Other industry sources unwilling to go on the record told Ars that arbitrary cuts largely affecting the youngest scientists at NSF threatened to disrupt a generation of researchers who envisioned long careers advancing US tech. There’s now a danger that those researchers may be lured to other countries heavily investing in science and currently advertising to attract displaced US researchers, including not just rivals like China but also allies like Denmark.

Those sources questioned the wisdom of using the Elon Musk-like approach of breaking the NSF to rebuild it when it’s already one of the leanest organizations in government.

Ars confirmed that some PhD programs have been cancelled, as many academic researchers are already widely concerned about delayed or cancelled grants and generally freaked out about where to get dependable funding outside the NSF. And in industry, some CHIPS Act projects have already been delayed, as companies like Intel try to manage timelines without knowing what’s happening with CHIPS funding, AP News reported.

“Obviously chip manufacturing companies will slow spending on programs they previously thought they were getting CHIPS Act funding for if not cancel those projects outright,” the Semiconductor Advisors, an industry group, forecasted in a statement last month.

The TIP insider told Ars that the CHIPS Act subsidies for large companies that Trump despises mostly fuel manufacturing in the US, while funding for smaller research facilities is what actually advances technology. Reducing efficiency at TIP would likely disrupt those researchers the most, the TIP worker suggested, proclaiming that’s why TIP must be saved at all costs.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
