Radeon

AMD Radeon RX 9070 and 9070 XT review: RDNA 4 fixes a lot of AMD’s problems


For $549 and $599, AMD comes close to knocking out Nvidia’s GeForce RTX 5070.

AMD’s Radeon RX 9070 and 9070 XT are its first cards based on the RDNA 4 GPU architecture. Credit: Andrew Cunningham

AMD is a company that knows a thing or two about capitalizing on a competitor’s weaknesses. It climbed out of its early-2010s nadir partly because its Ryzen CPUs arrived just as Intel’s manufacturing woes began to set in—first with somewhat slower CPUs that were great values for the money, and later with CPUs that were better than anything Intel could offer.

Nvidia’s untrammeled dominance of the consumer graphics card market should also be an opportunity for AMD. Nvidia’s GeForce RTX 50-series graphics cards have given buyers very little to get excited about, with an unreachably expensive high-end 5090 refresh and modest-at-best gains from 5080 and 5070-series cards that are also pretty expensive by historical standards, when you can buy them at all. Tech YouTubers—both the people making the videos and the people leaving comments underneath them—have been almost uniformly unkind to the 50 series, hinting at consumer frustrations and pent-up demand for competitive products from other companies.

Enter AMD’s Radeon RX 9070 XT and RX 9070 graphics cards. These are aimed right at the middle of the current GPU market at the intersection of high sales volume and decent profit margins. They promise good 1440p and entry-level 4K gaming performance and improved power efficiency compared to previous-generation cards, with fixes for long-time shortcomings (ray-tracing performance, video encoding, and upscaling quality) that should, in theory, make them more tempting for people looking to ditch Nvidia.

RX 9070 and 9070 XT specs and speeds

| | RX 9070 XT | RX 9070 | RX 7900 XTX | RX 7900 XT | RX 7900 GRE | RX 7800 XT |
| --- | --- | --- | --- | --- | --- | --- |
| Compute units (Stream processors) | 64 RDNA4 (4,096) | 56 RDNA4 (3,584) | 96 RDNA3 (6,144) | 84 RDNA3 (5,376) | 80 RDNA3 (5,120) | 60 RDNA3 (3,840) |
| Boost Clock | 2,970 MHz | 2,520 MHz | 2,498 MHz | 2,400 MHz | 2,245 MHz | 2,430 MHz |
| Memory Bus Width | 256-bit | 256-bit | 384-bit | 320-bit | 256-bit | 256-bit |
| Memory Bandwidth | 650 GB/s | 650 GB/s | 960 GB/s | 800 GB/s | 576 GB/s | 624 GB/s |
| Memory size | 16GB GDDR6 | 16GB GDDR6 | 24GB GDDR6 | 20GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 |
| Total board power (TBP) | 304 W | 220 W | 355 W | 315 W | 260 W | 263 W |

AMD’s high-level performance promise for the RDNA 4 architecture revolves around big increases in performance per compute unit (CU). An RDNA 4 CU, AMD says, is nearly twice as fast as an RDNA 2 CU in rasterized performance (that is, rendering without ray-tracing effects enabled) and nearly 2.5 times as fast in games with ray-tracing effects enabled. Performance for at least some machine learning workloads also goes way up—twice as fast as RDNA 3 and four times as fast as RDNA 2.

We’ll see this in more detail when we start comparing performance, but AMD seems to have accomplished this goal. Despite having 64 or 56 compute units (for the 9070 XT and 9070, respectively), the cards’ performance often competes with AMD’s last-generation flagships, the RX 7900 XTX and 7900 XT. Those cards came with 96 and 84 compute units, respectively. The 9070 cards are specced a lot more like last generation’s RX 7800 XT—including the 16GB of GDDR6 on a 256-bit memory bus, as AMD still isn’t using GDDR6X or GDDR7—but they’re much faster than the 7800 XT was.
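
To put rough numbers on that, here’s a back-of-the-envelope sketch using the spec-table figures above. It assumes performance scales linearly with compute units and clock speed, which real games only approximate, so treat it as an illustration rather than a measurement:

```python
# If the 9070 XT (64 CUs at 2,970 MHz) roughly matches the 7900 XT
# (84 CUs at 2,400 MHz), how much of the gap do clocks close, and how
# much has to come from per-CU gains? Linear-scaling assumption only.

def implied_per_cu_gain(new_cus, new_mhz, old_cus, old_mhz):
    """Per-CU, per-clock throughput ratio implied by equal performance."""
    return (old_cus * old_mhz) / (new_cus * new_mhz)

print(f"Raw CU deficit: {84 / 64:.2f}x fewer CUs")            # 1.31x
print(f"Clock advantage: {2970 / 2400:.2f}x higher boost")    # 1.24x
print(f"Implied per-CU, per-clock gain: "
      f"{implied_per_cu_gain(64, 2970, 84, 2400):.2f}x")      # ~1.06x
```

Under those assumptions, higher clocks do most of the work of matching the 7900 XT in rasterized games, with the rest coming from RDNA 4’s architectural changes—and the larger per-CU claims show up more clearly once ray tracing enters the picture.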

AMD has dramatically increased the performance per compute unit for RDNA 4. Credit: AMD

The 9070 series also uses a new 4 nm manufacturing process from TSMC, an upgrade from the 7000 series’ 5 nm process (and the 6 nm process used for the separate memory controller dies in higher-end RX 7000-series models that used chiplets). AMD’s GPUs are normally a bit less efficient than Nvidia’s, but the architectural improvements and the new manufacturing process allow AMD to do some important catch-up.

Both of the 9070-series cards we tested were ASRock Steel Legend models with identical designs—we’ll probably see a lot of this from AMD’s partners, since the GPU dies and the 16GB RAM allotments are the same for both models. Both use two 8-pin power connectors; AMD says partners are free to use the 12VHPWR connector if they want, but given Nvidia’s ongoing issues with it, most cards will likely stick with the reliable 8-pin connectors.

AMD doesn’t appear to be making and selling reference designs for the 9070 series the way it did for some RX 7000- and 6000-series GPUs or the way Nvidia does with its Founders Edition cards. From what we’ve seen, 2- or 2.5-slot, triple-fan designs will be the norm, as they are for most midrange GPUs these days.

Testbed notes

We used the same GPU testbed for the Radeon RX 9070 series as we have for our GeForce RTX 50-series reviews.

An AMD Ryzen 7 9800X3D ensures that our graphics cards will be CPU-limited as little as possible. An ample 1050 W power supply, 32GB of DDR5-6000, and an AMD X670E motherboard with the latest BIOS installed round out the hardware. On the software side, we use an up-to-date installation of Windows 11 24H2 and recent GPU drivers for older cards, ensuring that our tests reflect whatever optimizations Microsoft, AMD, Nvidia, and game developers have made since the last generation of GPUs launched.

We have numbers for all of Nvidia’s RTX 50-series GPUs so far, plus most of the 40-series cards, most of AMD’s RX 7000-series cards, and a handful of older GPUs from the RTX 30-series and RX 6000 series. We’ll focus on comparing the 9070 XT and 9070 to other 1440p-to-4K graphics cards since those are the resolutions AMD is aiming at.

Performance

At $549 and $599, the 9070 series is priced to match Nvidia’s $549 RTX 5070 and undercut the $749 RTX 5070 Ti. So we’ll focus on comparing the 9070 series to those cards, plus the top tier of GPUs from the outgoing RX 7000-series.

Some 4K rasterized benchmarks.

Starting at the top with rasterized benchmarks with no ray-tracing effects, the 9070 XT does a good job of standing up to Nvidia’s RTX 5070 Ti, coming within a few frames per second of its performance in all the games we tested (and scoring very similarly in the 3DMark Time Spy Extreme benchmark).

Both cards are considerably faster than the RTX 5070—between 15 and 28 percent for the 9070 XT and between 5 and 13 percent for the regular 9070 (our 5070 scored weirdly low in Horizon Zero Dawn Remastered, so we’d treat those numbers as outliers for now). Both 9070 cards also stack up well next to the RX 7000 series here—the 9070 can usually just about match the performance of the 7900 XT, and the 9070 XT usually beats it by a little. Both cards thoroughly outrun the old RX 7900 GRE, which was AMD’s $549 GPU offering just a year ago.

The 7900 XT does have 20GB of RAM instead of 16GB, which might help its performance in some edge cases. But 16GB is still perfectly generous for a 1440p-to-4K graphics card—the 5070 only offers 12GB, which could end up limiting its performance in some games as RAM requirements continue to rise.

On ray-tracing improvements

Nvidia got a jump on AMD when it introduced hardware-accelerated ray-tracing in the RTX 20-series in 2018. And while these effects were only supported in a few games at the time, many modern games offer at least some kind of ray-traced lighting effects.

AMD caught up a little when it began shipping its own ray-tracing support in the RDNA2 architecture in late 2020, but the issue since then has always been that AMD cards have taken a larger performance hit than GeForce GPUs when these effects are turned on. RDNA3 promised improvements, but our tests still generally showed the same deficit as before.

So we’re looking for two things from RDNA 4’s ray-tracing performance. First, we want the numbers to be higher than they were for comparably priced RX 7000-series GPUs, the same thing we look for in non-ray-traced (or rasterized) rendering performance. Second, we want the size of the performance hit to go down. To pick an example: the RX 7900 GRE could compete with Nvidia’s RTX 4070 Ti Super in games without ray tracing, but it was closer to a non-Super RTX 4070 in ray-traced games. That gap has helped keep AMD’s cards from being across-the-board competitive with Nvidia’s—is that any different now?

Benchmarks for games with ray-tracing effects enabled. Both AMD cards generally keep pace with the 5070 in these tests thanks to RDNA 4’s improvements.

The picture our tests paint is mixed but tentatively positive. The 9070 series and RDNA4 post solid improvements in the Cyberpunk 2077 benchmarks, substantially closing the performance gap with Nvidia. In games where AMD’s cards performed well enough before—here represented by Returnal—performance goes up, but roughly proportionately with rasterized performance. And both 9070 cards still punch below their weight in Black Myth: Wukong, falling substantially behind the 5070 under the punishing Cinematic graphics preset.

So the benefits you see, as with any GPU update, will depend a bit on the game you’re playing. There’s also a possibility that game optimizations and driver updates made with RDNA 4 in mind could boost performance further. We can’t say that AMD has caught all the way up to Nvidia here—in ray-traced games, the 9070 and 9070 XT are both closer to the GeForce RTX 5070 than the 5070 Ti, despite staying closer to the 5070 Ti in rasterized tests—but there is real, measurable improvement, which is what we were looking for.

Power usage

The 9070 series’ performance increases are particularly impressive when you look at the power-consumption numbers. The 9070 comes close to the 7900 XT’s performance while drawing 90 W less power under load, and it beats the RTX 5070 most of the time while using around 30 W less power.

The 9070 XT is a little less impressive on this front—AMD has set its clock speeds high, and that can increase power use disproportionately. The 9070 XT is usually 10 or 15 percent faster than the 9070 but uses 38 percent more power. The XT’s power consumption is similar to the RTX 5070 Ti’s (a GPU it often matches) and the 7900 XT’s (a GPU it always beats), so it’s not egregious, but it doesn’t stand out the way the 9070’s efficiency does.
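
Here’s that efficiency comparison worked through, using the review’s own figures (a minimal sketch: the 10 and 15 percent performance ratios are the rough range quoted above, not exact benchmark results):

```python
# Relative performance-per-watt of the 9070 XT vs. the 9070, using
# their 304 W and 220 W board-power figures and the rough estimate
# that the XT is 10-15 percent faster. Normalized ratios, not real FPS.

def relative_perf_per_watt(perf_ratio, power_a_w, power_b_w):
    """Perf-per-watt of card A relative to card B, given A's perf ratio."""
    return perf_ratio / (power_a_w / power_b_w)

power_xt_w, power_9070_w = 304, 220  # total board power, watts
for perf_ratio in (1.10, 1.15):
    eff = relative_perf_per_watt(perf_ratio, power_xt_w, power_9070_w)
    print(f"{perf_ratio:.2f}x performance -> {eff:.2f}x the 9070's perf/W")
# Output: roughly 0.80x-0.83x, i.e. the XT gives up about a fifth of
# the 9070's efficiency in exchange for its extra speed.
```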

AMD gives 9070 owners a couple of new toggles for power limits, though, which we’ll talk about in the next section.

Experimenting with “Total Board Power”

We don’t normally dabble much with overclocking when we review CPUs or GPUs—we’re happy to leave that to folks at other outlets. But when we review CPUs, we do usually test them with multiple power limits in place. Playing with power limits is easier (and occasionally safer) than actually overclocking, and it often comes with large gains to either performance (a chip that performs much better when given more power to work with) or efficiency (a chip that can run at nearly full speed without using as much power).

Initially, I experimented with the RX 9070’s power limits by accident. AMD sent me one version of the 9070 but exchanged it because of a minor problem the OEM identified with some units early in the production run. I had, of course, already run most of our tests on it, but that’s the way these things go sometimes.

By bumping the regular RX 9070’s TBP up just a bit, you can nudge it closer to 9070 XT-level performance.

The replacement RX 9070 card, an ASRock Steel Legend model, was performing significantly better in our tests, sometimes nearly closing the gap between the 9070 and the XT. It wasn’t until I tested power consumption that I discovered the explanation—by default, it was using a 245 W power limit rather than the AMD-defined 220 W limit. Usually, these kinds of factory tweaks don’t make much of a difference, but for the 9070, this power bump gave it a nice performance boost while still keeping it close to the 250 W power limit of the GeForce RTX 5070.

Both of the 9070-series cards we tested add some power presets to AMD’s Adrenalin app, in the Performance tab under Tuning. These replace and/or complement some of the automated overclocking and undervolting buttons that exist there for older Radeon cards. Clicking Favor Efficiency or Favor Performance ratchets the card’s Total Board Power (TBP) down or up, limiting performance so that the card runs cooler and quieter or allowing the card to consume more power so it can run a bit faster.

The 9070 cards get slightly different performance tuning options in the Adrenalin software. These buttons mostly change the card’s Total Board Power (TBP), making it simple to either improve efficiency or boost performance a bit. Credit: Andrew Cunningham

For this particular ASRock 9070 card, the default TBP is set to 245 W. Selecting “Favor Efficiency” sets it to the default 220 W. You can double-check these values using an app like HWInfo, which displays both the current TBP and the maximum TBP in its Sensors Status window. Clicking the Custom button in the Adrenalin software gives you access to a Power Tuning slider, which for our card allowed us to ratchet the TBP up by up to 10 percent or down by as much as 30 percent.
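
For the curious, here’s what that slider range works out to for this card (a quick sketch; the 245 W base and the +10/−30 percent limits are the values we observed on this particular ASRock sample, and other models will differ):

```python
# TBP range exposed by Adrenalin's Power Tuning slider on our ASRock
# RX 9070 sample: +10 percent to -30 percent around the 245 W factory
# default (AMD's reference TBP for the 9070 is 220 W).

FACTORY_TBP_W = 245

def tuned_tbp(base_w, offset_pct):
    """TBP after applying a percentage offset from the slider."""
    return base_w * (1 + offset_pct / 100)

print(f"Max:   {tuned_tbp(FACTORY_TBP_W, +10):.1f} W")  # 269.5 W
print(f"Stock: {tuned_tbp(FACTORY_TBP_W, 0):.1f} W")    # 245.0 W
print(f"Min:   {tuned_tbp(FACTORY_TBP_W, -30):.1f} W")  # 171.5 W
```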

This is all the firsthand testing we did with the power limits of the 9070 series, though I would assume that adding a bit more power also adds more overclocking headroom (bumping up the power limits is common for GPU overclockers no matter who makes your card). AMD says that some of its partners will ship 9070 XT models set to a roughly 340 W power limit out of the box but acknowledges that “you start seeing diminishing returns as you approach the top of that [power efficiency] curve.”

But it’s worth noting that the driver has another automated set-it-and-forget-it power setting you can easily use to find your preferred balance of performance and power efficiency.

A quick look at FSR4 performance

There’s a toggle in the driver for enabling FSR 4 in FSR 3.1-supporting games. Credit: Andrew Cunningham

One of AMD’s headlining improvements to the RX 90-series is the introduction of FSR 4, a new version of its FidelityFX Super Resolution upscaling algorithm. Like Nvidia’s DLSS and Intel’s XeSS, FSR 4 can take advantage of RDNA 4’s machine learning processing power to do hardware-backed upscaling instead of taking a hardware-agnostic approach as the older FSR versions did. AMD says this will improve upscaling quality, but it also means FSR 4 will only work on RDNA 4 GPUs.

The good news is that FSR 3.1 and FSR 4 are forward- and backward-compatible. Games that have already added FSR 3.1 support can automatically take advantage of FSR 4, and games that support FSR 4 on the 90-series can just run FSR 3.1 on older and non-AMD GPUs.

FSR 4 comes with a small performance hit compared to FSR 3.1 at the same settings, but better overall quality can let you drop to a faster preset like Balanced or Performance and end up with more frames-per-second overall. Credit: Andrew Cunningham

The only game in our current test suite to be compatible with FSR 4 is Horizon Zero Dawn Remastered, and we tested its performance using both FSR 3.1 and FSR 4. In general, we found that FSR 4 improved visual quality at the cost of just a few frames per second when run at the same settings—not unlike using Nvidia’s recently released “transformer model” for DLSS upscaling.

Many games will let you choose which version of FSR you want to use. But for FSR 3.1 games that don’t have a built-in FSR 4 option, there’s a toggle in AMD’s Adrenalin driver you can hit to switch to the better upscaling algorithm.

Even if they come with a performance hit, new upscaling algorithms can still improve performance by making the lower-resolution presets look better. We run all of our testing in “Quality” mode, which generally renders at two-thirds of native resolution and scales up. But if FSR 4 running in Balanced or Performance mode looks the same to your eyes as FSR 3.1 running in Quality mode, you can still end up with a net performance improvement in the end.
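
To make that concrete, here’s what the common FSR presets mean for internal render resolution. The per-axis scale factors below are the widely published FSR values (Quality’s 1.5x matches the “two-thirds of native resolution” noted above); treat them as assumptions rather than official AMD specifications:

```python
# Internal render resolution for each FSR quality preset at 4K output.
# Scale factors are the commonly cited per-axis values for FSR 2/3:
# Quality = 1.5x, Balanced = 1.7x, Performance = 2.0x.

FSR_SCALE = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0}

def render_resolution(native_w, native_h, preset):
    """Resolution the GPU actually renders before FSR upscales it."""
    s = FSR_SCALE[preset]
    return round(native_w / s), round(native_h / s)

for preset in FSR_SCALE:
    w, h = render_resolution(3840, 2160, preset)
    pixel_share = 100 * (w * h) / (3840 * 2160)
    print(f"{preset}: renders {w}x{h} (~{pixel_share:.0f}% of the pixels)")
# Quality: 2560x1440 (~44%), Balanced: 2259x1271 (~35%),
# Performance: 1920x1080 (25%) -- hence the big frame-rate gains from
# stepping down a preset when the upscaler can hide the difference.
```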

RX 9070 or 9070 XT?

Just $50 separates the advertised price of the 9070 from that of the 9070 XT, something both Nvidia and AMD have done in the past that I find a bit annoying. If you have $549 to spend on a graphics card, you can almost certainly scrape together $599. All else being equal, I’d tell most people trying to choose between these two to just spring for the 9070 XT.

That said, availability and retail pricing for these might be all over the place. If your choices are a regular RX 9070 or nothing, or an RX 9070 at $549 and an RX 9070 XT at any price higher than $599, I would just grab a 9070 and not sweat it too much. The two cards aren’t that far apart in performance, especially if you bump the 9070’s TBP up a little bit, and games that are playable on one will be playable at similar settings on the other.

Pretty close to great

If you’re building a 1440p or 4K gaming box, the 9070 series might be the ones to beat right now. Credit: Andrew Cunningham

We’ve got plenty of objective data in here, so I don’t mind saying that I came into this review kind of wanting to like the 9070 and 9070 XT. Nvidia’s 50-series cards have mostly upheld the status quo, and for the last couple of years, the status quo has been sustained high prices and very modest generational upgrades. And who doesn’t like an underdog story?

I think our test results mostly justify my priors. The RX 9070 and 9070 XT are very competitive graphics cards, helped along by a particularly mediocre RTX 5070 refresh from Nvidia. In non-ray-traced games, both cards wipe the floor with the 5070 and come close to competing with the $749 RTX 5070 Ti. In games and synthetic benchmarks with ray-tracing effects on, both cards can usually match or slightly beat the similarly priced 5070, partially (if not entirely) addressing AMD’s longstanding performance deficit here. Neither card comes close to the 5070 Ti in these games, but they’re also not priced like a 5070 Ti.

Just as impressively, the Radeon cards compete with the GeForce cards while consuming similar amounts of power. At stock settings, the RX 9070 uses roughly the same amount of power under load as a 4070 Super but with better performance. The 9070 XT uses about as much power as a 5070 Ti, with similar performance before you turn ray-tracing on. Power efficiency was a small but consistent drawback for the RX 7000 series compared to GeForce cards, and the 9070 cards mostly erase that disadvantage. AMD is also less stingy with the RAM, giving you 16GB for the price Nvidia charges for 12GB.

Some of the old caveats still apply. Radeons still take a proportionally bigger performance hit than GeForce cards when ray-tracing effects are enabled. DLSS already looks pretty good and is widely supported, while FSR 3.1/FSR 4 adoption is still relatively low. Nvidia has a nearly monopolistic grip on the dedicated GPU market, which means many apps, AI workloads, and games support its GPUs best/first/exclusively. AMD is always playing catch-up to Nvidia in some respect, and Nvidia keeps progressing quickly enough that it feels like AMD never quite has the opportunity to close the gap.

AMD also doesn’t have an answer for DLSS Multi-Frame Generation. The benefits of that technology are fairly narrow, and you already get most of them with single-frame generation. But it’s still a thing that Nvidia does and AMD doesn’t.

Overall, the RX 9070 cards are both awfully tempting competitors to the GeForce RTX 5070—and occasionally even the 5070 Ti. They’re great at 1440p and decent at 4K. Sure, I’d like to see them priced another $50 or $100 cheaper to well and truly undercut the 5070 and bring 1440p-to-4K performance to a sub-$500 graphics card. It would be nice to see AMD undercut Nvidia’s GPUs as ruthlessly as it undercut Intel’s CPUs nearly a decade ago. But these RDNA 4 GPUs have way fewer downsides than previous-generation cards, and they come at a moment of relative weakness for Nvidia. We’ll see if the sales follow.

The good

  • Great 1440p performance and solid 4K performance
  • 16GB of RAM
  • Decisively beats Nvidia’s RTX 5070, including in most ray-traced games
  • RX 9070 XT is competitive with RTX 5070 Ti in non-ray-traced games for less money
  • Both cards match or beat the RX 7900 XT, AMD’s second-fastest card from the last generation
  • Decent power efficiency for the 9070 XT and great power efficiency for the 9070
  • Automated options for tuning overall power use to prioritize either efficiency or performance
  • Reliable 8-pin power connectors available in many cards

The bad

  • Nvidia’s ray-tracing performance is still usually better
  • At $549 and $599, pricing matches but doesn’t undercut the RTX 5070
  • FSR 4 isn’t as widely supported as DLSS and may not be for a while

The ugly

  • Playing the “can you actually buy these for AMD’s advertised prices” game

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Details on AMD’s $549 and $599 Radeon RX 9070 GPUs, which aim at Nvidia and 4K

AMD is releasing the first detailed specifications of its next-generation Radeon RX 9070 series GPUs and the RDNA4 graphics architecture today, almost two months after teasing them at CES.

The short version is that these are both upper-midrange graphics cards targeting resolutions of 1440p and 4K and meant to compete mainly with Nvidia’s outgoing and incoming 4070- and 5070-series GeForce GPUs, including the RTX 4070, RTX 5070, RTX 4070 Ti and Ti Super, and the RTX 5070 Ti.

AMD says the RX 9070 will start at $549, the same price as Nvidia’s RTX 5070. The slightly faster 9070 XT starts at $599, $150 less than the RTX 5070 Ti. The cards go on sale March 6, a day after Nvidia’s RTX 5070.

Neither Nvidia nor Intel has managed to keep its GPUs in stores at their announced starting prices so far, though, so how well AMD’s pricing stacks up to Nvidia in the real world may take a few weeks or months to settle out. For its part, AMD says it’s confident that it has enough supply to meet demand, but that’s as specific as the company’s reassurances got.

Specs and speeds: Radeon RX 9070 and 9070 XT

| | RX 9070 XT | RX 9070 | RX 7900 XTX | RX 7900 XT | RX 7900 GRE | RX 7800 XT |
| --- | --- | --- | --- | --- | --- | --- |
| Compute units (Stream processors) | 64 RDNA4 (4,096) | 56 RDNA4 (3,584) | 96 RDNA3 (6,144) | 84 RDNA3 (5,376) | 80 RDNA3 (5,120) | 60 RDNA3 (3,840) |
| Boost Clock | 2,970 MHz | 2,520 MHz | 2,498 MHz | 2,400 MHz | 2,245 MHz | 2,430 MHz |
| Memory Bus Width | 256-bit | 256-bit | 384-bit | 320-bit | 256-bit | 256-bit |
| Memory Bandwidth | 650 GB/s | 650 GB/s | 960 GB/s | 800 GB/s | 576 GB/s | 624 GB/s |
| Memory size | 16GB GDDR6 | 16GB GDDR6 | 24GB GDDR6 | 20GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 |
| Total board power (TBP) | 304 W | 220 W | 355 W | 315 W | 260 W | 263 W |

As is implied by their similar price tags, the 9070 and 9070 XT have more in common than not. Both are based on the same GPU die—the 9070 has 56 of the chip’s compute units enabled, while the 9070 XT has 64. Both cards come with 16GB of RAM (4GB more than the 5070, the same amount as the 5070 Ti) on a 256-bit memory bus, and both use two 8-pin power connectors by default, though the 9070 XT can use significantly more power than the 9070 (304 W, compared to 220 W).

AMD says that its partners are free to make Radeon cards with the 12VHPWR or 12V-2×6 power connectors on them, though given the apparently ongoing issues with the connector, we’d expect most Radeon GPUs to stick with the known quantity that is the 8-pin connector.

AMD says that the 9070 series is made using a 4 nm TSMC manufacturing process and that the chips are monolithic rather than being split up into chiplets as some RX 7000-series cards were. AMD’s commitment to its memory controller chiplets was always hit or miss with the 7000-series—the high-end cards tended to use them, while the lower-end GPUs were usually monolithic—so it’s not clear one way or the other whether this means AMD is giving up on chiplet-based GPUs altogether or if it’s just not using them this time around.

AMD’s FSR 4 upscaling is exclusive to 90-series Radeon GPUs, won’t work on other cards

AMD’s new Radeon RX 90-series cards and the RDNA4 architecture make their official debut on March 5, and a new version of AMD’s FidelityFX Super Resolution (FSR) upscaling technology is coming along with them.

FSR and Nvidia’s Deep Learning Super Sampling (DLSS) upscalers have the same goal: to take a lower-resolution image rendered by your graphics card, bump up the resolution, and fill in the gaps between the natively rendered pixels to make an image that looks close to natively rendered without making the GPU do all that rendering work. These upscalers can make errors, and they won’t always look quite as good as a native-resolution image. But they’re both nice alternatives to living with a blurry, non-native-resolution picture on an LCD or OLED display.

FSR and DLSS are especially useful for older or cheaper 1080p or 1440p-capable GPUs that are connected to a 4K monitor, where you’d otherwise have to decide between a sharp 4K image and a playable frame rate; it’s also useful for hitting higher frame rates at lower resolutions, which can be handy for high-refresh-rate gaming monitors.

But unlike past versions of FSR, FSR 4 upscales images using machine-learning algorithms that run on hardware newly added to RDNA4 and the RX 90-series graphics cards. This mirrors Nvidia’s strategy with DLSS, which has always leveraged the tensor cores found in RTX GPUs to run machine-learning models to achieve superior image quality for upscaled and AI-generated frames. If you don’t have an RDNA4 GPU, you can’t use FSR 4.

What we know about AMD and Nvidia’s imminent midrange GPU launches

The GeForce RTX 5090 and 5080 are both very fast graphics cards—if you can look past the possibility that we may have yet another power-connector-related overheating problem on our hands. But the vast majority of people (including you, discerning and tech-savvy Ars Technica reader) won’t be spending $1,000 or $2,000 (or $2,750 or whatever) on a new graphics card this generation.

No, statistically, you (like most people) will probably end up buying one of the more affordable midrange Nvidia or AMD cards, GPUs that are all slated to begin shipping later this month or early in March.

There has been a spate of announcements on that front this week. Nvidia announced yesterday that the GeForce RTX 5070 Ti, which the company previously introduced at CES, would be available starting on February 20 for $749 and up. The new GPU, like the RTX 5080, looks like a relatively modest upgrade from last year’s RTX 4070 Ti Super. But it ought to at least flirt with affordability for people who are looking to get natively rendered 4K without automatically needing to enable DLSS upscaling to get playable frame rates.

| | RTX 5070 Ti | RTX 4070 Ti Super | RTX 5070 | RTX 4070 Super |
| --- | --- | --- | --- | --- |
| CUDA Cores | 8,960 | 8,448 | 6,144 | 7,168 |
| Boost Clock | 2,452 MHz | 2,610 MHz | 2,512 MHz | 2,475 MHz |
| Memory Bus Width | 256-bit | 256-bit | 192-bit | 192-bit |
| Memory Bandwidth | 896 GB/s | 672 GB/s | 672 GB/s | 504 GB/s |
| Memory size | 16GB GDDR7 | 16GB GDDR6X | 12GB GDDR7 | 12GB GDDR6X |
| TGP | 300 W | 285 W | 250 W | 220 W |

That said, if the launches of the 5090 and 5080 are anything to go by, it may not be easy to find and buy the RTX 5070 Ti for anything close to the listed retail price; early retail listings are not promising on this front. You’ll also be relying exclusively on Nvidia’s partners to deliver unadorned, relatively minimalist MSRP versions of the cards since Nvidia isn’t making a Founders Edition version.

As for the $549 RTX 5070, Nvidia’s website says it’s launching on March 5. But it’s less exciting than the other 50-series cards because it has fewer CUDA cores than the outgoing RTX 4070 Super, leaving it even more reliant on AI-generated frames to improve performance compared to the last generation.

AMD promises “mainstream” 4K gaming with next-gen GPUs as current-gen GPU sales tank

AMD announced its fourth-quarter earnings yesterday, and the numbers were mostly rosy: $7.7 billion in revenue and a 51 percent profit margin, compared to $6.2 billion and 47 percent a year ago. The biggest winner was the data center division, which made $3.9 billion thanks to Epyc server processors and Instinct AI accelerators, and Ryzen CPUs are also selling well, helping the company’s client segment earn $2.3 billion.

But if you were looking for a dark spot, you’d find it in the company’s gaming division, which earned a relatively small $563 million, down 59 percent from a year ago. AMD CEO Lisa Su blamed this on lower sales of both dedicated graphics cards and the company’s “semi-custom” chips (that is, the ones created specifically for game consoles like the Xbox and PlayStation).

Other data sources suggest that the response from GPU buyers to AMD’s Radeon RX 7000 series, launched between late 2022 and early 2024, has been lackluster. The Steam Hardware Survey, a noisy but broadly useful barometer for GPU market share, shows no RX 7000-series models in the top 50; only two of the GPUs (the 7900 XTX and 7700 XT) are used in enough gaming PCs to be mentioned on the list at all, with the others all getting lumped into the “other” category. Jon Peddie Research recently estimated that AMD was selling roughly one dedicated GPU for every seven or eight sold by Nvidia.

But hope springs eternal. Su confirmed on AMD’s earnings call that the new Radeon RX 9000-series cards, announced at CES last month, would be launching in early March. The Radeon RX 9070 and 9070 XT are both aimed toward the middle of the graphics card market, and Su said that both would bring “high-quality gaming to mainstream players.”

An opportunity, maybe

“Mainstream” could mean a lot of things. AMD’s CES slide deck positioned the 9070 series alongside Nvidia’s RTX 4070 Ti ($799) and 4070 Super ($599) and its own RX 7900 XT, RX 7900 GRE, and RX 7800 XT (between $500 and $730 as of this writing), a pretty wide price spread that is still more expensive than an entire high-end console. The GPUs could still rely heavily on upscaling algorithms like AMD’s FidelityFX Super Resolution (FSR) to hit playable frame rates at those resolutions, rather than rendering natively at 4K.

Maze of adapters, software patches gets a dedicated GPU working on a Raspberry Pi

Actually getting the GPU working required patching the Linux kernel to include the open-source AMDGPU driver, which includes Arm support and provides decent support for the RX 460 (Geerling says the card and its Polaris architecture were chosen because they were new enough to be practically useful and to be supported by the AMDGPU driver, old enough that driver support is pretty mature, and because the card is cheap and uses PCIe 3.0). Nvidia’s GPUs generally aren’t really an option for projects like this because the open source drivers lag far behind the ones available for Radeon GPUs.

Once various kernel patches were applied and the kernel was recompiled, installing AMD’s graphics firmware got both graphics output and 3D acceleration working more or less normally.

Despite the games’ age and relative graphical simplicity, running Doom 3 or Tux Racer on the Pi 5’s integrated GPU is a tall order, even at 1080p. The RX 460 was able to run both at 4K, albeit with some settings reduced; Geerling also said that the card rendered the Pi operating system’s UI smoothly at 4K (the Pi’s integrated GPU does support 4K output, but things get framey quickly in our experience, especially when using multiple monitors).

Though a qualified success, anything this hacky is likely to have at least some software problems; Geerling noted that graphics acceleration in the Chromium browser and GPU-accelerated video encoding and decoding support weren’t working properly.

Most Pi owners aren’t going to want to run out and recreate this setup themselves, but it is interesting to see progress when it comes to using dedicated GPUs with Arm CPUs. So far, Arm chips across all major software ecosystems—including Windows, macOS, and Android—have mostly been restricted to using their own integrated GPUs. But if Arm processors are really going to compete with Intel’s and AMD’s in every PC market segment, we’ll eventually need to see better support for external graphics chips.

AMD promises big upscaling improvements and a future-proof API in FSR 3.1

upscale upscaling —

API should help more games get future FSR improvements without a game update.

Last summer, AMD debuted the latest version of its FidelityFX Super Resolution (FSR) upscaling technology. While version 2.x focused mostly on making lower-resolution images look better at higher resolutions, version 3.0 focused on AMD’s “Fluid Motion Frames,” which attempt to boost FPS by generating interpolated frames to insert between the ones that your GPU is actually rendering.

Today, the company is announcing FSR 3.1, which among other improvements decouples the upscaling improvements in FSR 3.x from the Fluid Motion Frames feature. FSR 3.1 will be available “later this year” in games whose developers choose to implement it.

Fluid Motion Frames and Nvidia’s equivalent DLSS Frame Generation usually work best when a game is already running at a high frame rate, and even then can be more prone to mistakes and odd visual artifacts than regular FSR or DLSS upscaling. FSR 3.0 was an all-or-nothing proposition, but version 3.1 should let you pick and choose what features you want to enable.

It also means you can use FSR 3.1 frame generation with other upscalers like DLSS, which is especially useful for 20- and 30-series Nvidia GeForce GPUs that support DLSS upscaling but not DLSS Frame Generation.

“When using FSR 3 Frame Generation with any upscaling quality mode OR with the new ‘Native AA’ mode, it is highly recommended to be always running at a minimum of ~60 FPS before Frame Generation is applied for an optimal high-quality gaming experience and to mitigate any latency introduced by the technology,” wrote AMD’s Alexander Blake-Davies in the post announcing FSR 3.1.
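
That 60 FPS guideline is easier to see with a little arithmetic. The sketch below uses a deliberately simplified model of interpolation-based frame generation—one generated frame per rendered frame, plus roughly one rendered frame of extra delay while the interpolator waits for the “next” frame—so the numbers are illustrative, not measurements of FSR 3:

```python
# Simplified model of interpolated frame generation: the presented
# frame rate roughly doubles, but responsiveness stays tied to the
# base rate, and holding back a rendered frame for interpolation adds
# about one base frametime of latency (before Reflex/Anti-Lag help).

def frame_gen_estimate(base_fps):
    """Return (presented FPS, added latency in ms) under this model."""
    presented_fps = base_fps * 2
    added_latency_ms = 1000 / base_fps  # one held-back rendered frame
    return presented_fps, added_latency_ms

for base in (30, 60, 120):
    fps, lat = frame_gen_estimate(base)
    print(f"{base:>3} FPS base -> ~{fps:.0f} FPS shown, ~{lat:.1f} ms added")
# A 30 FPS base adds ~33 ms and still feels like 30 FPS to your inputs,
# which is why AMD recommends a ~60 FPS floor before enabling it.
```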

Generally, FSR’s upscaling image quality falls a little short of Nvidia’s DLSS, but FSR 2 closed that gap a bit, and FSR 3.1 goes further. AMD highlights two specific improvements: one for “temporal stability,” which will help reduce the flickering and shimmering effect that FSR sometimes introduces, and one for ghosting reduction, which will reduce unintentional blurring effects for fast-moving objects.

The biggest issue with these new FSR improvements is that they need to be implemented on a game-by-game basis. FSR 3.0 was announced in August 2023, and AMD now trumpets that there are 40 “available and upcoming” games that support the technology, of which just 19 are currently available. There are a lot of big-name AAA titles in the list, but that’s still not many compared to the sum total of all PC games, or even the 183 titles that currently support FSR 2.x.

AMD wants to help solve this problem in FSR 3.1 by introducing a stable FSR API for developers, which AMD says “makes it easier for developers to debug and allows forward compatibility with updated versions of FSR.” This may eventually lead to more games getting future FSR improvements for “free,” without the developer’s effort.

AMD didn’t mention any hardware requirements for FSR 3.1, though presumably, the company will still support a reasonably wide range of recent GPUs from AMD, Nvidia, and Intel. FSR 3.0 is formally supported on Radeon RX 5000, 6000, and 7000 cards, Nvidia’s RTX 20-series and newer, and Intel Arc GPUs. It will also bring FSR 3.x features to games that use the Vulkan API, not just DirectX 12, and the Xbox Game Development Kit (GDK) so it can be used in console titles as well as PC games.

Review: AMD Radeon RX 7900 GRE GPU doesn’t quite earn its “7900” label

rabbit season —

New $549 graphics card is the more logical successor to the RX 6800 XT.

ASRock’s take on AMD’s Radeon RX 7900 GRE. Credit: Andrew Cunningham

In July 2023, AMD released a new GPU called the “Radeon RX 7900 GRE” in China. GRE stands for “Golden Rabbit Edition,” a reference to the Chinese zodiac, and while the card was available outside of China in a handful of pre-built OEM systems, AMD didn’t make it widely available at retail.

That changes today—AMD is launching the RX 7900 GRE at US retail for a suggested starting price of $549. This throws it right into the middle of the busy upper-mid-range graphics card market, where it will compete with Nvidia’s $549 RTX 4070 and the $599 RTX 4070 Super, as well as AMD’s own $500 Radeon RX 7800 XT.

We’ve run our typical set of GPU tests on the 7900 GRE to see how it stacks up to the cards AMD and Nvidia are already offering. Is it worth buying a new card relatively late in this GPU generation, when rumors point to new next-gen GPUs from Nvidia, AMD, and Intel before the end of the year? Can the “Golden Rabbit Edition” still offer a good value, even though it’s currently the year of the dragon?

Meet the 7900 GRE

| | RX 7900 XT | RX 7900 GRE | RX 7800 XT | RX 6800 XT | RX 6800 | RX 7700 XT | RX 6700 XT | RX 6750 XT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Compute units (Stream processors) | 84 (5,376) | 80 (5,120) | 60 (3,840) | 72 (4,608) | 60 (3,840) | 54 (3,456) | 40 (2,560) | 40 (2,560) |
| Boost Clock | 2,400 MHz | 2,245 MHz | 2,430 MHz | 2,250 MHz | 2,105 MHz | 2,544 MHz | 2,581 MHz | 2,600 MHz |
| Memory Bus Width | 320-bit | 256-bit | 256-bit | 256-bit | 256-bit | 192-bit | 192-bit | 192-bit |
| Memory Clock | 2,500 MHz | 2,250 MHz | 2,438 MHz | 2,000 MHz | 2,000 MHz | 2,250 MHz | 2,000 MHz | 2,250 MHz |
| Memory size | 20GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 | 12GB GDDR6 | 12GB GDDR6 | 12GB GDDR6 |
| Total board power (TBP) | 315 W | 260 W | 263 W | 300 W | 250 W | 245 W | 230 W | 250 W |

The 7900 GRE slots into AMD’s existing lineup above the RX 7800 XT (currently $500-ish) and below the RX 7900 XT (around $750). Technologically, we’re looking at the same Navi 31 GPU silicon as the 7900 XT and XTX, but with just 80 of the compute units enabled, down from 84 and 96, respectively. The normal benefits of the RDNA3 graphics architecture apply, including hardware-accelerated AV1 video encoding and DisplayPort 2.1 support.

The 7900 GRE also includes four active memory controller die (MCD) chiplets, giving it a narrower 256-bit memory bus and 16GB of memory instead of 20GB—still plenty for modern games, though possibly not quite as future-proof as the 7900 XT. The card uses significantly less power than the 7900 XT and about the same amount as the 7800 XT. That feels a bit weird, intuitively, since slower cards almost always consume less power than faster ones. But it does make some sense; pushing the 7800 XT’s smaller Navi 32 GPU to get higher clock speeds out of it is probably making it run a bit less efficiently than a larger Navi 31 GPU die that isn’t being pushed as hard.

When we reviewed the 7800 XT last year, we noted that its hardware configuration and performance made it seem more like a successor to the (non-XT) Radeon RX 6800, while it just barely managed to match or beat the 6800 XT in our tests. Same deal with the 7900 GRE, which is a more logical successor to the 6800 XT. Bear that in mind when doing generation-over-generation comparisons.

Ryzen 8000G review: An integrated GPU that can beat a graphics card, for a price

The most interesting thing about AMD’s Ryzen 7 8700G CPU is the Radeon 780M GPU that’s attached to it. Credit: Andrew Cunningham

Put me on the short list of people who can get excited about the humble, much-derided integrated GPU.

Yes, most of them are afterthoughts, designed for office desktops and laptops that will spend most of their lives rendering 2D images to a single monitor. But when integrated graphics push forward, it can open up possibilities for people who want to play games but can only afford a cheap desktop (or who have to make do with whatever their parents will pay for, which was the big limiter on my PC gaming experience as a kid).

That, plus an unrelated but complementary interest in building small mini-ITX-based desktops, has kept me interested in AMD’s G-series Ryzen desktop chips (which it sometimes calls “APUs,” to distinguish them from the regular Ryzen CPUs). And the Ryzen 8000G chips are a big upgrade from the 5000G series that immediately preceded them (this makes sense, because as we all know the number 8 immediately follows the number 5).

We’re jumping up an entire processor socket, one CPU architecture, three GPU architectures, and up to a new generation of much faster memory; especially for graphics, it’s a pretty dramatic leap. It’s an integrated GPU that can credibly beat the lowest tier of currently available graphics cards, replacing a $100–$200 part with something a lot more energy-efficient.

As with so many current-gen Ryzen chips, still-elevated pricing for the socket AM5 platform and the DDR5 memory it requires limit the 8000G series’ appeal, at least for now.

From laptop to desktop

AMD’s first Ryzen 8000 desktop processors are what the company used to call “APUs,” a combination of a fast integrated GPU and a reasonably capable CPU. Credit: AMD

The 8000G chips use the same Zen 4 CPU architecture as the Ryzen 7000 desktop chips, but the way the rest of the chip is put together is pretty different. Like past APUs, these are actually laptop silicon (in this case, the Ryzen 7040/8040 series, codenamed Phoenix and Phoenix 2) repackaged for a desktop processor socket.

Generally, the real-world impact of this is pretty mild; in most ways, the 8700G and 8600G will perform a lot like any other Zen 4 CPU with the same number of cores (our benchmarks mostly bear this out). But to the extent that there is a difference, the Phoenix silicon will consistently perform just a little worse, because it has half as much L3 cache. AMD’s Ryzen X3D chips revolve around the performance benefits of tons of cache, so you can see why having less would be detrimental.

The other missing feature from the Ryzen 7000 desktop chips is PCI Express 5.0 support—Ryzen 8000G tops out at PCIe 4.0. This might, maybe, one day in the distant future, eventually lead to some kind of user-observable performance difference. Some recent GPUs use an 8-lane PCIe 4.0 interface instead of the typical 16 lanes, which limits performance slightly. But PCIe 5.0 SSDs remain rare (and PCIe 4.0 peripherals remain extremely fast), so it probably shouldn’t top your list of concerns.

The Ryzen 5 8500G is a lot different from the 8700G and 8600G, since some of the CPU cores in the Phoenix 2 chips are based on Zen 4c rather than Zen 4. These cores have all the same capabilities as regular Zen 4 ones—unlike Intel’s E-cores—but they’re optimized to take up less space rather than hit high clock speeds. They were initially made for servers, where cramming lots of cores into a small amount of space is more important than having a smaller number of faster cores, but AMD is also using them to make some of its low-end consumer chips physically smaller and presumably cheaper to produce. AMD didn’t send us a Ryzen 8500G for review, so we can’t see exactly how Phoenix 2 stacks up in a desktop.

The 8700G and 8600G chips are also the only ones that come with AMD’s “Ryzen AI” feature, the brand AMD is using to refer to processors with a neural processing unit (NPU) included. Sort of like GPUs or video encoding/decoding blocks, these are additional bits built into the chip that handle things that CPUs can’t do very efficiently—in this case, machine learning and AI workloads.

Most PCs still don’t have NPUs, and as such, they are only barely used in current versions of Windows (Windows 11 offers some webcam effects that will take advantage of NPU acceleration, but for now, that’s mostly it). But expect this to change as they become more common and as more AI-accelerated text, image, and video creation and editing capabilities are built into modern operating systems.

The last major difference is the GPU. Ryzen 7000 includes a pair of RDNA2 compute units that perform more or less like Intel’s desktop integrated graphics: good enough to render your desktop on a monitor or two, but not much else. The Ryzen 8000G chips include up to 12 RDNA3 CUs, which—as we’ve already seen in laptops and portable gaming systems like the Asus ROG Ally that use the same silicon—is enough to run most games, if just barely in some cases.

That gives AMD’s desktop APUs a unique niche. You can use them in cases where you can’t afford a dedicated GPU—for a time during the big graphics card shortage in 2020 and 2021, a Ryzen 5700G was actually one of the only ways to build a budget gaming PC. Or you can use them in cases where a dedicated GPU won’t fit, like super-small mini ITX-based desktops.

The main argument that AMD makes is the affordability one, comparing the price of a Ryzen 8700G to the price of an Intel Core i5-13400F and a GeForce GTX 1650 GPU (this card is nearly five years old, but it remains Nvidia’s newest and best GPU available for less than $200).

Let’s check on performance first, and then we’ll revisit pricing.

$329 Radeon 7600 XT brings 16GB of memory to AMD’s latest midrange GPU

more rams —

Updated 7600 XT also bumps up clock speeds and power requirements.

The new Radeon RX 7600 XT mostly just adds extra memory, though clock speeds and power requirements have also increased somewhat. Credit: AMD

Graphics card buyers seem anxious about buying a GPU with enough memory installed, even in midrange graphics cards that aren’t otherwise equipped to play games at super-high resolutions. And while this anxiety tends to be a bit overblown—lots of first- and third-party testing of cards like the GeForce 4060 Ti shows that just a handful of games benefit when all you do is boost GPU memory from 8GB to 16GB—there’s still a market for less-expensive GPUs with big pools of memory, whether you’re playing games that need it or running compute tasks that benefit from it.

That’s the apparent impetus behind AMD’s sole GPU announcement from its slate of CES news today: the $329 Radeon RX 7600 XT, a version of last year’s $269 RX 7600 with twice as much memory, slightly higher clock speeds, and higher power use to go with it.

| | RX 7700 XT | RX 7600 | RX 7600 XT | RX 6600 | RX 6600 XT | RX 6650 XT | RX 6750 XT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Compute units (Stream processors) | 54 (3,456) | 32 (2,048) | 32 (2,048) | 28 (1,792) | 32 (2,048) | 32 (2,048) | 40 (2,560) |
| Boost Clock | 2,544 MHz | 2,600 MHz | 2,760 MHz | 2,490 MHz | 2,589 MHz | 2,635 MHz | 2,600 MHz |
| Memory Bus Width | 192-bit | 128-bit | 128-bit | 128-bit | 128-bit | 128-bit | 192-bit |
| Memory Clock | 2,250 MHz | 2,250 MHz | 2,250 MHz | 1,750 MHz | 2,000 MHz | 2,190 MHz | 2,250 MHz |
| Memory size | 12GB GDDR6 | 8GB GDDR6 | 16GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 | 12GB GDDR6 |
| Total board power (TBP) | 245 W | 165 W | 190 W | 132 W | 160 W | 180 W | 250 W |

The core specifications of the 7600 XT remain the same as the regular 7600: 32 of AMD’s compute units (CUs) based on the RDNA3 GPU architecture and the same memory clock speed attached to the same 128-bit memory bus. But RAM has been boosted from 8GB to 16GB, and the GPU’s clock speeds have been boosted a little, ensuring that the card runs games a little faster than the regular 7600, even in games that don’t care about the extra memory.

Images of AMD’s reference design show a slightly larger card than the regular 7600, with a second 8-pin power connector to provide the extra power (total board power increases from 165 W to 190 W). The only other difference between the cards is DisplayPort 2.1 support—it was optional in the regular RX 7600, but all 7600 XTs will have it. That brings it in line with all the other RX 7000-series GPUs.

  • AMD’s hand-picked benchmarks generally show a mild performance improvement over the RX 7600, though Forza is an outlier. Credit: AMD

  • The 7600 XT’s performance relative to Nvidia’s RTX 4060 is also a little better than the RX 7600’s, thanks to added RAM and higher clocks. But Nvidia should continue to benefit from superior ray-tracing performance in a lot of games. Credit: AMD

  • Testing against the 4060 at 1440p. Note that the longest bars are coming from games with FSR 3 frame generation enabled and that Nvidia’s cards also support DLSS 3. Credit: AMD

  • The complete RX 7000-series lineup. Credit: AMD

AMD’s provided performance figures show the 7600 XT outrunning the regular 7600 by between 5 and 10 percent in most titles, with one—Forza Horizon 5 with ray-tracing turned all the way up—showing a more significant jump of around 40 percent at 1080p and 1440p. Whether that kind of performance jump is worth the extra $60 depends on the games you play and how worried you are about the system requirements in future games.
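
One way to frame that $60 question is cost per unit of performance. Here’s a quick sketch using the list prices and AMD’s own numbers (relative ratios only, and AMD’s benchmarks are hand-picked, so treat the output as rough guidance):

```python
# Is the 7600 XT's ~22 percent price premium ($329 vs. $269) worth a
# 5-10 percent typical uplift, or Forza's ~40 percent best case?
# Lower relative cost-per-frame is better; 1.00x matches the RX 7600.

def relative_cost_per_frame(new_price, old_price, perf_gain_pct):
    """Cost per unit of performance, relative to the older card."""
    return (new_price / old_price) / (1 + perf_gain_pct / 100)

for gain in (5, 10, 40):
    r = relative_cost_per_frame(329, 269, gain)
    print(f"+{gain:>2}% performance -> {r:.2f}x the RX 7600's cost/frame")
# ~1.16x and ~1.11x in typical games, ~0.87x in the Forza best case --
# in most titles you're paying for the extra memory, not for frame rate.
```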

AMD says the RX 7600 XT will be available starting on January 24. Pricing and availability for other RX 7000-series GPUs, including the regular RX 7600, aren’t changing.

2023 was the year that GPUs stood still

In many ways, 2023 was a long-awaited return to normalcy for people who build their own gaming and/or workstation PCs. For the entire year, most mainstream components have been available at or a little under their official retail prices, making it possible to build all kinds of PCs at relatively reasonable prices without worrying about restocks or waiting for discounts. It was a welcome continuation of some GPU trends that started in 2022. Nvidia, AMD, and Intel could release a new GPU, and you could consistently buy that GPU for roughly what it was supposed to cost.

That’s where we get into how frustrating 2023 was for GPU buyers, though. Cards like the GeForce RTX 4090 and Radeon RX 7900 series launched in late 2022 and boosted performance beyond what any last-generation cards could achieve. But 2023’s midrange GPU launches were less ambitious. Not only did they offer the performance of a last-generation GPU, but most of them did it for around the same price as the last-gen GPUs whose performance they matched.

The midrange runs in place

Not every midrange GPU launch will get us a GTX 1060—a card roughly 50 percent faster than its immediate predecessor, and one that beat the previous-generation GTX 980 despite costing just a bit over half as much money. But even if your expectations were low, this year’s midrange GPU launches have been underwhelming.

The worst was probably the GeForce RTX 4060 Ti, which sometimes struggled to beat the card it replaced at around the same price. The 16GB version of the card was particularly maligned since it was $100 more expensive but was only faster than the 8GB version in a handful of games.

The regular RTX 4060 was slightly better news, thanks partly to a $30 price drop from where the RTX 3060 started. The performance gains were small, and a drop from 12GB to 8GB of RAM isn’t the direction we prefer to see things move, but it was still a slightly faster and more efficient card at around the same price. AMD’s Radeon RX 7600, RX 7700 XT, and RX 7800 XT all belong in this same broad category—some improvements, but generally similar performance to previous-generation parts at similar or slightly lower prices. Not an exciting leap for people with aging GPUs who waited out the GPU shortage to get an upgrade.

The best midrange card of the generation—and at $600, we’re definitely stretching the definition of “midrange”—might be the GeForce RTX 4070, which can generally match or slightly beat the RTX 3080 while using much less power and costing $100 less than the RTX 3080’s suggested retail price. That seems like a solid deal once you consider that the RTX 3080 was essentially unavailable at its suggested retail price for most of its life span. But $600 is still a $100 increase from the 2070 and a $220 increase from the 1070, making it tougher to swallow.

In all, 2023 wasn’t the worst time to buy a $300 GPU; that dubious honor belongs to the depths of 2021, when you’d be lucky to snag a GTX 1650 for that price. But “consistently available, basically competent GPUs” are harder to be thankful for the further we get from the GPU shortage.

Marketing gets more misleading

1.7 times faster than the last-gen GPU? Sure, under exactly the right conditions in specific games. Credit: Nvidia

If you just looked at Nvidia’s early performance claims for each of these GPUs, you might think that the RTX 40-series was an exciting jump forward.

But these numbers were only possible in games that supported these GPUs’ newest software gimmick, DLSS Frame Generation (FG). The original DLSS and DLSS 2 improve performance by upsampling the images generated by your GPU, generating interpolated pixels that turn lower-res images into higher-res ones without the blurriness and loss of image quality you’d get from simple upscaling. DLSS FG generates entire frames in between the ones being rendered by your GPU, theoretically providing big frame rate boosts without requiring a powerful GPU.

The technology is impressive when it works, and it’s been successful enough to spawn hardware-agnostic imitators like the AMD-backed FSR 3 and an alternate implementation from Intel that’s still in early stages. But it has notable limitations—mainly, it needs a reasonably high base frame rate to have enough data to generate convincing extra frames, something that these midrange cards may struggle to do. Even when performance is good, it can introduce weird visual artifacts or lose fine detail. The technology isn’t available in all games. And DLSS FG also adds a bit of latency, though this can be offset with latency-reducing technologies like Nvidia Reflex.

As another tool in the performance-enhancing toolbox, DLSS FG is nice to have. But to put it front-and-center in comparisons with previous-generation graphics cards is, at best, painting an overly rosy picture of what upgraders can actually expect.
