GeForce

Nvidia confirms the Switch 2 supports DLSS, G-Sync, and ray tracing

In the wake of the Switch 2 reveal, neither Nintendo nor Nvidia has gone into any detail at all about the exact chip inside the upcoming handheld—technically, we are still not sure what Arm CPU architecture or what GPU architecture it uses, how much RAM we can expect it to have, how fast that memory will be, or exactly how many graphics cores we’re looking at.

But interviews with Nintendo executives and a blog post from Nvidia did at least confirm several of the new chip’s capabilities. The “custom Nvidia processor” has a GPU “with dedicated [Ray-Tracing] Cores and Tensor Cores for stunning visuals and AI-driven enhancements,” writes Nvidia Software Engineering VP Muni Anda.

This means that, as rumored, the Switch 2 will support Nvidia’s Deep Learning Super Sampling (DLSS) upscaling technology, which turns a lower-resolution image into a higher-resolution one with less of a performance impact than native rendering and less loss of quality than traditional upscaling methods. For Switch 2 games that render at 4K or at 1080p and 120 FPS, DLSS will likely be what makes those targets possible.
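As a rough illustration of why upscaling saves so much GPU time, here is a back-of-the-envelope pixel count (a sketch only; the 1080p internal resolution is an assumed example, since Nintendo hasn’t said what internal resolutions Switch 2 games will actually use):

```python
# Rendering natively at 4K vs. rendering at an assumed 1080p internal
# resolution and letting DLSS upscale the result to 4K.
native_4k_pixels = 3840 * 2160       # ~8.3 million pixels per frame
internal_1080p_pixels = 1920 * 1080  # ~2.1 million pixels per frame

# The GPU only has to shade about a quarter of the pixels per frame.
print(native_4k_pixels / internal_1080p_pixels)  # 4.0
```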

The other major Nvidia technology supported by the new Switch is G-Sync, which prevents screen tearing when games are running at variable frame rates. Nvidia notes that G-Sync is only supported in handheld mode and not in docked mode, which could be a limitation of the Switch dock’s HDMI port.

Leaked GeForce RTX 5060 and 5050 specs suggest Nvidia will keep playing it safe

Nvidia has launched all of the GeForce RTX 50-series GPUs that it announced at CES, at least technically—whether you’re buying from Nvidia, AMD, or Intel, it’s nearly impossible to find any of these new cards at their advertised prices right now.

But hope springs eternal, and newly leaked specs for GeForce RTX 5060 and 5050-series cards suggest that Nvidia may be announcing these lower-end cards soon. These kinds of cards are rarely exciting, but Steam Hardware Survey data shows that these xx60 and xx50 cards are what the overwhelming majority of PC gamers are putting in their systems.

The specs, posted by a reliable leaker named Kopite and reported by Tom’s Hardware and others, suggest a refresh that’s in line with what Nvidia has done with most of the 50-series so far. Along with a move to the next-generation Blackwell architecture, the 5060 GPUs each come with a small increase to the number of CUDA cores, a jump from GDDR6 to GDDR7, and an increase in power consumption, but no changes to the amount of memory or the width of the memory bus. The 8GB versions, in particular, will probably continue to be marketed primarily as 1080p cards.

RTX 5060 Ti (leaked) RTX 4060 Ti RTX 5060 (leaked) RTX 4060 RTX 5050 (leaked) RTX 3050
CUDA Cores 4,608 4,352 3,840 3,072 2,560 2,560
Boost Clock Unknown 2,535 MHz Unknown 2,460 MHz Unknown 1,777 MHz
Memory Bus Width 128-bit 128-bit 128-bit 128-bit 128-bit 128-bit
Memory bandwidth Unknown 288 GB/s Unknown 272 GB/s Unknown 224 GB/s
Memory size 8GB or 16GB GDDR7 8GB or 16GB GDDR6 8GB GDDR7 8GB GDDR6 8GB GDDR6 8GB GDDR6
TGP 180 W 160 W 150 W 115 W 130 W 130 W

As with the 4060 Ti, the 5060 Ti is said to come in two versions, one with 8GB of RAM and one with 16GB. One of the 4060 Ti’s problems was that its relatively narrow 128-bit memory bus limited its performance at 1440p and 4K resolutions even with 16GB of RAM—the bandwidth increase from GDDR7 could help with this, but we’ll need to test to see for sure.

AMD Radeon RX 9070 and 9070 XT review: RDNA 4 fixes a lot of AMD’s problems


For $549 and $599, AMD comes close to knocking out Nvidia’s GeForce RTX 5070.

AMD’s Radeon RX 9070 and 9070 XT are its first cards based on the RDNA 4 GPU architecture. Credit: Andrew Cunningham

AMD is a company that knows a thing or two about capitalizing on a competitor’s weaknesses. The company clawed its way back from its early-2010s nadir partly because its Ryzen CPUs arrived just as Intel’s manufacturing woes began to set in, first with somewhat-worse CPUs that were great value for the money and later with CPUs that were better than anything Intel could offer.

Nvidia’s untrammeled dominance of the consumer graphics card market should also be an opportunity for AMD. Nvidia’s GeForce RTX 50-series graphics cards have given buyers very little to get excited about, with an unreachably expensive high-end 5090 refresh and modest-at-best gains from 5080 and 5070-series cards that are also pretty expensive by historical standards, when you can buy them at all. Tech YouTubers—both the people making the videos and the people leaving comments underneath them—have been almost uniformly unkind to the 50 series, hinting at consumer frustrations and pent-up demand for competitive products from other companies.

Enter AMD’s Radeon RX 9070 XT and RX 9070 graphics cards. These are aimed right at the middle of the current GPU market at the intersection of high sales volume and decent profit margins. They promise good 1440p and entry-level 4K gaming performance and improved power efficiency compared to previous-generation cards, with fixes for long-time shortcomings (ray-tracing performance, video encoding, and upscaling quality) that should, in theory, make them more tempting for people looking to ditch Nvidia.

Table of Contents

RX 9070 and 9070 XT specs and speeds

RX 9070 XT RX 9070 RX 7900 XTX RX 7900 XT RX 7900 GRE RX 7800 XT
Compute units (Stream processors) 64 RDNA4 (4,096) 56 RDNA4 (3,584) 96 RDNA3 (6,144) 84 RDNA3 (5,376) 80 RDNA3 (5,120) 60 RDNA3 (3,840)
Boost Clock 2,970 MHz 2,520 MHz 2,498 MHz 2,400 MHz 2,245 MHz 2,430 MHz
Memory Bus Width 256-bit 256-bit 384-bit 320-bit 256-bit 256-bit
Memory Bandwidth 650 GB/s 650 GB/s 960 GB/s 800 GB/s 576 GB/s 624 GB/s
Memory size 16GB GDDR6 16GB GDDR6 24GB GDDR6 20GB GDDR6 16GB GDDR6 16GB GDDR6
Total board power (TBP) 304 W 220 W 355 W 315 W 260 W 263 W

AMD’s high-level performance promise for the RDNA 4 architecture revolves around big increases in performance per compute unit (CU). An RDNA 4 CU, AMD says, is nearly twice as fast in rasterized performance as RDNA 2 (that is, rendering without ray-tracing effects enabled) and nearly 2.5 times as fast as RDNA 2 in games with ray-tracing effects enabled. Performance for at least some machine learning workloads also goes way up—twice as fast as RDNA 3 and four times as fast as RDNA 2.

We’ll see this in more detail when we start comparing performance, but AMD seems to have accomplished this goal. Despite having 64 or 56 compute units (for the 9070 XT and 9070, respectively), the cards’ performance often competes with AMD’s last-generation flagships, the RX 7900 XTX and 7900 XT. Those cards came with 96 and 84 compute units, respectively. The 9070 cards are specced a lot more like last generation’s RX 7800 XT—including the 16GB of GDDR6 on a 256-bit memory bus, as AMD still isn’t using GDDR6X or GDDR7—but they’re much faster than the 7800 XT was.

AMD has dramatically increased the performance-per-compute unit for RDNA 4. AMD

The 9070 series also uses a new 4 nm manufacturing process from TSMC, an upgrade from the 7000 series’ 5 nm process (and the 6 nm process used for the separate memory controller dies in higher-end RX 7000-series models that used chiplets). AMD’s GPUs are normally a bit less efficient than Nvidia’s, but the architectural improvements and the new manufacturing process allow AMD to do some important catch-up.

Both of the 9070 models we tested were ASRock Steel Legend models, and the 9070 and 9070 XT had identical designs—we’ll probably see a lot of this from AMD’s partners since the GPU dies and the 16GB RAM allotments are the same for both models. Both use two 8-pin power connectors; AMD says partners are free to use the 12-pin power connector if they want, but given Nvidia’s ongoing issues with it, most cards will likely stick with the reliable 8-pin connectors.

AMD doesn’t appear to be making and selling reference designs for the 9070 series the way it did for some RX 7000 and 6000-series GPUs or the way Nvidia does with its Founders Edition cards. From what we’ve seen, 2 or 2.5-slot, triple-fan designs will be the norm, the way they are for most midrange GPUs these days.

Testbed notes

We used the same GPU testbed for the Radeon RX 9070 series as we have for our GeForce RTX 50-series reviews.

An AMD Ryzen 7 9800X3D ensures that our graphics cards will be CPU-limited as little as possible. An ample 1050 W power supply, 32GB of DDR5-6000, and an AMD X670E motherboard with the latest BIOS installed round out the hardware. On the software side, we use an up-to-date installation of Windows 11 24H2 and recent GPU drivers for older cards, ensuring that our tests reflect whatever optimizations Microsoft, AMD, Nvidia, and game developers have made since the last generation of GPUs launched.

We have numbers for all of Nvidia’s RTX 50-series GPUs so far, plus most of the 40-series cards, most of AMD’s RX 7000-series cards, and a handful of older GPUs from the RTX 30-series and RX 6000 series. We’ll focus on comparing the 9070 XT and 9070 to other 1440p-to-4K graphics cards since those are the resolutions AMD is aiming at.

Performance

At $549 and $599, the 9070 series is priced to match Nvidia’s $549 RTX 5070 and undercut the $749 RTX 5070 Ti. So we’ll focus on comparing the 9070 series to those cards, plus the top tier of GPUs from the outgoing RX 7000-series.

Some 4K rasterized benchmarks.

Starting at the top with rasterized benchmarks with no ray-tracing effects, the 9070 XT does a good job of standing up to Nvidia’s RTX 5070 Ti, coming within a few frames per second of its performance in all the games we tested (and scoring very similarly in the 3DMark Time Spy Extreme benchmark).

Both cards are considerably faster than the RTX 5070—between 15 and 28 percent for the 9070 XT and between 5 and 13 percent for the regular 9070 (our 5070 scored weirdly low in Horizon Zero Dawn Remastered, so we’d treat those numbers as outliers for now). Both 9070 cards also stack up well next to the RX 7000 series here—the 9070 can usually just about match the performance of the 7900 XT, and the 9070 XT usually beats it by a little. Both cards thoroughly outrun the old RX 7900 GRE, which was AMD’s $549 GPU offering just a year ago.

The 7900 XT does have 20GB of RAM instead of 16GB, which might help its performance in some edge cases. But 16GB is still perfectly generous for a 1440p-to-4K graphics card—the 5070 only offers 12GB, which could end up limiting its performance in some games as RAM requirements continue to rise.

On ray-tracing improvements

Nvidia got a jump on AMD when it introduced hardware-accelerated ray-tracing in the RTX 20-series in 2018. And while these effects were only supported in a few games at the time, many modern games offer at least some kind of ray-traced lighting effects.

AMD caught up a little when it began shipping its own ray-tracing support in the RDNA2 architecture in late 2020, but the issue since then has always been that AMD cards have taken a larger performance hit than GeForce GPUs when these effects are turned on. RDNA3 promised improvements, but our tests still generally showed the same deficit as before.

So we’re looking for two things with RDNA4’s ray-tracing performance. First, we want the numbers to be higher than they were for comparably priced RX 7000-series GPUs, the same thing we look for in non-ray-traced (or rasterized) rendering performance. Second, we want the size of the performance hit to go down. To pick an example: the RX 7900 GRE could compete with Nvidia’s RTX 4070 Ti Super in games without ray tracing, but it was closer to a non-Super RTX 4070 in ray-traced games. That gap has helped keep AMD’s cards from being across-the-board competitive with Nvidia’s—is that any different now?

Benchmarks for games with ray-tracing effects enabled. Both AMD cards generally keep pace with the 5070 in these tests thanks to RDNA 4’s improvements.

The picture our tests paint is mixed but tentatively positive. The 9070 series and RDNA4 post solid improvements in the Cyberpunk 2077 benchmarks, substantially closing the performance gap with Nvidia. In games where AMD’s cards performed well enough before—here represented by Returnal—performance goes up, but roughly proportionately with rasterized performance. And both 9070 cards still punch below their weight in Black Myth: Wukong, falling substantially behind the 5070 under the punishing Cinematic graphics preset.

So the benefits you see, as with any GPU update, will depend a bit on the game you’re playing. There’s also a possibility that game optimizations and driver updates made with RDNA4 in mind could boost performance further. We can’t say that AMD has caught all the way up to Nvidia here—the 9070 and 9070 XT are both closer to the GeForce RTX 5070 than the 5070 Ti, despite coming closer to the 5070 Ti in rasterized tests—but there is real, measurable improvement here, which is what we were looking for.

Power usage

The 9070 series’ performance increases are particularly impressive when you look at the power-consumption numbers. The 9070 comes close to the 7900 XT’s performance but uses 90 W less power under load. It beats the RTX 5070 most of the time but uses around 30 W less power.

The 9070 XT is a little less impressive on this front—AMD has set clock speeds pretty high, and this can increase power use disproportionately. The 9070 XT is usually 10 or 15 percent faster than the 9070 but uses 38 percent more power. The XT’s power consumption is similar to the RTX 5070 Ti’s (a GPU it often matches) and the 7900 XT’s (a GPU it always beats), so it’s not too egregious, but it’s not as standout as the 9070’s.
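Put those two figures together and the efficiency gap is easy to quantify; here’s a rough back-of-the-envelope calculation using the midpoint of the numbers above (illustrative only, since exact results vary by game):

```python
# Rough performance-per-watt comparison implied by the figures above:
# the 9070 XT is ~10-15 percent faster than the 9070 but uses ~38 percent
# more power at stock settings.
xt_relative_performance = 1.125  # midpoint of the 10-15 percent range
xt_relative_power = 1.38

# ~0.82, i.e., the XT delivers roughly 18 percent less performance per watt.
print(xt_relative_performance / xt_relative_power)
```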

AMD gives 9070 owners a couple of new toggles for power limits, though, which we’ll talk about in the next section.

Experimenting with “Total Board Power”

We don’t normally dabble much with overclocking when we review CPUs or GPUs—we’re happy to leave that to folks at other outlets. But when we review CPUs, we do usually test them with multiple power limits in place. Playing with power limits is easier (and occasionally safer) than actually overclocking, and it often comes with large gains to either performance (a chip that performs much better when given more power to work with) or efficiency (a chip that can run at nearly full speed without using as much power).

Initially, I experimented with the RX 9070’s power limits by accident. AMD sent me one version of the 9070 but exchanged it because of a minor problem the OEM identified with some units early in the production run. I had, of course, already run most of our tests on it, but that’s the way these things go sometimes.

By bumping the regular RX 9070’s TBP up just a bit, you can nudge it closer to 9070 XT-level performance.

The replacement RX 9070 card, an ASRock Steel Legend model, was performing significantly better in our tests, sometimes nearly closing the gap between the 9070 and the XT. It wasn’t until I tested power consumption that I discovered the explanation—by default, it was using a 245 W power limit rather than the AMD-defined 220 W limit. Usually, these kinds of factory tweaks don’t make much of a difference, but for the 9070, this power bump gave it a nice performance boost while still keeping it close to the 250 W power limit of the GeForce RTX 5070.

The 90-series cards we tested both add some power presets to AMD’s Adrenalin app in the Performance tab under Tuning. These replace and/or complement some of the automated overclocking and undervolting buttons that exist here for older Radeon cards. Clicking Favor Efficiency or Favor Performance can ratchet the card’s Total Board Power (TBP) up or down, limiting performance so that the card runs cooler and quieter or allowing the card to consume more power so it can run a bit faster.

The 9070 cards get slightly different performance tuning options in the Adrenalin software. These buttons mostly change the card’s Total Board Power (TBP), making it simple to either improve efficiency or boost performance a bit. Credit: Andrew Cunningham

For this particular ASRock 9070 card, the default TBP is set to 245 W. Selecting “Favor Efficiency” drops it to the AMD-defined 220 W. You can double-check these values using an app like HWInfo, which displays both the current TBP and the maximum TBP in its Sensors Status window. Clicking the Custom button in the Adrenalin software gives you access to a Power Tuning slider, which for our card allowed us to ratchet the TBP up by as much as 10 percent or down by as much as 30 percent.
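For concreteness, here’s what that slider range works out to in watts on our particular card (a quick sketch; the exact percentages and default TBP may differ on other 9070 models):

```python
# The ASRock Steel Legend RX 9070 we tested defaults to a 245 W TBP, and its
# Power Tuning slider allows +10 percent up or -30 percent down from there.
default_tbp_w = 245

max_tbp_w = default_tbp_w * 1.10  # roughly 270 W at the top of the slider
min_tbp_w = default_tbp_w * 0.70  # roughly 172 W at the bottom

print(max_tbp_w, min_tbp_w)
```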

This is all the firsthand testing we did with the power limits of the 9070 series, though I would assume that adding a bit more power also adds more overclocking headroom (bumping up the power limits is common for GPU overclockers no matter who makes your card). AMD says that some of its partners will ship 9070 XT models set to a roughly 340 W power limit out of the box but acknowledges that “you start seeing diminishing returns as you approach the top of that [power efficiency] curve.”

But it’s worth noting that the driver has another automated set-it-and-forget-it power setting you can easily use to find your preferred balance of performance and power efficiency.

A quick look at FSR4 performance

There’s a toggle in the driver for enabling FSR 4 in FSR 3.1-supporting games. Credit: Andrew Cunningham

One of AMD’s headlining improvements to the RX 90-series is the introduction of FSR 4, a new version of its FidelityFX Super Resolution upscaling algorithm. Like Nvidia’s DLSS and Intel’s XeSS, FSR 4 can take advantage of RDNA 4’s machine learning processing power to do hardware-backed upscaling instead of taking a hardware-agnostic approach as the older FSR versions did. AMD says this will improve upscaling quality, but it also means FSR4 will only work on RDNA 4 GPUs.

The good news is that FSR 3.1 and FSR 4 are forward- and backward-compatible. Games that have already added FSR 3.1 support can automatically take advantage of FSR 4, and games that support FSR 4 on the 90-series can just run FSR 3.1 on older and non-AMD GPUs.

FSR 4 comes with a small performance hit compared to FSR 3.1 at the same settings, but better overall quality can let you drop to a faster preset like Balanced or Performance and end up with more frames-per-second overall. Credit: Andrew Cunningham

The only game in our current test suite to be compatible with FSR 4 is Horizon Zero Dawn Remastered, and we tested its performance using both FSR 3.1 and FSR 4. In general, we found that FSR 4 improved visual quality at the cost of just a few frames per second when run at the same settings—not unlike using Nvidia’s recently released “transformer model” for DLSS upscaling.

Many games will let you choose which version of FSR you want to use. But for FSR 3.1 games that don’t have a built-in FSR 4 option, there’s a toggle in AMD’s Adrenalin driver you can hit to switch to the better upscaling algorithm.

Even if they come with a performance hit, new upscaling algorithms can still improve performance by making the lower-resolution presets look better. We run all of our testing in “Quality” mode, which generally renders at two-thirds of native resolution and scales up. But if FSR 4 running in Balanced or Performance mode looks the same to your eyes as FSR 3.1 running in Quality mode, you can still end up with a net performance improvement in the end.
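To put numbers to that, here’s a small sketch of the internal render resolutions that a two-thirds “Quality” scale factor implies (the exact ratios for each upscaling preset can vary slightly from game to game):

```python
# "Quality" upscaling renders at roughly two-thirds of the output resolution
# on each axis and then scales up to the target.
def quality_internal_resolution(width, height, scale=2 / 3):
    return round(width * scale), round(height * scale)

print(quality_internal_resolution(3840, 2160))  # 4K target    -> (2560, 1440)
print(quality_internal_resolution(2560, 1440))  # 1440p target -> (1707, 960)
```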

RX 9070 or 9070 XT?

Just $50 separates the advertised price of the 9070 from that of the 9070 XT, something both Nvidia and AMD have done in the past that I find a bit annoying. If you have $549 to spend on a graphics card, you can almost certainly scrape together $599 for a graphics card. All else being equal, I’d tell most people trying to choose one of these to just spring for the 9070 XT.

That said, availability and retail pricing for these might be all over the place. If your choices are a regular RX 9070 or nothing, or an RX 9070 at $549 and an RX 9070 XT at any price higher than $599, I would just grab a 9070 and not sweat it too much. The two cards aren’t that far apart in performance, especially if you bump the 9070’s TBP up a little bit, and games that are playable on one will be playable at similar settings on the other.

Pretty close to great

If you’re building a 1440p or 4K gaming box, the 9070 series might be the ones to beat right now. Credit: Andrew Cunningham

We’ve got plenty of objective data in here, so I don’t mind saying that I came into this review kind of wanting to like the 9070 and 9070 XT. Nvidia’s 50-series cards have mostly upheld the status quo, and for the last couple of years, the status quo has been sustained high prices and very modest generational upgrades. And who doesn’t like an underdog story?

I think our test results mostly justify my priors. The RX 9070 and 9070 XT are very competitive graphics cards, helped along by a particularly mediocre RTX 5070 refresh from Nvidia. In non-ray-traced games, both cards wipe the floor with the 5070 and come close to competing with the $749 RTX 5070 Ti. In games and synthetic benchmarks with ray-tracing effects on, both cards can usually match or slightly beat the similarly priced 5070, partially (if not entirely) addressing AMD’s longstanding performance deficit here. Neither card comes close to the 5070 Ti in these games, but they’re also not priced like a 5070 Ti.

Just as impressively, the Radeon cards compete with the GeForce cards while consuming similar amounts of power. At stock settings, the RX 9070 uses roughly the same amount of power under load as a 4070 Super but with better performance. The 9070 XT uses about as much power as a 5070 Ti, with similar performance before you turn ray-tracing on. Power efficiency was a small but consistent drawback for the RX 7000 series compared to GeForce cards, and the 9070 cards mostly erase that disadvantage. AMD is also less stingy with the RAM, giving you 16GB for the price Nvidia charges for 12GB.

Some of the old caveats still apply. Radeons still take a bigger performance hit, proportionally, than GeForce cards when ray-tracing effects are enabled. DLSS already looks pretty good and is widely supported, while FSR 3.1/FSR 4 adoption is still relatively low. Nvidia has a nearly monopolistic grip on the dedicated GPU market, which means many apps, AI workloads, and games support its GPUs best/first/exclusively. AMD is always playing catch-up to Nvidia in some respect, and Nvidia keeps progressing quickly enough that it feels like AMD never quite has the opportunity to close the gap.

AMD also doesn’t have an answer for DLSS Multi-Frame Generation. The benefits of that technology are fairly narrow, and you already get most of those benefits with single-frame generation. But it’s still a thing that Nvidia does that AMD doesn’t.

Overall, the RX 9070 cards are both awfully tempting competitors to the GeForce RTX 5070—and occasionally even the 5070 Ti. They’re great at 1440p and decent at 4K. Sure, I’d like to see them priced another $50 or $100 cheaper to well and truly undercut the 5070 and bring 1440p-to-4K performance to a sub-$500 graphics card. It would be nice to see AMD undercut Nvidia’s GPUs as ruthlessly as it undercut Intel’s CPUs nearly a decade ago. But these RDNA4 GPUs have way fewer downsides than previous-generation cards, and they come at a moment of relative weakness for Nvidia. We’ll see if the sales follow.

The good

  • Great 1440p performance and solid 4K performance
  • 16GB of RAM
  • Decisively beats Nvidia’s RTX 5070, including in most ray-traced games
  • RX 9070 XT is competitive with RTX 5070 Ti in non-ray-traced games for less money
  • Both cards match or beat the RX 7900 XT, AMD’s second-fastest card from the last generation
  • Decent power efficiency for the 9070 XT and great power efficiency for the 9070
  • Automated options for tuning overall power use to prioritize either efficiency or performance
  • Reliable 8-pin power connectors available in many cards

The bad

  • Nvidia’s ray-tracing performance is still usually better
  • At $549 and $599, pricing matches but doesn’t undercut the RTX 5070
  • FSR 4 isn’t as widely supported as DLSS and may not be for a while

The ugly

  • Playing the “can you actually buy these for AMD’s advertised prices” game

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Details on AMD’s $549 and $599 Radeon RX 9070 GPUs, which aim at Nvidia and 4K

AMD is releasing the first detailed specifications of its next-generation Radeon RX 9070 series GPUs and the RDNA4 graphics architecture today, almost two months after teasing them at CES.

The short version is that these are both upper-midrange graphics cards targeting resolutions of 1440p and 4K and meant to compete mainly with Nvidia’s incoming and outgoing 4070- and 5070-series GeForce GPUs, including the RTX 4070, RTX 5070, RTX 4070 Ti and Ti Super, and the RTX 5070 Ti.

AMD says the RX 9070 will start at $549, the same price as Nvidia’s RTX 5070. The slightly faster 9070 XT starts at $599, $150 less than the RTX 5070 Ti. The cards go on sale March 6, a day after Nvidia’s RTX 5070.

Neither Nvidia nor Intel has managed to keep its GPUs in stores at their announced starting prices so far, though, so how well AMD’s pricing stacks up against Nvidia’s in the real world may take a few weeks or months to settle out. For its part, AMD says it’s confident that it has enough supply to meet demand, but that’s as specific as the company’s reassurances got.

Specs and speeds: Radeon RX 9070 and 9070 XT

RX 9070 XT RX 9070 RX 7900 XTX RX 7900 XT RX 7900 GRE RX 7800 XT
Compute units (Stream processors) 64 RDNA4 (4,096) 56 RDNA4 (3,584) 96 RDNA3 (6,144) 84 RDNA3 (5,376) 80 RDNA3 (5,120) 60 RDNA3 (3,840)
Boost Clock 2,970 MHz 2,520 MHz 2,498 MHz 2,400 MHz 2,245 MHz 2,430 MHz
Memory Bus Width 256-bit 256-bit 384-bit 320-bit 256-bit 256-bit
Memory Bandwidth 650 GB/s 650 GB/s 960 GB/s 800 GB/s 576 GB/s 624 GB/s
Memory size 16GB GDDR6 16GB GDDR6 24GB GDDR6 20GB GDDR6 16GB GDDR6 16GB GDDR6
Total board power (TBP) 304 W 220 W 355 W 315 W 260 W 263 W

As is implied by their similar price tags, the 9070 and 9070 XT have more in common than not. Both are based on the same GPU die—the 9070 has 56 of the chip’s compute units enabled, while the 9070 XT has 64. Both cards come with 16GB of RAM (4GB more than the 5070, the same amount as the 5070 Ti) on a 256-bit memory bus, and both use two 8-pin power connectors by default, though the 9070 XT can use significantly more power than the 9070 (304 W, compared to 220 W).

AMD says that its partners are free to make Radeon cards with the 12VHPWR or 12V-2×6 power connectors on them, though given the apparently ongoing issues with the connector, we’d expect most Radeon GPUs to stick with the known quantity that is the 8-pin connector.

AMD says that the 9070 series is made using a 4 nm TSMC manufacturing process and that the chips are monolithic rather than being split up into chiplets as some RX 7000-series cards were. AMD’s use of memory controller chiplets was always hit or miss in the 7000-series—the high-end cards tended to use them, while the lower-end GPUs were usually monolithic—so it’s not clear one way or the other whether this means AMD is giving up on chiplet-based GPUs altogether or if it’s just not using them this time around.

Nvidia GeForce RTX 5070 Ti review: An RTX 4080 for $749, at least in theory


may the odds be ever in your favor

It’s hard to review a product if you don’t know what it will actually cost!

The Asus Prime GeForce RTX 5070 Ti. Credit: Andrew Cunningham

Nvidia’s RTX 50-series makes its first foray below the $1,000 mark starting this week, with the $749 RTX 5070 Ti—at least in theory.

The third-fastest card in the Blackwell GPU lineup, the 5070 Ti is still far from “reasonably priced” by historical standards (the 3070 Ti was $599 at launch). But it’s also $50 cheaper and a fair bit faster than the outgoing 4070 Ti Super and the older 4070 Ti. These are steps in the right direction, if small ones.

We’ll talk more about its performance shortly, but at a high level, the 5070 Ti’s performance falls in the same general range as the 4080 Super and the original RTX 4080, a card that launched for $1,199 just over two years ago. And it’s probably your floor for consistently playable native 4K gaming for those of you who don’t want to rely on DLSS or other upscaling to hit that resolution (it’s also probably all the GPU that most people will need for high-FPS 1440p, if that’s more your speed).

But it’s a card I’m ambivalent about! It’s close to 90 percent as fast as a 5080 for 75 percent of the price, at least if you go by Nvidia’s minimum list prices, which for the 5090 and 5080 have been mostly fictional so far. If you can find it at that price—and that’s a big “if,” since every $749 model is already out of stock across the board at Newegg—and you’re desperate to upgrade or are building a brand-new 4K gaming PC, you could do worse. But I wouldn’t spend more than $749 on it, and it might be worth waiting to see what AMD’s first 90-series Radeon cards look like in a couple weeks before you jump in.

Meet the GeForce RTX 5070 Ti

RTX 5080 RTX 4080 Super RTX 5070 Ti RTX 4070 Ti Super RTX 4070 Ti RTX 5070
CUDA Cores 10,752 10,240 8,960 8,448 7,680 6,144
Boost Clock 2,617 MHz 2,550 MHz 2,452 MHz 2,610 MHz 2,610 MHz 2,512 MHz
Memory Bus Width 256-bit 256-bit 256-bit 256-bit 192-bit 192-bit
Memory Bandwidth 960 GB/s 736 GB/s 896 GB/s 672 GB/s 504 GB/s 672 GB/s
Memory size 16GB GDDR7 16GB GDDR6X 16GB GDDR7 16GB GDDR6X 12GB GDDR6X 12GB GDDR7
TGP 360 W 320 W 300 W 285 W 285 W 250 W

Nvidia isn’t making a Founders Edition version of the 5070 Ti, so this time around our review unit is an Asus Prime GeForce RTX 5070 Ti provided by Asus and Nvidia. These third-party cards will deviate a little from the stock specs listed above, but factory overclocks tend to be exceedingly mild, done mostly so the GPU manufacturer can slap a big “overclocked” badge somewhere on the box. We tested this Asus card with its BIOS switch set to “performance” mode, which elevates the boost clock by an entire 30 MHz; you don’t need to be a math whiz to guess that a 1.2 percent overclock is not going to change performance much.

Compared to the 4070 Ti Super, the 5070 Ti brings two things to the table: a roughly 6 percent increase in CUDA cores and a 33 percent increase in memory bandwidth, courtesy of the switch from GDDR6X to GDDR7. The original 4070 Ti had even fewer CUDA cores and, most importantly for its 4K performance, included just 12GB of memory on a 192-bit bus.

The 5070 Ti is based on the same GB203 GPU silicon as the 5080 series, but with 1,792 CUDA cores disabled. Even so, there are a lot of similarities between the two, including the 16GB bank of GDDR7 and the 256-bit memory bus. The gap is nothing like the yawning one between the RTX 5090 and the RTX 5080, and the two cards’ similar-ish specs meant they weren’t too far away from each other in our testing. The 5070 Ti’s 300 W power requirement is also a bit lower than the 5080’s 360 W, but it’s pretty close to the 4080 and 4080 Super’s 320 W; in practice, the 5070 Ti draws about as much as the 4080 cards do under load.

Asus’ design for its Prime RTX 5070 Ti is an inoffensive 2.5-slot, triple-fan card that should fit without a problem in most builds. Credit: Andrew Cunningham

As a Blackwell GPU, the 5070 Ti also supports Nvidia’s most-hyped addition to the 50-series: support for DLSS 4 and Multi-Frame Generation (MFG). We’ve already covered this in our 5090 and 5080 reviews, but the short version is that MFG works exactly like Frame Generation did in the 40-series, except that it can now insert up to three AI-generated frames in between natively rendered frames instead of just one.
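A quick bit of arithmetic shows why Nvidia’s MFG-on numbers look so dramatic (a simplified sketch; it ignores the small overhead of generating the extra frames):

```python
# With up to three AI-generated frames inserted between each pair of natively
# rendered frames, the displayed frame rate can be up to 4x the rendered rate.
def displayed_fps(rendered_fps, generated_per_rendered=3):
    return rendered_fps * (1 + generated_per_rendered)

print(displayed_fps(60))  # 240 fps shown on screen
print(displayed_fps(30))  # 120 fps shown, but input lag still tracks ~30 fps
```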

Especially if you’re already running at a reasonably high frame rate, this can make things look a lot smoother on a high-refresh-rate monitor without introducing distractingly excessive lag or weird rendering errors. The feature is mainly controversial because Nvidia is comparing 50-series performance numbers with DLSS MFG enabled to older 40-series cards without DLSS MFG to make the 50-series cards seem a whole lot faster than they actually are.

We’ll publish some frame-generation numbers in this review, both using DLSS and (for AMD cards) FSR. But per usual, we’ll continue to focus on natively rendered performance—more relevant for all the games out there that don’t support frame generation or don’t benefit much from it, and more relevant because your base performance dictates how good your generated frames will look and feel anyway.

Testbed notes

We tested the 5070 Ti in the same updated testbed and with the same updated suite of games that we started using in our RTX 5090 review. The heart of the build is an AMD Ryzen 7 9800X3D, ensuring that our numbers are limited as little as possible by CPU speed.

Per usual, we prioritize testing GPUs at resolutions that we think most people will use them for. For the 5070 Ti, that means both 4K and 1440p—this card is arguably still overkill for 1440p, but if you’re trying to hit 144 or 240 Hz (or even more) on a monitor, there’s a good case to be made for it. We also use a mix of ray-traced and non-ray-traced games. For the games we test with upscaling enabled, we use DLSS on Nvidia cards and the newest supported version of FSR (usually 2.x or 3.x) for AMD cards.

Though we’ve tested and re-tested multiple cards with recent drivers in our updated testbed, we don’t have a 4070 Ti Super, 4070 Ti, or 3070 Ti available to test with. We’ve provided some numbers for those GPUs from past reviews; these are from a PC running older drivers and a Ryzen 7 7800X3D instead of a 9800X3D, and we’ve put asterisks next to them in our charts. They should still paint a reasonably accurate picture of the older GPUs’ relative performance, but take them with that small grain of salt.

Performance and power

Despite including fewer CUDA cores than either version of the 4080, the 5070 Ti keeps pace with both 4080 cards almost perfectly, thanks to some combination of architectural improvements and the increase in memory bandwidth. In most of our tests, it landed in the narrow strip right in between the 4080 and the 4080 Super, and its power consumption under load was also almost identical.

Benchmarks with DLSS/FSR and/or frame generation enabled.

In every way that matters, the 5070 Ti is essentially an RTX 4080 that also supports DLSS Multi-Frame Generation. You can see why we’d be mildly enthusiastic about it at $749 but less and less impressed the closer the price creeps to $1,000.

Being close to a 4080 also means that the performance gap between the 5070 Ti and the 5080 is usually pretty small. In most of the games we tested, the 5070 Ti hovers right around 90 percent of the 5080’s performance.

The 5070 Ti is also around 60 percent as fast as an RTX 5090. The performance is a lot lower, but the price-to-performance ratio is a lot higher, possibly reflecting the fact that the 5070 Ti actually has other GPUs it has to compete with (in non-ray-traced games, the Radeon RX 7900 XTX generally keeps pace with the 5070 Ti, though at this late date it is mostly out of stock unless you’re willing to pay way more than you ought to for one).

Compared to the old 4070 Ti, the 5070 Ti can be between 20 and 50 percent faster at 4K, depending on how limited the game is by the 4070 Ti’s narrower memory bus and 12GB bank of RAM. The performance improvement over the 4070 Ti Super is more muted, ranging from as little as 8 percent to as much as 20 percent in our 4K tests. This is better than the RTX 5080 did relative to the RTX 4080 Super, but as a generational leap, it’s still pretty modest—it’s clear why Nvidia wants everyone to look at the Multi-Frame Generation numbers when making comparisons.

Waiting to put theory into practice

Asus’ RTX 5070 Ti, replete with 12-pin power plug. Credit: Andrew Cunningham

Being able to get RTX 4080-level performance for several hundred dollars less just a couple of years after the 4080 launched is kind of exciting, though that excitement is tempered by the still high-ish $749 price tag (again, assuming it’s actually available at or anywhere near that price). That certainly makes it feel more like a next-generation GPU than the RTX 5080 did—and whatever else you can say about it, the 5070 Ti certainly feels like a better buy than the 5080.

The 5070 Ti is a fast and 4K-capable graphics card, fast enough that you should be able to get some good results from all of Blackwell’s new frame-generation trickery if that’s something you want to play with. Its price-to-performance ratio does not thrill me, but if you do the math, it’s still a much better value than the 4070 Ti series was—particularly the original 4070 Ti, with the 12GB allotment of RAM that limited its usefulness and future-proofing at 4K.

Two reasons to hold off on buying a 5070 Ti, if you’re thinking about it: We’re waiting to see how AMD’s 9070 series GPUs shake out, and Nvidia’s 50-series launch so far has been kind of a mess, with low availability and price gouging both on retail sites and in the secondhand market. Pay much more than $749 for a 5070 Ti, and its delicate value proposition fades quickly. We should know more about the AMD cards in a couple of weeks. The supply situation, at least so far, seems like a problem that Nvidia can’t (or won’t) figure out how to solve.

The good

  • For a starting price of $749, you get the approximate performance and power consumption of an RTX 4080, a GPU that cost $1,199 two years ago and $999 one year ago.
  • Good 4K performance and great 1440p performance for those with high-refresh monitors.
  • 16GB of RAM should be reasonably future-proof.
  • Multi-Frame Generation is an interesting performance-boosting tool to have in your toolbox, even if it isn’t a cure-all for low framerates.
  • Nvidia-specific benefits like DLSS support and CUDA.

The bad

  • Not all that much faster than a 4070 Ti Super.
  • $749 looks cheap compared to a $2,000 GPU, but it’s still enough money to buy a high-end game console or an entire 1080p gaming PC.

The ugly

  • Pricing and availability for other 50-series GPUs to date have both been kind of a mess.
  • Will you actually be able to get it for $749? Because it doesn’t make a ton of sense if it costs more than $749.
  • Seriously, it’s been months since I reviewed a GPU that was actually widely available at its advertised price.
  • And it’s not just the RTX 5090 or 5080, it’s low-end stuff like the Intel Arc B580 and B570, too.
  • Is it high demand? Low supply? Scalpers and resellers hanging off the GPU market like the parasites they are? No one can say!
  • It makes these reviews very hard to do.
  • It also makes PC gaming, as a hobby, really difficult to get into if you aren’t into it already!
  • It just makes me mad is all.
  • If you’re reading this months from now and the GPUs actually are in stock at the list price, I hope this was helpful.

What we know about AMD and Nvidia’s imminent midrange GPU launches

The GeForce RTX 5090 and 5080 are both very fast graphics cards—if you can look past the possibility that we may have yet another power-connector-related overheating problem on our hands. But the vast majority of people (including you, discerning and tech-savvy Ars Technica reader) won’t be spending $1,000 or $2,000 (or $2,750 or whatever) on a new graphics card this generation.

No, statistically, you (like most people) will probably end up buying one of the more affordable midrange Nvidia or AMD cards, GPUs that are all slated to begin shipping later this month or early in March.

There has been a spate of announcements on that front this week. Nvidia announced yesterday that the GeForce RTX 5070 Ti, which the company previously introduced at CES, would be available starting on February 20 for $749 and up. The new GPU, like the RTX 5080, looks like a relatively modest upgrade from last year’s RTX 4070 Ti Super. But it ought to at least flirt with affordability for people who are looking to get natively rendered 4K without automatically needing to enable DLSS upscaling to get playable frame rates.

RTX 5070 Ti RTX 4070 Ti Super RTX 5070 RTX 4070 Super
CUDA Cores 8,960 8,448 6,144 7,168
Boost Clock 2,452 MHz 2,610 MHz 2,512 MHz 2,475 MHz
Memory Bus Width 256-bit 256-bit 192-bit 192-bit
Memory Bandwidth 896 GB/s 672 GB/s 672 GB/s 504 GB/s
Memory size 16GB GDDR7 16GB GDDR6X 12GB GDDR7 12GB GDDR6X
TGP 300 W 285 W 250 W 220 W

That said, if the launches of the 5090 and 5080 are anything to go by, it may not be easy to find and buy the RTX 5070 Ti for anything close to the listed retail price; early retail listings are not promising on this front. You’ll also be relying exclusively on Nvidia’s partners to deliver unadorned, relatively minimalist MSRP versions of the cards since Nvidia isn’t making a Founders Edition version.

As for the $549 RTX 5070, Nvidia’s website says it’s launching on March 5. But it’s less exciting than the other 50-series cards because it has fewer CUDA cores than the outgoing RTX 4070 Super, leaving it even more reliant on AI-generated frames to improve performance compared to the last generation.

Handful of users claim new Nvidia GPUs are melting power cables again

The 12VHPWR and 12V-2×6 connectors are both designed to solve a real problem: delivering hundreds of watts of power to high-end GPUs over a single cable rather than trying to fit multiple 8-pin power connectors onto these GPUs. In theory, swapping two to four 8-pin connectors for a single 12V-2×6 or 12VHPWR connector cuts down on the amount of board space OEMs must reserve for these connectors in their designs and the number of cables that users have to snake through the inside of their gaming PCs.
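For a sense of the cable math involved, here’s a rough sketch using the standard ratings of 150 W per 8-pin PCIe connector, 75 W from the motherboard slot, and up to 600 W over a single 12VHPWR/12V-2×6 connector (individual board designs can be more conservative than this):

```python
import math

# How many 8-pin PCIe power connectors a card would need to supply a given
# board power, assuming 150 W per connector plus 75 W from the PCIe slot.
def eight_pin_connectors_needed(board_power_w, slot_w=75, per_connector_w=150):
    return math.ceil(max(board_power_w - slot_w, 0) / per_connector_w)

print(eight_pin_connectors_needed(300))  # 2 connectors for a ~300 W card
print(eight_pin_connectors_needed(575))  # 4 connectors for a 575 W card like the RTX 5090
```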

But while Nvidia, Intel, AMD, Qualcomm, Arm, and other companies are all PCI-SIG members and all had a hand in the design of the new standards, Nvidia is the only GPU company to use the 12VHPWR and 12V-2×6 connectors in most of its GPUs. AMD and Intel have continued to use the 8-pin power connector, and even some of Nvidia’s partners have stuck with 8-pin connectors for lower-end, lower-power cards like the RTX 4060 and 4070 series.

Both of the reported 5090 incidents involved third-party cables, one from custom PC part manufacturer MODDIY and one included with an FSP power supply, rather than the first-party 8-pin adapter that Nvidia supplies with GeForce GPUs. It’s much too early to say whether these cables (or Nvidia, or the design of the connector, or the affected users) caused the problem or whether this was just a coincidence.

We’ve contacted Nvidia to see whether it’s aware of and investigating the reports and will update this piece if we receive a response.

AMD promises “mainstream” 4K gaming with next-gen GPUs as current-gen GPU sales tank

AMD announced its fourth-quarter earnings yesterday, and the numbers were mostly rosy: $7.7 billion in revenue and a 51 percent profit margin, compared to $6.2 billion and 47 percent a year ago. The biggest winner was the data center division, which made $3.9 billion thanks to Epyc server processors and Instinct AI accelerators, and Ryzen CPUs are also selling well, helping the company’s client segment earn $2.3 billion.

But if you were looking for a dark spot, you’d find it in the company’s gaming division, which earned a relatively small $563 million, down 59 percent from a year ago. AMD CEO Lisa Su blamed the decline on lower sales of both dedicated graphics cards and the company’s “semi-custom” chips (that is, the ones created specifically for game consoles like the Xbox and PlayStation).

Other data sources suggest that the response from GPU buyers to AMD’s Radeon RX 7000 series, launched between late 2022 and early 2024, has been lackluster. The Steam Hardware Survey, a noisy but broadly useful barometer for GPU market share, shows no RX 7000-series models in the top 50; only two of the GPUs (the 7900 XTX and 7700 XT) are used in enough gaming PCs to be mentioned on the list at all, with the others all getting lumped into the “other” category. Jon Peddie Research recently estimated that AMD was selling roughly one dedicated GPU for every seven or eight sold by Nvidia.

But hope springs eternal. Su confirmed on AMD’s earnings call that the new Radeon RX 9000-series cards, announced at CES last month, would be launching in early March. The Radeon RX 9070 and 9070 XT are both aimed toward the middle of the graphics card market, and Su said that both would bring “high-quality gaming to mainstream players.”

An opportunity, maybe

“Mainstream” could mean a lot of things. AMD’s CES slide deck positioned the 9070 series alongside Nvidia’s RTX 4070 Ti ($799) and 4070 Super ($599) and its own RX 7900 XT, 7900 GRE, and 7800 XT (between $500 and $730 as of this writing), a pretty wide price spread, all of it still more expensive than an entire high-end console. The GPUs could still rely heavily on upscaling algorithms like AMD’s FidelityFX Super Resolution (FSR) to hit playable frame rates at those resolutions, rather than targeting native 4K.

Nvidia starts to wind down support for old GPUs, including the long-lived GTX 1060

Nvidia is launching the first volley of RTX 50-series GPUs based on its new Blackwell architecture, starting with the RTX 5090 and working downward from there. The company also appears to be winding down support for a few of its older GPU architectures, according to these CUDA release notes spotted by Tom’s Hardware.

The release notes say that CUDA support for the Maxwell, Pascal, and Volta GPU architectures “is considered feature-complete and will be frozen in an upcoming release.” While all of these architectures—which collectively cover GeForce GPUs from the old GTX 700 series all the way up through 2016’s GTX 1000 series, plus a couple of Quadro and Titan workstation cards—are still currently supported by Nvidia’s December Game Ready driver package, the end of new CUDA feature support suggests that these GPUs will be dropped from those driver packages before long.

It’s common for Nvidia and AMD to drop support for another batch of architectures all at once every few years; Nvidia last dropped support for older cards in 2021, and AMD dropped support for several prominent GPUs in 2023. Both companies maintain a separate driver branch for some of their older cards, but releases usually only happen every few months, and they focus on security updates rather than new features or performance optimizations for new games.

Rumors say next-gen RTX 50 GPUs will come with big jumps in power requirements

Nvidia is reportedly gearing up to launch the first few cards in its RTX 50-series at CES next week, including an RTX 5090, RTX 5080, RTX 5070 Ti, and RTX 5070. The 5090 will be of particular interest to performance-obsessed, money-is-no-object PC gaming fanatics since it’s the first new GPU in over two years that can beat the performance of 2022’s RTX 4090.

But boosted performance and slower advancements in chip manufacturing technology mean that the 5090’s maximum power draw will far outstrip the 4090’s, according to leakers. VideoCardz reports that the 5090’s thermal design power (TDP) will be set at 575 W, up from 450 W for the already power-hungry RTX 4090. The RTX 5080’s TDP is also increasing to 360 W, up from 320 W for the RTX 4080 Super.

That also puts the RTX 5090 close to the maximum power draw available over a single 12VHPWR connector, which is capable of delivering up to 600 W of power (though once you include the 75 W available via the PCI Express slot on your motherboard, the actual maximum possible power draw for a GPU with a single 12VHPWR connector is a slightly higher 675 W).

Higher peak power consumption doesn’t necessarily mean that these cards will always draw more power during actual gaming than their 40-series counterparts. And their performance could be good enough that they could still be very efficient cards in terms of performance per watt.

But if you’re considering an upgrade to an RTX 5090 and these power specs are accurate, you may need to consider an upgraded power supply along with your new graphics card. Nvidia recommends at least an 850 W power supply for the RTX 4090 to accommodate what the GPU needs while leaving enough power left over for the rest of the system. An additional 125 W bump suggests that Nvidia will recommend a 1,000 W power supply as the minimum for the 5090.

We’ll probably know more about Nvidia’s next-gen cards after its CES keynote, currently scheduled for 9:30 pm Eastern/6:30 pm Pacific on Monday, January 6.

Nvidia is ditching dedicated G-Sync modules to push back against FreeSync’s ubiquity

sync or swim —

But G-Sync will still require specific G-Sync-capable MediaTek scaler chips.

Nvidia is ditching dedicated G-Sync modules to push back against FreeSync’s ubiquity

Nvidia

Back in 2013, Nvidia introduced a new technology called G-Sync to eliminate screen tearing and stuttering effects and reduce input lag when playing PC games. The company accomplished this by tying your display’s refresh rate to the actual frame rate of the game you were playing, and similar variable refresh-rate (VRR) technology has become a mainstay even in budget monitors and TVs today.
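As a simplified illustration of the timing mismatch that VRR eliminates (the numbers here are just an example):

```python
# On a fixed 60 Hz display, a game running at 50 fps finishes a frame every
# 20 ms while the panel refreshes every ~16.7 ms, so refreshes regularly land
# in the middle of a frame (tearing) or repeat an old one (stutter).
fixed_refresh_hz = 60
game_fps = 50

print(1000 / fixed_refresh_hz)  # ~16.7 ms between panel refreshes
print(1000 / game_fps)          # 20.0 ms between finished frames

# With VRR, the panel waits for each frame and refreshes when it arrives,
# so the effective refresh interval simply tracks the frame time.
print(1000 / game_fps)          # 20.0 ms per refresh, matched to the game
```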

The issue for Nvidia is that G-Sync isn’t what has been driving most of that adoption. G-Sync has always required extra dedicated hardware inside of displays, increasing the costs for both users and monitor manufacturers. The VRR technology in most low-end to mid-range screens these days is usually some version of the royalty-free AMD FreeSync or the similar VESA Adaptive-Sync standard, both of which provide G-Sync’s most important features without requiring extra hardware. Nvidia more or less acknowledged that the free-to-use, cheap-to-implement VRR technologies had won in 2019 when it announced its “G-Sync Compatible” certification tier for FreeSync monitors. The list of G-Sync Compatible screens now vastly outnumbers the list of G-Sync and G-Sync Ultimate screens.

Today, Nvidia is announcing a change that’s meant to keep G-Sync alive as its own separate technology while eliminating the requirement for expensive additional hardware. Nvidia says it’s partnering with chipmaker MediaTek to build G-Sync capabilities directly into scaler chips that MediaTek is creating for upcoming monitors. G-Sync modules ordinarily replace these scaler chips, but they’re entirely separate boards with expensive FPGA chips and dedicated RAM.

These new MediaTek scalers will support all the same features that current dedicated G-Sync modules do. Nvidia says that three G-Sync monitors with MediaTek scaler chips inside will launch “later this year”: the Asus ROG Swift PG27AQNR, the Acer Predator XB273U F5, and the AOC AGON PRO AG276QSG2. These are all 27-inch 1440p displays with maximum refresh rates of 360 Hz.

As of this writing, none of these companies has announced pricing for these displays—the current Asus PG27AQN has a traditional G-Sync module and a 360 Hz refresh rate and currently goes for around $800, so we’d hope to see the new versions come in significantly cheaper, making good on Nvidia’s claim that the MediaTek chips will reduce costs (and that’s assuming monitor makers are willing to pass those savings on to consumers).

For most people most of the time, there won’t be an appreciable difference between a “true” G-Sync monitor and one that uses FreeSync or Adaptive-Sync, but there are still a few fringe benefits. G-Sync monitors support variable refresh rates from 1 Hz all the way up to the monitor’s maximum, whereas FreeSync and Adaptive-Sync stop working on most displays when the frame rate drops below 40 or 48 frames per second. All G-Sync monitors also support “variable overdrive” technology to help eliminate display ghosting, and the new MediaTek-powered displays will support the recent “G-Sync Pulsar” feature to reduce blur.

Review: AMD Radeon RX 7900 GRE GPU doesn’t quite earn its “7900” label

rabbit season —

New $549 graphics card is the more logical successor to the RX 6800 XT.

ASRock’s take on AMD’s Radeon RX 7900 GRE. Credit: Andrew Cunningham

In July 2023, AMD released a new GPU called the “Radeon RX 7900 GRE” in China. GRE stands for “Golden Rabbit Edition,” a reference to the Chinese zodiac, and while the card was available outside of China in a handful of pre-built OEM systems, AMD didn’t make it widely available at retail.

That changes today—AMD is launching the RX 7900 GRE at US retail for a suggested starting price of $549. This throws it right into the middle of the busy upper-mid-range graphics card market, where it will compete with Nvidia’s $549 RTX 4070 and the $599 RTX 4070 Super, as well as AMD’s own $500 Radeon RX 7800 XT.

We’ve run our typical set of GPU tests on the 7900 GRE to see how it stacks up to the cards AMD and Nvidia are already offering. Is it worth buying a new card relatively late in this GPU generation, when rumors point to new next-gen GPUs from Nvidia, AMD, and Intel before the end of the year? Can the “Golden Rabbit Edition” still offer a good value, even though it’s currently the year of the dragon?

Meet the 7900 GRE

RX 7900 XT RX 7900 GRE RX 7800 XT RX 6800 XT RX 6800 RX 7700 XT RX 6700 XT RX 6750 XT
Compute units (Stream processors) 84 (5,376) 80 (5,120) 60 (3,840) 72 (4,608) 60 (3,840) 54 (3,456) 40 (2,560) 40 (2,560)
Boost Clock 2,400 MHz 2,245 MHz 2,430 MHz 2,250 MHz 2,105 MHz 2,544 MHz 2,581 MHz 2,600 MHz
Memory Bus Width 320-bit 256-bit 256-bit 256-bit 256-bit 192-bit 192-bit 192-bit
Memory Clock 2,500 MHz 2,250 MHz 2,438 MHz 2,000 MHz 2,000 MHz 2,250 MHz 2,000 MHz 2,250 MHz
Memory size 20GB GDDR6 16GB GDDR6 16GB GDDR6 16GB GDDR6 16GB GDDR6 12GB GDDR6 12GB GDDR6 12GB GDDR6
Total board power (TBP) 315 W 260 W 263 W 300 W 250 W 245 W 230 W 250 W

The 7900 GRE slots into AMD’s existing lineup above the RX 7800 XT (currently $500-ish) and below the RX 7900 XT (around $750). Technologically, we’re looking at the same Navi 31 GPU silicon as the 7900 XT and XTX, but with just 80 of the compute units enabled, down from 84 and 96, respectively. The normal benefits of the RDNA3 graphics architecture apply, including hardware-accelerated AV1 video encoding and DisplayPort 2.1 support.

The 7900 GRE also includes four active memory controller die (MCD) chiplets, giving it a narrower 256-bit memory bus and 16GB of memory instead of 20GB—still plenty for modern games, though possibly not quite as future-proof as the 7900 XT. The card uses significantly less power than the 7900 XT and about the same amount as the 7800 XT. That feels a bit weird, intuitively, since slower cards almost always consume less power than faster ones. But it does make some sense; pushing the 7800 XT’s smaller Navi 32 GPU to get higher clock speeds out of it is probably making it run a bit less efficiently than a larger Navi 31 GPU die that isn’t being pushed as hard.

When we reviewed the 7800 XT last year, we noted that its hardware configuration and performance made it seem more like a successor to the (non-XT) Radeon RX 6800, while it just barely managed to match or beat the 6800 XT in our tests. Same deal with the 7900 GRE, which is a more logical successor to the 6800 XT. Bear that in mind when doing generation-over-generation comparisons.
