The GeForce RTX 5090 and 5080 are both very fast graphics cards—if you can look past the possibility that we may have yet another power-connector-related overheating problem on our hands. But the vast majority of people (including you, discerning and tech-savvy Ars Technica reader) won’t be spending $1,000 or $2,000 (or $2,750 or whatever) on a new graphics card this generation.
No, statistically, you (like most people) will probably end up buying one of the more affordable midrange Nvidia or AMD cards, GPUs that are all slated to begin shipping later this month or early in March.
There has been a spate of announcements on that front this week. Nvidia announced yesterday that the GeForce RTX 5070 Ti, which the company previously introduced at CES, would be available starting on February 20 for $749 and up. The new GPU, like the RTX 5080, looks like a relatively modest upgrade from last year’s RTX 4070 Ti Super. But it ought to at least flirt with affordability for people who are looking to get natively rendered 4K without automatically needing to enable DLSS upscaling to get playable frame rates.
| | RTX 5070 Ti | RTX 4070 Ti Super | RTX 5070 | RTX 4070 Super |
|---|---|---|---|---|
| CUDA cores | 8,960 | 8,448 | 6,144 | 7,168 |
| Boost clock | 2,452 MHz | 2,610 MHz | 2,512 MHz | 2,475 MHz |
| Memory bus width | 256-bit | 256-bit | 192-bit | 192-bit |
| Memory bandwidth | 896 GB/s | 672 GB/s | 672 GB/s | 504 GB/s |
| Memory size | 16GB GDDR7 | 16GB GDDR6X | 12GB GDDR7 | 12GB GDDR6X |
| TGP | 300 W | 285 W | 250 W | 220 W |
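The bandwidth figures in the table follow directly from bus width and memory speed. Here is a quick sketch of the arithmetic, assuming the commonly cited effective data rates of 28 Gbps for the GDDR7 cards and 21 Gbps for the GDDR6X ones (the rates themselves aren't listed in the table):

```python
# Memory bandwidth (GB/s) = (bus width in bits / 8) * effective data rate (Gbps).
# The per-pin data rates below are widely reported figures, not from the table.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 28))  # 896.0 -> RTX 5070 Ti (256-bit GDDR7)
print(bandwidth_gbs(256, 21))  # 672.0 -> RTX 4070 Ti Super (256-bit GDDR6X)
print(bandwidth_gbs(192, 28))  # 672.0 -> RTX 5070 (192-bit GDDR7)
print(bandwidth_gbs(192, 21))  # 504.0 -> RTX 4070 Super (192-bit GDDR6X)
```

Note how the 5070's faster GDDR7 lets it match the 4070 Ti Super's bandwidth on a narrower 192-bit bus.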
That said, if the launches of the 5090 and 5080 are anything to go by, it may not be easy to find and buy the RTX 5070 Ti for anything close to the listed retail price; early retail listings are not promising on this front. You’ll also be relying exclusively on Nvidia’s partners to deliver unadorned, relatively minimalist MSRP versions of the cards since Nvidia isn’t making a Founders Edition version.
As for the $549 RTX 5070, Nvidia’s website says it’s launching on March 5. But it’s less exciting than the other 50-series cards because it has fewer CUDA cores than the outgoing RTX 4070 Super, leaving it even more reliant on AI-generated frames to improve performance compared to the last generation.
Cambridge-headquartered Arm has more than doubled in value to $160 billion since it listed on Nasdaq in 2023, carried higher by explosive investor interest in AI. Arm’s partnerships with Nvidia and Amazon have driven its rapid growth in the data centers that power AI assistants from OpenAI, Meta, and Anthropic.
Meta is the latest big tech company to turn to Arm for server chips, displacing those traditionally provided by Intel and AMD.
During last month’s earnings call, Meta’s finance chief Susan Li said it would be “extending our custom silicon efforts to [AI] training workloads” to drive greater efficiency and performance by tuning its chips to its particular computing needs.
Meanwhile, an Arm-produced chip is also likely to eventually play a role in Sir Jony Ive’s secretive plans to build a new kind of AI-powered personal device, which is a collaboration between the iPhone designer’s firm LoveFrom, OpenAI’s Sam Altman, and SoftBank.
Arm’s designs have been used in more than 300 billion chips, including almost all of the world’s smartphones. Its power-efficient designs have made its CPUs, the general-purpose workhorse that sits at the heart of any computer, an increasingly attractive alternative to Intel’s chips in PCs and servers at a time when AI is making data centers much more energy-intensive.
Arm, which started out in a converted turkey barn in Cambridgeshire 35 years ago, became ubiquitous in the mobile market by licensing its designs to Apple for its iPhone chips, as well as Android suppliers such as Qualcomm and MediaTek. Maintaining its unique position in the center of the fiercely competitive mobile market has required a careful balancing act for Arm.
But SoftBank founder Masayoshi Son, whose group owns Arm, has long pushed for the company to make more money from its intellectual property. Under Rene Haas, who became chief executive in 2022, Arm’s business model began to evolve, with a focus on driving higher royalties from customers as the company designs more of the building blocks needed to make a chip.
Going a step further by building and selling its own complete chip is a bold move by Haas that risks putting it on a collision course with customers such as Qualcomm, which is already locked in a legal battle with Arm over licensing terms, and Nvidia, the world’s most valuable chipmaker.
AMD announced its fourth-quarter earnings yesterday, and the numbers were mostly rosy: $7.7 billion in revenue and a 51 percent profit margin, compared to $6.2 billion and 47 percent a year ago. The biggest winner was the data center division, which made $3.9 billion thanks to Epyc server processors and Instinct AI accelerators, and Ryzen CPUs are also selling well, helping the company’s client segment earn $2.3 billion.
But if you were looking for a dark spot, you’d find it in the company’s gaming division, which earned a relatively small $563 million, down 59 percent from a year ago. AMD CEO Lisa Su blamed the drop on declining sales of both dedicated graphics cards and the company’s “semi-custom” chips (that is, the ones created specifically for game consoles like the Xbox and PlayStation).
Other data sources suggest that the response from GPU buyers to AMD’s Radeon RX 7000 series, launched between late 2022 and early 2024, has been lackluster. The Steam Hardware Survey, a noisy but broadly useful barometer for GPU market share, shows no RX 7000-series models in the top 50; only two of the GPUs (the 7900 XTX and 7700 XT) are used in enough gaming PCs to be mentioned on the list at all, with the others all getting lumped into the “other” category. Jon Peddie Research recently estimated that AMD was selling roughly one dedicated GPU for every seven or eight sold by Nvidia.
But hope springs eternal. Su confirmed on AMD’s earnings call that the new Radeon RX 9000-series cards, announced at CES last month, would be launching in early March. The Radeon RX 9070 and 9070 XT are both aimed toward the middle of the graphics card market, and Su said that both would bring “high-quality gaming to mainstream players.”
An opportunity, maybe
“Mainstream” could mean a lot of things. AMD’s CES slide deck positioned the 9070 series alongside Nvidia’s RTX 4070 Ti ($799) and 4070 Super ($599) and its own RX 7900 XT, 7900 GRE, and 7800 XT (between $500 and $730 as of this writing), a pretty wide price spread that is still more expensive than an entire high-end console. The GPUs could still rely heavily on upscaling algorithms like AMD’s FidelityFX Super Resolution (FSR) to hit playable frame rates at higher resolutions, rather than targeting native 4K.
AMD’s CES announcements include a tease about next-gen graphics cards, a new flagship desktop CPU, and a modest refresh of its processors for handheld gaming PCs. But the company’s largest announcement, by volume, is about laptop processors.
Today the company is expanding the Ryzen AI 300 lineup with a batch of updated high-end chips with up to 16 CPU cores and some midrange options for cheaper Copilot+ PCs. AMD has repackaged some of its high-end desktop chips for gaming laptops, including the first Ryzen laptop CPU with 3D V-Cache enabled. And there’s also a new-in-name-only Ryzen 200 series, another repackaging of familiar silicon to address lower-budget laptops.
Ryzen AI 300 is back, along with high-end Max and Max+ versions
Ryzen AI is back, with Max and Max+ versions that include huge integrated GPUs. Credit: AMD
We came away largely impressed by the initial Ryzen AI 300 processors in August 2024, and new processors being announced today expand the lineup upward and downward.
AMD is announcing the Ryzen AI 7 350 and Ryzen AI 5 340 today, along with identically specced Pro versions of the same chips with a handful of extra features for large businesses and other organizations.
The 350 includes eight CPU cores split evenly between large Zen 5 cores and smaller, slower but more efficient Zen 5C cores, plus a Radeon 860M with eight integrated graphics cores (down from a peak of 16 for the Ryzen AI 9). The 340 has six CPU cores, again split evenly between Zen 5 and Zen 5C, and a Radeon 840M with four graphics cores. But both have the same 50 TOPS NPUs as the higher-end Ryzen AI chips, qualifying both for the Copilot+ label.
For consumers, AMD is launching three high-end chips across the new “Ryzen AI Max+” and “Ryzen AI Max” families. Compared to the existing Strix Point-based Ryzen AI processors, Ryzen AI Max+ and Max include more CPU cores, and all of their cores are higher-performing Zen 5 cores, with no Zen 5C cores mixed in. The integrated graphics also get significantly more powerful, with as many as 40 cores built in—these chips seem to be destined for larger thin-and-light systems that could benefit from more power but don’t want to make room for a dedicated GPU.
AMD’s batch of CES announcements this year includes just two new products for desktop PC users: the new Ryzen 9 9950X3D and 9900X3D. Both will be available at some point in the first quarter of 2025.
Both processors include additional CPU cores compared to the 9800X3D that launched in November. The 9900X3D includes 12 Zen 5 CPU cores with a maximum clock speed of 5.5 GHz, and the 9950X3D includes 16 cores with a maximum clock speed of 5.7 GHz. Both include 64MB of extra L3 cache compared to the regular 9900X and 9950X, for a total cache of 140MB and 144MB, respectively; games in particular tend to benefit disproportionately from this extra cache memory.
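The cache totals work out as follows, assuming the standard Ryzen 9000 figures of 1MB of L2 per core and 64MB of base L3 (figures AMD publishes for the non-X3D parts but which aren't stated in this article):

```python
# Total cache = L2 (1MB per Zen 5 core) + base L3 (64MB) + stacked 3D V-Cache (64MB).
# The per-core L2 and base L3 sizes are assumed from AMD's standard Ryzen 9000 specs.
def total_cache_mb(cores: int, base_l3_mb: int = 64, vcache_mb: int = 64) -> int:
    l2_mb = cores * 1  # 1MB of L2 per core
    return l2_mb + base_l3_mb + vcache_mb

print(total_cache_mb(16))  # 144 -> Ryzen 9 9950X3D
print(total_cache_mb(12))  # 140 -> Ryzen 9 9900X3D
```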
But the 9950X3D and 9900X3D aren’t being targeted at people who build PCs primarily to game—the company says their game performance is usually within 1 percent of the 9800X3D. These processors are for people who want peak game performance when they’re playing something but also need lots of CPU cores for chewing on CPU-heavy workloads during the workday.
AMD estimates that the Ryzen 9 9950X3D is about 8 percent faster than the 7950X3D when playing games and about 13 percent faster in professional content creation apps. These modest gains are more or less in line with the small performance bump we’ve seen in other Ryzen 9000-series desktop CPUs.
Nearly two years ago, AMD announced its first Ryzen Z1 processors. These were essentially the same silicon that AMD was putting in high-end thin-and-light laptops but tuned specifically for handheld gaming PCs like the Steam Deck and Asus ROG Ally X. As part of its CES announcements today, AMD is refreshing that lineup with three processors, all slated for an undisclosed date in the first quarter of 2025.
Although they’re all part of the “Ryzen Z2” family, each of these three chips is actually quite different under the hood, and some of them are newer than others.
The Ryzen Z2 Extreme is what you’d expect from a refresh: a straightforward upgrade to both the CPU and GPU architectures of the Ryzen Z1 Extreme. Based on the same “Strix Point” architecture as the Ryzen AI 300 laptop processors, the Z2 Extreme includes eight CPU cores (three high-performance Zen 5 cores, five smaller and efficiency-optimized Zen 5C cores) and an unnamed RDNA 3.5 GPU with 16 of AMD’s compute units (CUs). These should both provide small bumps to CPU and GPU performance relative to the Ryzen Z1 Extreme, which used eight Zen 4 CPU cores and 12 RDNA 3 GPU cores.
AMD’s full Ryzen Z2 lineup, which obfuscates the fact that these three chips are all using different CPU and GPU architectures. Credit: AMD
The Ryzen Z2, on the other hand, appears to be exactly the same chip as the Ryzen Z1 Extreme, but with a different name. Like the Z1 Extreme, it has eight Zen 4 cores with a 5.1 GHz maximum clock speed and an RDNA 3 GPU with 12 cores.
Nvidia is widely expected to announce specs, pricing, and availability information for the first few cards in the new RTX 50 series at its CES keynote later today. AMD isn’t ready to get as specific about its next-generation graphics lineup yet, but the company shared a few morsels today about its next-generation RDNA 4 graphics architecture and its 9000-series graphics cards.
AMD mentioned that RDNA 4 cards were on track to launch in early 2025 during a recent earnings call, acknowledging that shipments of current-generation RX 7000-series cards were already slowing down. CEO Lisa Su said then that the architecture would include “significantly higher ray-tracing performance” as well as “new AI capabilities.”
AMD’s RDNA 4 launch will begin with the 9070 XT and 9070, which are both being positioned as upper-midrange GPUs like the RTX 4070 series. Credit: AMD
The preview the company is offering today provides few details beyond those surface-level proclamations. The compute units will be “optimized,” AI compute will be “supercharged,” ray-tracing will be “improved,” and media encoding quality will be “better,” but AMD isn’t providing hard numbers for anything at this point. The RDNA 4 launch will begin with the Radeon RX 9070 XT and 9070 at some point in Q1 of 2025, and AMD will provide more information “later in the quarter.”
The GPUs will be built on a 4 nm process, presumably from TSMC, an upgrade from the 5 nm process used for the 7000-series GPUs and the 6 nm process used for the separate memory controller chiplets (AMD hasn’t said whether RDNA 4 GPUs are using chiplets; the 7000 series used them for high-end GPUs but not lower-end ones).
FSR 4 will be AMD’s first ML-powered upscaling algorithm, similar to Nvidia’s DLSS, Intel’s XeSS (on Intel GPUs), and Apple’s MetalFX. This generally results in better image quality but more restrictive hardware requirements. Credit: AMD
We do know that AMD’s next-generation upscaling algorithm, FidelityFX Super Resolution 4, has been “developed for AMD RDNA 4,” and it will be the first version of FSR to use machine learning-powered upscaling. Nvidia’s DLSS and Intel’s XeSS (when running on Intel GPUs) also use ML-powered upscaling, which generally leads to better results but also has stricter hardware requirements than older versions of FSR. AMD isn’t saying whether FSR 4 will work on any older Radeon cards.
Attack bypasses AMD protection promising security, even when a server is compromised.
One of the oldest maxims in hacking is that once an attacker has physical access to a device, it’s game over for its security. The basis is sound. It doesn’t matter how locked down a phone, computer, or other machine is; if someone intent on hacking it gains the ability to physically manipulate it, the chances of success are all but guaranteed.
In the age of cloud computing, this widely accepted principle is no longer universally true. Some of the world’s most sensitive information—health records, financial account information, sealed legal documents, and the like—now often resides on servers that receive day-to-day maintenance from unknown administrators working in cloud centers thousands of miles from the companies responsible for safeguarding it.
Bad (RAM) to the bone
In response, chipmakers have begun baking protections into their silicon to provide assurances that even if a server has been physically tampered with or infected with malware, sensitive data funneled through virtual machines can’t be accessed without an encryption key that’s known only to the VM administrator. Under this scenario, admins inside the cloud provider, law enforcement agencies with a court warrant, and hackers who manage to compromise the server are out of luck.
On Tuesday, an international team of researchers unveiled BadRAM, a proof-of-concept attack that completely undermines security assurances that chipmaker AMD makes to users of one of its most expensive and well-fortified microprocessor product lines. Starting with the AMD Epyc 7003 processor, a feature known as SEV-SNP—short for Secure Encrypted Virtualization and Secure Nested Paging—has provided the cryptographic means for certifying that a VM hasn’t been compromised by any sort of backdoor installed by someone with access to the physical machine running it.
If a VM has been backdoored, the cryptographic attestation will fail and immediately alert the VM admin of the compromise. Or at least that’s how SEV-SNP is designed to work. BadRAM is an attack that a server admin can carry out in minutes using either about $10 of hardware or, in some cases, software only, to cause DDR4 or DDR5 memory modules to misreport their memory capacity during bootup. From then on, SEV-SNP can be made to permanently present a valid attestation hash even when the VM has been badly compromised.
“BadRAM completely undermines trust in AMD’s latest Secure Encrypted Virtualization (SEV-SNP) technology, which is widely deployed by major cloud providers, including Amazon AWS, Google Cloud, and Microsoft Azure,” members of the research team wrote in an email. “BadRAM for the first time studies the security risks of bad RAM—rogue memory modules that deliberately provide false information to the processor during startup. We show how BadRAM attackers can fake critical remote attestation reports and insert undetectable backdoors into _any_ SEV-protected VM.”
Compromising the AMD SEV ecosystem
On a website providing more information about the attack, the researchers wrote:
Modern computers increasingly use encryption to protect sensitive data in DRAM, especially in shared cloud environments with pervasive data breaches and insider threats. AMD’s Secure Encrypted Virtualization (SEV) is a cutting-edge technology that protects privacy and trust in cloud computing by encrypting a virtual machine’s (VM’s) memory and isolating it from advanced attackers, even those compromising critical infrastructure like the virtual machine manager or firmware.
We found that tampering with the embedded SPD chip on commercial DRAM modules allows attackers to bypass SEV protections—including AMD’s latest SEV-SNP version. For less than $10 in off-the-shelf equipment, we can trick the processor into allowing access to encrypted memory. We build on this BadRAM attack primitive to completely compromise the AMD SEV ecosystem, faking remote attestation reports and inserting backdoors into any SEV-protected VM.
In response to a vulnerability report filed by the researchers, AMD has already shipped patches to affected customers, a company spokesperson said. The researchers say there are no performance penalties, other than the possibility of additional time required during bootup. The vulnerability is tracked industry-wide as CVE-2024-21944 and by the chipmaker as AMD-SB-3015.
A stroll down memory lane
Modern dynamic random access memory for servers typically comes in the form of DIMMs, short for Dual In-Line Memory Modules. The basic building blocks of these rectangular sticks are capacitors, which, when charged, represent a binary 1 and, when discharged, represent a 0. The capacitors are organized into cells, which are organized into arrays of rows and columns, which are further arranged into ranks and banks. The more capacitors that are stuffed into a DIMM, the more capacity it has to store data. Servers usually have multiple DIMMs that are organized into channels that can be processed in parallel.
For a server to store or access a particular piece of data, it first must locate where the bits representing it are stored in this vast array of cells. Locations are tracked through addresses that map the channel, rank, bank, row, and column. For performance reasons, the task of translating these physical addresses to DRAM address bits—a job assigned to the memory controller—isn’t a one-to-one mapping. Rather, consecutive addresses are spread across different channels, ranks, and banks.
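As an illustration, a memory controller might carve a physical address into DRAM coordinates along these lines. The field widths here are invented for the example; real controllers use vendor-specific, often XOR-based interleaving functions:

```python
# Hypothetical mapping of a physical address into DRAM coordinates.
# Field widths are invented for illustration; real memory controllers
# use vendor-specific, often XOR-based interleaving schemes.
FIELDS = [           # (name, bits), from least-significant bits upward
    ("column", 10),
    ("channel", 2),  # low-order bits so consecutive blocks hit different channels
    ("bank", 4),
    ("rank", 1),
    ("row", 16),
]

def decode(addr: int) -> dict:
    coords = {}
    for name, bits in FIELDS:
        coords[name] = addr & ((1 << bits) - 1)  # peel off this field's bits
        addr >>= bits
    return coords

# Consecutive column-sized strides land on different channels, enabling parallelism:
print(decode(0x0400)["channel"])  # 1
print(decode(0x0800)["channel"])  # 2
```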
Before the server can map these locations, it must first know how many DIMMs are connected and the total capacity of memory they provide. This information is provided each time the server boots, when the BIOS queries the SPD—short for Serial Presence Detect—chip found on the surface of the DIMM. This chip is responsible for providing the BIOS basic information about available memory. BadRAM causes the SPD chip to report that its capacity is twice what it actually is. It does this by adding an extra addressing bit.
To do this, a server admin need only briefly connect a specially programmed Raspberry Pi to the SPD chip.
The researchers’ Raspberry Pi connected to the SPD chip of a DIMM. Credit: De Meulemeester et al.
Hacking by numbers, 1, 2, 3
In some cases, with certain DIMM models that don’t adequately lock down the chip, the modification can likely be done through software. In either case, the modification need only occur once. From then on, the SPD chip will falsify the memory capacity available.
Next, the server admin configures the operating system to ignore the newly created “ghost memory,” meaning the top half of the capacity reported by the compromised SPD chip, but continue to map to the lower half of the real memory. On Linux, this configuration can be done with the `memmap` kernel command-line parameter. The researchers’ paper, titled BadRAM: Practical Memory Aliasing Attacks on Trusted Execution Environments, provides many more details about the attack.
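The reserved region can be computed from the real module size. As a sketch, the kernel's `memmap=nn[KMG]$ss[KMG]` form marks `nn` bytes starting at address `ss` as reserved; the base address used here is illustrative, since the real value depends on the platform's physical memory map:

```python
# Build a Linux `memmap` kernel parameter that reserves the "ghost" top half
# of the doubled capacity a tampered SPD reports. The base address here is
# illustrative; the real one depends on the platform's physical memory map.
def ghost_memmap(real_size_gib: int) -> str:
    reported = real_size_gib * 2           # tampered SPD claims twice the capacity
    ghost_size = reported - real_size_gib  # top half is nonexistent "ghost" memory
    ghost_start = real_size_gib            # ghost half begins where real memory ends
    # memmap=nn[KMG]$ss[KMG] marks nn bytes at ss as reserved.
    # (In a GRUB config, the `$` must be escaped as `\$`.)
    return f"memmap={ghost_size}G${ghost_start}G"

print(ghost_memmap(16))  # memmap=16G$16G
```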
Next, a script developed as part of BadRAM allows the attacker to quickly find the memory locations of ghost memory bits. These aliases give the attacker access to memory regions that SEV-SNP is supposed to make inaccessible. This allows the attacker to read and write to these protected memory regions.
Access to this normally fortified region of memory allows the attacker to copy the cryptographic hash SEV-SNP creates to attest to the integrity of the VM. The access also permits the attacker to boot an SEV-compliant VM that has been backdoored. Normally, this malicious VM would trigger a warning in the form of a failed attestation hash. BadRAM allows the attacker to replace this failure hash with the success hash collected earlier.
The primary steps involved in BadRAM attacks are:
1. Compromise the memory module so that it lies about its size, tricking the CPU into accessing nonexistent "ghost" addresses that have been silently mapped to existing memory regions.
2. Find aliases: pairs of addresses that map to the same DRAM location.
3. Bypass CPU access control: the aliases allow the attacker to bypass memory protections that are supposed to prevent the reading of and writing to regions storing sensitive data.
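Conceptually, the alias search and access bypass can be simulated in a few lines. This is a toy model, with a dictionary standing in for DRAM cells; the real attack probes physical memory on a live system:

```python
# Toy model of BadRAM aliasing: the DIMM decodes fewer address bits than the
# CPU sends, so a CPU address and its "ghost" twin hit the same DRAM cell.
GHOST_BIT = 1 << 4       # DIMM really decodes only 4 address bits
dram = {}                # dict standing in for physical DRAM cells

def dimm_write(cpu_addr, value):
    dram[cpu_addr & (GHOST_BIT - 1)] = value   # DIMM silently drops the ghost bit

def dimm_read(cpu_addr):
    return dram.get(cpu_addr & (GHOST_BIT - 1))

# A "protected" value lives at an address with the ghost bit clear...
protected = 0b01101
dimm_write(protected, "secret")

# ...but the same cell is reachable through the unprotected ghost alias.
alias = protected | GHOST_BIT
print(dimm_read(alias))  # secret
```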
Beware of the ghost bit
For those looking for more technical details, Jesse De Meulemeester, who along with Luca Wilke was lead co-author of the paper, provided the following, which more casual readers can skip:
In our attack, there are two addresses that go to the same DRAM location; one is the original address, the other one is what we call the alias.
When we modify the SPD, we double its size. At a low level, this means all memory addresses now appear to have one extra bit. This extra bit is what we call the “ghost” bit: it is the address bit that is used by the CPU but is not used (and thus ignored) by the DIMM. The addresses for which this “ghost” bit is 0 are the original addresses, and the addresses for which this bit is 1 are the “ghost” memory.
This explains how we can access protected data like the launch digest. The launch digest is stored at an address with the ghost bit set to 0, and this address is protected; any attempt to access it is blocked by the CPU. However, if we try to access the same address with the ghost bit set to 1, the CPU treats it as a completely new address and allows access. On the DIMM side, the ghost bit is ignored, so both addresses (with ghost bit 0 or 1) point to the same physical memory location.
A small example to illustrate this:
Original SPD (4-bit addresses):
CPU address 1101 -> DIMM address 1101

Modified SPD (reports 5 bits even though it only has 4):
CPU address 01101 -> DIMM address 1101
CPU address 11101 -> DIMM address 1101
In this case 01101 is the protected address, 11101 is the alias. Even though to the CPU they seem like two different addresses, they go to the same DRAM location.
As noted earlier, some DIMM models don’t lock down the SPD chip, a failure that likely makes software-only modifications possible. Specifically, the researchers found that two DDR4 models made by Corsair contained this flaw.
In a statement, AMD officials wrote:
AMD believes exploiting the disclosed vulnerability requires an attacker either having physical access to the system, operating system kernel access on a system with unlocked memory modules, or installing a customized, malicious BIOS. AMD recommends utilizing memory modules that lock Serial Presence Detect (SPD), as well as following physical system security best practices. AMD has also released firmware updates to customers to mitigate the vulnerability.
Members of the research team are from KU Leuven, the University of Lübeck, and the University of Birmingham.
The researchers tested BadRAM against Intel’s SGX, a competing trusted execution technology from AMD’s much bigger rival that promises integrity assurances comparable to SEV-SNP’s. The classic, now-discontinued version of SGX did allow reading of protected regions, but not writing to them. The current Intel Scalable SGX and Intel TDX, however, allowed no reading or writing. Since a comparable Arm processor wasn’t available for testing, it’s unknown if it’s vulnerable.
Despite the lack of universality, the researchers warned that the design flaws underpinning the BadRAM vulnerability may creep into other systems, and that system designers should adopt the mitigations AMD has now put in place.
“Since our BadRAM primitive is generic, we argue that such countermeasures should be considered when designing a system against untrusted DRAM,” the researchers wrote in their paper. “While advanced hardware-level attacks could potentially circumvent the currently used countermeasures, further research is required to judge whether they can be carried out in an impactful attacker model.”
Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Contact him on Signal at DanArs.82.
On Thursday, AMD announced its new MI325X AI accelerator chip, which is set to roll out to data center customers in the fourth quarter of this year. At an event hosted in San Francisco, the company claimed the new chip offers “industry-leading” performance compared to Nvidia’s current H200 GPUs, which are widely used in data centers to power AI applications such as ChatGPT.
With its new chip, AMD hopes to narrow the performance gap with Nvidia in the AI processor market. The Santa Clara-based company also revealed plans for its next-generation MI350 chip, which is positioned as a head-to-head competitor of Nvidia’s new Blackwell system, with an expected shipping date in the second half of 2025.
In an interview with the Financial Times, AMD CEO Lisa Su expressed her ambition for AMD to become the “end-to-end” AI leader over the next decade. “This is the beginning, not the end of the AI race,” she told the publication.
According to AMD’s website, the announced MI325X accelerator contains 153 billion transistors and is built on the CDNA3 GPU architecture using TSMC’s 5 nm and 6 nm FinFET lithography processes. The chip includes 19,456 stream processors and 1,216 matrix cores spread across 304 compute units. With a peak engine clock of 2100 MHz, the MI325X delivers up to 2.61 PFLOPs of peak eight-bit precision (FP8) performance. For half-precision (FP16) operations, it reaches 1.3 PFLOPs.
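Those peak-throughput figures are consistent with each matrix core retiring 1,024 FP8 (or 512 FP16) operations per clock. Note that the per-clock figures below are a back-of-envelope inference from AMD's published totals, not an AMD-stated spec:

```python
# Back-of-envelope check of the MI325X peak figures. The per-clock ops per
# matrix core (1024 FP8 / 512 FP16) are inferred from the published totals,
# not an AMD-stated specification.
matrix_cores = 1216
boost_clock_hz = 2100e6  # 2100 MHz peak engine clock

fp8_pflops = matrix_cores * boost_clock_hz * 1024 / 1e15
fp16_pflops = matrix_cores * boost_clock_hz * 512 / 1e15
print(round(fp8_pflops, 2))   # ~2.61
print(round(fp16_pflops, 2))  # ~1.31
```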
Actually getting the GPU working required patching the Linux kernel to include the open-source AMDGPU driver, which includes Arm support and works reasonably well with the RX 460 (Geerling says the card and its Polaris architecture were chosen because they are new enough to be practically useful and supported by the AMDGPU driver, old enough that driver support is pretty mature, and because the card is cheap and uses PCIe 3.0). Nvidia’s GPUs generally aren’t really an option for projects like this because the open source drivers lag far behind the ones available for Radeon GPUs.
Once various kernel patches were applied and the kernel was recompiled, installing AMD’s graphics firmware got both graphics output and 3D acceleration working more or less normally.
Despite their age and relative graphical simplicity, Doom 3 and Tux Racer are a tall order for the Pi 5’s integrated GPU, even at 1080p. The RX 460 was able to run both at 4K, albeit with some settings reduced; Geerling also said that the card rendered the Pi operating system’s UI smoothly at 4K (the Pi’s integrated GPU does support 4K output, but things get framey quickly in our experience, especially when using multiple monitors).
Though a qualified success, anything this hacky is likely to have at least some software problems; Geerling noted that graphics acceleration in the Chromium browser and GPU-accelerated video encoding and decoding support weren’t working properly.
Most Pi owners aren’t going to want to run out and recreate this setup themselves, but it is interesting to see progress when it comes to using dedicated GPUs with Arm CPUs. So far, Arm chips across all major software ecosystems—including Windows, macOS, and Android—have mostly been restricted to using their own integrated GPUs. But if Arm processors are really going to compete with Intel’s and AMD’s in every PC market segment, we’ll eventually need to see better support for external graphics chips.
But that doesn’t make it a bad time to buy a PC, especially if you’re looking for some cost-efficient builds. Prices of CPUs and GPUs have both fallen a fair bit since we did our last build guide a year or so ago, which means all of our builds are either cheaper than they were before or we can squeeze out a little more performance than before at similar prices.
We have six builds across four broad tiers—a budget office desktop, a budget 1080p gaming PC, a mainstream 1440p-to-4K gaming PC, and a price-conscious workstation build with a powerful CPU and lots of room for future expandability.
You won’t find a high-end “god box” this time around, though; for a money-is-no-object high-end build, it’s probably worth waiting for Intel’s upcoming Arrow Lake desktop processors, AMD’s expected Ryzen 9000X3D series, and whatever Nvidia’s next-generation GPU launch is. All three of those things are expected either later this year or early next.
We have a couple of different iterations of the more expensive builds, and we also suggest multiple alternate components that can make more sense for certain types of builds based on your needs. The fun of PC building is how flexible and customizable it is—whether you want to buy what we recommend and put it together or want to treat these configurations as starting points, hopefully, they give you some idea of what your money can get you right now.
Notes on component selection
Part of the fun of building a PC is making it look the way you want. We’ve selected cases that will physically fit the motherboards and other parts we’re recommending and which we think will be good stylistic fits for each system. But there are many cases out there, and our picks aren’t the only options available.
As for power supplies, we’re looking for 80 Plus certified power supplies from established brands with positive user reviews on retail sites (or positive professional reviews, though these can be somewhat hard to come by for any given PSU these days). If you have a preferred brand, by all means, go with what works for you. The same goes for RAM—we’ll recommend capacities and speeds, and we’ll link to kits from brands that have worked well for us in the past, but that doesn’t mean they’re better than the many other RAM kits with equivalent specs.
For SSDs, we mostly stick to drives from known brands like Samsung, Crucial, or Western Digital, though going with a lesser-known brand can save you a bit of money. All of our builds also include built-in Bluetooth and Wi-Fi, so you don’t need to worry about running Ethernet wires and can easily connect to Bluetooth gamepads, keyboards, mice, headsets, and other accessories.
We also haven’t priced in peripherals, like webcams, monitors, keyboards, or mice, as we’re assuming most people will re-use what they already have or buy those components separately. If you’re feeling adventurous, you could even make your own DIY keyboard! If you need more guidance, Kimber Streams’ Wirecutter keyboard guides are exhaustive and educational.
Finally, we won’t be including the cost of a Windows license in our cost estimates. You can pay a lot of different prices for Windows: $139 for an official retail license from Microsoft, $120 for an “OEM” license for system builders, or anywhere between $15 and $40 for a product key from shady gray-market resale sites. Windows 10 keys will also work to activate Windows 11, though Microsoft relatively recently stopped letting old Windows 7 and Windows 8 keys activate new Windows 10 and 11 installs. You could even install Linux, given recent advancements to game compatibility layers!
But rather than make Ryzen owners wait for the 24H2 update to come out later this fall (or make them install a beta version of a major OS update), AMD and Microsoft have backported the scheduler improvements to Windows 11 23H2. Users of Ryzen 5000, 7000, and 9000 CPUs can install the KB5041587 update by going to Windows Update in Settings, selecting Advanced Options, and then Optional Updates.
“We expect the performance uplift to be very similar between 24H2 and 23H2 with KB5041587 installed,” an AMD representative told Ars.
In current versions of Windows 11 23H2, the CPU scheduler optimizations are only available using Windows’ built-in Administrator account. The update enables them for typical user accounts, too.
Older AMD CPUs benefit, too
AMD’s messaging has focused mainly on how the 24H2 update (and 23H2 with the KB5041587 update installed) improves Ryzen 9000 performance; across a handful of provided benchmarks, the company says speeds can improve by anything between zero and 13 percent over Windows 11 23H2. There are also benefits for users of CPUs that use the older Zen 4 (Ryzen 7000/8000G) and Zen 3 (Ryzen 5000) architectures, but AMD hasn’t been specific about how much either of these older architectures would improve.
The Hardware Unboxed YouTube channel has done some early game testing with the current builds of the 24H2 update, and there’s good news for Ryzen 7000 CPU owners and less good news for AMD. The channel found that, averaged across dozens of games, frame rates increased by about 10 percent for the Zen 4-based Ryzen 7 7700X. The Zen 5-based Ryzen 7 9700X improved more, as AMD said it would, but only by about 11 percent. At default settings, the 9700X is only 2 or 3 percent faster than the nearly 2-year-old 7700X in these games, whether you’re running the 24H2 update or not.
This early data suggests that both Ryzen 7000 and Ryzen 9000 owners will see at least a marginal benefit from upgrading to Windows 11 24H2, which is a nice thing to get for free with a software update. But there are caveats. Hardware Unboxed tested CPU performance strictly in games running at 1080p on a high-end Nvidia GeForce RTX 4090—one of the few scenarios in any modern gaming PC where your CPU might limit your performance before your GPU does. If you play at a higher resolution like 1440p or 4K, your GPU will usually go back to being the bottleneck, and CPU performance improvements won’t be as noticeable.
The update is also taking already-high frame rates and making them even higher; one game went from an average frame rate of 142 FPS to 158 FPS on the 7700X, and from 167 to 181 FPS on the 9700X, for example. Even side by side, it’s an increase that will be difficult for most people to see. Other kinds of workloads may benefit, too—AMD said that the Procyon Office benchmark ran about 6 percent faster under Windows 11 24H2—but we don’t have definitive data on real-world workloads yet.
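As a quick sanity check on those figures, the per-game uplifts can be computed directly from the frame rates quoted above (a minimal sketch; the function name is ours, and the numbers are the ones reported for that single game):

```python
def pct_uplift(before: float, after: float) -> float:
    """Percentage frame-rate improvement going from 'before' FPS to 'after' FPS."""
    return (after - before) / before * 100

# Frame rates from the single game cited above
print(round(pct_uplift(142, 158), 1))  # 7700X: 11.3 (percent)
print(round(pct_uplift(167, 181), 1))  # 9700X: 8.4 (percent)
```

Note that an identical 16 FPS gain represents a smaller percentage improvement on the faster chip, which is why percentage figures alone can obscure how visible a change actually is.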
We wouldn’t expect performance to improve much, if at all, in either heavily multi-threaded workloads where all the CPU cores are actively engaged at once or in exclusively single-threaded workloads that run continuously on a single core. AMD’s numbers for both single- and multi-threaded versions of the Cinebench benchmark, which simulates these kinds of workloads, were exactly the same in Windows 11 23H2 and 24H2 for Ryzen 9000.
Finally, it’s worth noting that the Ryzen 7 9700X was held back quite a bit by its new, lower 65 W TDP in our testing, compared to the 105 W TDP of the Ryzen 7 7700X. Both CPUs performed similarly in games Hardware Unboxed tested, both before and after the 24H2 update. But the 9700X is still the cooler and more efficient chip, and it’s capable of higher speeds if you either set its TDP to 105 W manually or use features like Precision Boost Overdrive to adjust its power limits. How both CPUs perform out of the box is important, but comparing the 9700X to the 7700X at stock settings is a worst-case scenario for Ryzen 9000’s generation-over-generation performance increases.
Windows 11 24H2: Coming soon but available now
Microsoft has disclosed a few details of the underpinnings of the 24H2 update, which looks the same as older Windows 11 releases but includes a new compiler, a new kernel, and a new scheduler under the hood. Microsoft talked about these specifically in the context of improving Arm CPU performance and the speed of translated x86 apps because it was gearing up to push Microsoft Surface devices and other Copilot+ PCs with new Qualcomm Snapdragon chips in them. Still, we’ll hopefully see some subtle benefits for other CPU architectures, too.
The 24H2 update is still technically a preview, available via Microsoft’s Windows Insider Release Preview channel. Users can either download it from Windows Update or grab it as an ISO file to make a USB installer for upgrading multiple systems. But Microsoft and PC OEMs have been shipping the 24H2 update on new Surface devices and other PCs for weeks now, and you shouldn’t have many problems with it in day-to-day use at this point. For those who would rather wait, the update should begin rolling out to the general public this fall.