Reviews

2025 iPad Air hands-on: Why mess with a good thing?

There’s not much new in Apple’s latest refresh of the iPad Air, so there’s not much to say about it, but it’s worth taking a brief look regardless.

In almost every way, this is identical to the previous generation. There are only two differences to go over: the bump from the M2 chip to the slightly faster M3, and a redesign of the Magic Keyboard peripheral.

If you want more details about this tablet, refer to our M2 iPad Air review from last year. Everything we said then applies now.

From M2 to M3

The M3 chip has an 8-core CPU with four performance cores and four efficiency cores. On the GPU side, there are nine cores. There’s also a 16-core Neural Engine, which is what Apple calls its NPU.

We’ve seen the M3 in other devices before, and it performs comparably here in the iPad Air in Geekbench benchmarks. Those coming from the M1 or older A-series chips will see some big gains, but it’s a subtle step up over the M2 in last year’s iPad Air.

That will be a noticeable boost primarily for a handful of particularly demanding 3D games (the likes of Assassin’s Creed Mirage, Resident Evil Village, Infinity Nikki, and Genshin Impact) and some heavy-duty applications only a few people use, like CAD or video editing programs.

Most of the iPad Air’s target audience would never know the difference, though, and the main benefit here isn’t necessarily real-world performance. Rather, the upside of this upgrade is the addition of a few specific features, namely hardware-accelerated ray tracing and hardware-accelerated AV1 video codec support.

This isn’t new, but this chip supports Apple Intelligence, the much-ballyhooed suite of generative AI features Apple recently introduced. At this point there aren’t many devices left in Apple’s lineup that don’t support Apple Intelligence (it’s basically just the cheapest, entry-level iPad that doesn’t have it) and that’s good news for Apple, as it helps the company simplify its marketing messaging around the features.

Ryzen 9 9950X3D review: Seriously fast, if a step backward in efficiency


Not a lot of people actually need this thing, but if you do, it’s very good.

AMD’s Ryzen 9 9950X3D. Credit: Andrew Cunningham

Even three years later, AMD’s high-end X3D-series processors still aren’t a thing that most people need to spend extra money on—under all but a handful of circumstances, your GPU will be the limiting factor when you’re running games, and few non-game apps benefit from the extra 64MB chunk of L3 cache that is the processors’ calling card. They’ve been a reasonably popular way for people with old AM4 motherboards to extend the life of their gaming PCs, but for AM5 builds, a regular Zen 4 or Zen 5 CPU will not bottleneck modern graphics cards most of the time.

But high-end PC building isn’t always about what’s rational, and people spending $2,000 or more to stick a GeForce RTX 5090 into their systems probably won’t worry that much about spending a couple hundred extra dollars to get the fastest CPU they can get. That’s the audience for the new Ryzen 9 9950X3D, a 16-core, Zen 5-based, $699 monster of a processor that AMD begins selling tomorrow.

If you’re only worried about game performance (and if you can find one), the Ryzen 7 9800X3D is the superior choice, for reasons that will become apparent once we start looking at charts. But if you want fast game performance and you need as many CPU cores as you can get for streaming, video production, or rendering work, the 9950X3D is there for you. (It’s a little funny to me that this is a chip made almost precisely for the workload of the PC building tech YouTubers who will be reviewing it.) It’s also a processor that Intel doesn’t have any kind of answer to.

Second-generation 3D V-Cache

Layering the 3D V-Cache under the CPU die has made most of the 9950X3D’s improvements possible. Credit: AMD

AMD says the 9000X3D chips use a “second-generation” version of its 3D V-Cache technology after using the same approach for the Ryzen 5000 and 7000 processors. The main difference is that, where the older chips stack the 64MB of extra L3 cache on top of the processor die, the 9000 series stacks the cache underneath, making it easier to cool the CPU silicon.

This makes the processors’ thermal characteristics much more like a typical Ryzen CPU without the 3D V-Cache. And because voltage and temperatures are less of a concern, the 9800X3D, 9900X3D, and 9950X3D all support the full range of overclocking and performance tuning tools that other Ryzen CPUs support.

The 12- and 16-core Ryzen X3D chips are built differently from the 8-core. As we’ve covered elsewhere, AMD’s Ryzen desktop processors are a combination of chiplets—up to two CPU core chiplets with up to eight CPU cores each and a separate I/O die that handles things like PCI Express and USB support. In the 9800X3D, you just have one CPU chiplet, and the 64MB of 3D V-Cache is stacked underneath. For the 9900X3D and 9950X3D, you get one 8-core CPU die with V-Cache underneath and then one other CPU die with 4 or 8 cores enabled and no extra cache.

AMD’s driver software is responsible for deciding what apps get run on which CPU cores. Credit: AMD

It’s up to AMD’s chipset software to decide what kinds of apps get to run on each kind of CPU core. Non-gaming workloads prioritize the normal CPU cores, which are generally capable of slightly higher peak clock speeds, while games that benefit disproportionately from the extra cache are run on those cores instead. AMD’s software can “park” the non-V-Cache CPU cores when you’re playing games to ensure they’re not accidentally being run on less-suitable CPU cores.
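AMD’s actual scheduling logic lives in its chipset driver and in Windows itself, so it isn’t something you can reimplement from the outside. But if you want to watch the parking behavior for yourself, a rough observation sketch like the one below (Python with the third-party psutil package; the CCD-to-core mapping is an assumption and varies by board and BIOS) can show whether one chiplet’s cores sit idle while a game runs.

```python
# Rough observation sketch (not AMD's scheduler): sample per-core load while a
# game runs to see whether one chiplet's cores are sitting idle or parked.
# Assumes a 16-core/32-thread part where logical CPUs 0-15 map to the first CCD
# (the V-Cache die in this hypothetical layout) and 16-31 to the second; the
# real mapping depends on the board and BIOS, so verify it before trusting this.
import psutil  # third-party: pip install psutil

def sample_ccd_load(samples: int = 10, interval: float = 1.0) -> None:
    for _ in range(samples):
        per_cpu = psutil.cpu_percent(interval=interval, percpu=True)
        half = len(per_cpu) // 2
        ccd0 = sum(per_cpu[:half]) / half   # assumed V-Cache CCD
        ccd1 = sum(per_cpu[half:]) / half   # assumed frequency-optimized CCD
        print(f"CCD0 avg {ccd0:5.1f}%  |  CCD1 avg {ccd1:5.1f}%")

if __name__ == "__main__":
    sample_ccd_load()
```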

This technology will work the same basic way for the 9950X3D as it did for the older 7950X3D, but AMD has made some tweaks. Updates to the chipset driver mean that you can swap your current processor out for an X3D model without needing to totally reinstall Windows to get things working, for example, which was AMD’s previous recommendation for the 7000 series. Another update will improve performance for Windows 10 systems with virtualization-based security (VBS) enabled, though if you’re still on Windows 10, you should be considering an upgrade to Windows 11 so you can keep getting security updates past October.

And for situations where AMD’s drivers can’t automatically send the right workloads to the right kinds of cores, AMD also maintains a compatibility database of applications that need special treatment to take advantage of the 3D V-Cache in the 9900X3D and 9950X3D. AMD says it has added a handful of games to that list for the 9900/9950X3D launch, including Far Cry 6, Deus Ex: Mankind Divided, and a couple of Total War games, among others.

Testbed notes

Common elements to all the platforms we test in our CPU testbed include a Lian Li O11 Air Mini case with an EVGA-provided Supernova 850 P6 power supply and a 280 mm Corsair iCue H115i Elite Capellix AIO cooler.

Since our last CPU review, we’ve done a bit of testbed updating to make sure that we’re accounting for a bunch of changes and turmoil on both Intel’s and AMD’s sides of the fence.

For starters, we’re running Windows 11 24H2 on all systems now, which AMD has said should marginally improve performance for architectures going all the way back to Zen 3 (on the desktop, the Ryzen 5000 series). The company made this revelation after early reviewers of the Ryzen 9000 series couldn’t re-create the oddball conditions of AMD’s own internal test setups.

As for Intel, the new testing incorporates fixes for the voltage spiking, processor-destroying bugs that affected 13th- and 14th-generation Core processors, issues that Intel fixed in phases throughout 2024. For the latest Core Ultra 200-series desktop CPUs, it also includes performance fixes Intel introduced in BIOS updates and drivers late last year and early this year. (You might have noticed that we didn’t run reviews of the 9800X3D or the Core Ultra 200 series at the time; all of this re-testing of multiple generations of CPUs was part of the reason why).

All of this is to say that any numbers you’re seeing in this review represent recent testing with newer Windows updates, BIOS updates, and drivers all installed.

One part of the testbed that isn’t top of the line at the moment is the GPU: we’re now using a GeForce RTX 4090 in place of the Radeon RX 7900 XTX we used before.

The RTX 50 series was several months away from being announced when we began collecting updated test data, and we opted to keep the GPU the same for our 9950X3D testing so that we’d have a larger corpus of data to compare the chip to. The RTX 4090 is still, by a considerable margin, the second-fastest consumer GPU that exists right now. But at some point, when we’re ready to do yet another round of totally-from-scratch retesting, we’ll likely swap a 5090 in just to be sure we’re not bottlenecking the processor.

Performance and power: Benefits with fewer drawbacks

The 9950X3D has the second-highest CPU scores in our gaming benchmarks, and it’s behind the 9800X3D by only a handful of frames. This is one of the things we meant when we said that the 9800X3D was the better choice if you’re only worried about game performance. The same dynamic plays out between other 8- and 16-core Ryzen chips—higher power consumption and heat in the high-core-count chips usually bring game performance down just a bit despite the nominally higher boost clocks.

You’ll also pay for it in power consumption, at least at each chip’s default settings. On average, the 9950X3D uses 40 or 50 percent more power during our gaming benchmarks than the 9800X3D running the same benchmarks, even though it’s not capable of running them quite as quickly. But it’s similar to the power use of the regular 9950X, which is quite a bit slower in these gaming benchmarks, even if it does have broadly similar performance in most non-gaming benchmarks.

What’s impressive is what you see when you compare the 9950X3D to its immediate predecessor, the 7950X3D. The 9950X3D isn’t dramatically faster in games, reflecting Zen 5’s modest performance improvement over Zen 4. But the 9950X3D is a lot faster in our general-purpose benchmarks and other non-gaming CPU benchmarks because the changes to how the X3D chips are packaged have helped AMD keep clock speeds, voltages, and power limits pretty close to the same as they are for the regular 9950X.

In short, the 7950X3D gave up a fair bit of performance relative to the 7950X because of compromises needed to support 3D V-Cache. The 9950X3D doesn’t ask you to make the same compromises.

Testing the 9950X3D in its 105 W Eco Mode.

That comes with both upsides and downsides. For example, the 9950X3D looks a lot less power-efficient under load in our Handbrake video encoding test than the 7950X3D because it is using the same amount of power as a normal Ryzen processor. But that’s the other “normal” thing about the 9950X3D—the ability to manually tune those power settings and boost your efficiency if you’re OK with giving up a little performance. It’s not an either/or thing. And at least in our testing, games run just as fast when you set the 9950X3D to use the 105 W Eco Mode instead of the 170 W default TDP.
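To make that tuning trade-off concrete, here’s a small back-of-envelope sketch in Python. The wattages are the configured TDP presets mentioned above; the encode times are placeholders rather than our measured results, and the sketch assumes the chip runs near its configured limit for the whole job.

```python
# Back-of-envelope efficiency comparison for a fixed workload (e.g. a long video
# encode). Wattages are the configured TDP presets; the encode times are
# placeholders, not measured results, and the sketch assumes the chip runs near
# its configured limit for the whole job.
def job_energy_wh(power_watts: float, seconds: float) -> float:
    """Energy in watt-hours for a job running `seconds` at `power_watts`."""
    return power_watts * seconds / 3600.0

presets = {
    "170 W default":  {"tdp_w": 170, "encode_s": 600},  # hypothetical 10-minute encode
    "105 W Eco Mode": {"tdp_w": 105, "encode_s": 660},  # hypothetical ~10% slower
}

for name, cfg in presets.items():
    wh = job_energy_wh(cfg["tdp_w"], cfg["encode_s"])
    print(f"{name}: {cfg['encode_s']} s, ~{wh:.1f} Wh")
```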

As for Intel, it just doesn’t have an answer for the X3D series. The Core Ultra 9 285K is perfectly competitive in our general-purpose CPU benchmarks and efficiency, but the Arrow Lake desktop chips struggle to compete with 14th-generation Core and Ryzen 7000 processors in gaming benchmarks, to say nothing of the Ryzen 9000 and to say even less than nothing of the 9800X3D or 9950X3D. That AMD has closed the gap between the 9950X and 9950X3D’s performance in our general-purpose CPU benchmarks means it’s hard to make an argument for Intel here.

The 9950X3D stands alone

I’m not and have never been the target audience for either the 16-core Ryzen processors or the X3D-series processors. When I’m building for myself (and when I’m recommending mainstream builds for our Ars System Guides), I’m normally an advocate for buying the most CPU you can for $200 or $300 and spending more money on a GPU.

But for the game-playing YouTubing content creators who are the 9950X3D’s intended audience, it’s definitely an impressive chip. Games can hit gobsmackingly high frame rates at lower resolutions when paired with a top-tier GPU, trailing only (and just barely) AMD’s own 9800X3D. At the same time, it’s just as good at general-use CPU-intensive tasks as the regular 9950X, fixing a trade-off that had been part of the X3D series since the beginning. AMD has also removed the limits it had in place on overclocking and adjusting power limits for the X3D processors in the 5000 and 7000 series.

So yes, it’s expensive, and no, most people probably don’t need the specific benefits it provides. It’s also possible that you’ll find edge cases where AMD’s technology for parking cores and sending the right kinds of work to the right CPU cores doesn’t work the way it should. But for people who do need or want ultra-high frame rates at lower resolutions or who have some other oddball workloads that benefit from the extra cache, the 9950X3D gives you all of the upsides with no discernible downsides other than cost. And, hey, even at $699, current-generation GPU prices almost make it look like a bargain.

The good

  • Excellent combination of the 9800X3D’s gaming performance and the 9950X’s general-purpose CPU performance
  • AMD has removed limitations on overclocking and power limit tweaking
  • Pretty much no competition from Intel for the specific kind of person the 9950X3D will appeal to

The bad

  • Niche CPUs that most people really don’t need to buy
  • Less power-efficient out of the box than the 7950X3D, though users have latitude to tune efficiency manually if they want
  • AMD’s software has sometimes had problems assigning the right kinds of apps to the right kinds of CPU cores, though we didn’t have issues with this during our testing

The ugly

  • Expensive

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

M4 Max and M3 Ultra Mac Studio Review: A weird update, but it mostly works

Comparing the M4 Max and M3 Ultra to high-end PC desktop processors.

As for the Intel and AMD comparisons, both companies’ best high-end desktop CPUs like the Ryzen 9 9950X and Core Ultra 285K are often competitive with the M4 Max’s multi-core performance, but are dramatically less power-efficient at their default settings.

Mac Studio or M4 Pro Mac mini?

The Mac Studio (bottom) and redesigned M4 Mac mini. Credit: Andrew Cunningham

Ever since Apple beefed up the Mac mini with Pro-tier chips, there’s been a pricing overlap around and just over $2,000 where the mini and the Studio are both compelling.

A $2,000 Mac mini comes with a fully enabled M4 Pro processor (14 CPU cores, 20 GPU cores), 512GB of storage, and 48GB of RAM, with 64GB of RAM available for another $200 and 10 gigabit Ethernet available for another $100. RAM is the high-end Mac mini’s main advantage over the Studio—the $1,999 Studio comes with a slightly cut-down M4 Max (also 14 CPU cores, but 32 GPU cores), 512GB of storage, and just 36GB of RAM.

In general, if you’re spending $2,000 on a Mac desktop, I would lean toward the Studio rather than the mini. You’re getting roughly the same CPU but a much faster GPU and more ports. You get less RAM, but depending on what you’re doing, there’s a good chance that 36GB is more than enough.

The only place where the mini is clearly better than the Studio once you’re above $2,000 is memory. If you want 64GB of RAM in your Mac, you can get it in the Mac mini for $2,200. The cheapest Mac Studio with 64GB of RAM also requires a processor upgrade, bringing the total cost to $2,700. If you need memory more than you need raw performance, or if you just need something that’s as small as it can possibly be, that’s when the high-end mini can still make sense.

A lot of power—if you need it

Apple’s M4 Max Mac Studio. Credit: Andrew Cunningham

Obviously, Apple’s hermetically sealed desktop computers have some downsides compared to a gaming or workstation PC, most notably that you need to throw out and replace the whole thing any time you want to upgrade literally any component.

Better than the real thing? Spark 2 packs 39 amp sims into $300 Bluetooth speaker


Digital amp modeling goes very, very portable.

The Spark 2 from Positive Grid looks like a miniature old-school amp, but it is, essentially, a computer with some knobs and a speaker. It has Bluetooth, USB-C, and an associated smartphone app. It needs firmware updates, which can brick the device—ask me how I found this out—and it runs code on DSP chips. New guitar tones can be downloaded into the device, where they run as software rather than as analog electrical circuits in an amp or foot pedal.

In other words, the Spark 2 is the latest example of the “software-ization” of music.

Forget the old image of a studio filled with a million-dollar, 48-track mixing board from SSL or API and bursting with analog amps, vintage mics, and ginormous plate reverbs. Studios today are far more likely to be digital, where people record “in the box” (i.e., they track and mix on a computer running software like Pro Tools or Logic Pro) using digital models of classic (and expensive) amplifiers, coded by companies like NeuralDSP and IK Multimedia. These modeled amp sounds are then run through convolution software that relies on digital impulse responses captured from different speakers and speaker cabinets. They are modified with effects like chorus and distortion, which are all modeled, too. The results can be world-class, and they’re increasingly showing up on records.

Once the sounds are recorded, a mixer will often use digital plugins to replicate studio gear like tape delays, FET compressors, and reverbs (which may be completely algorithmic or may rely on impulse responses captured from real halls, studios, plates, and spring reverbs). These days, even the microphones might be digitally modeled by companies like Slate, Antelope, and Universal Audio.
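That cabinet-and-room modeling step is, at its core, just convolution of a dry signal with a captured impulse response. Here’s a minimal sketch of the idea in Python, assuming you already have a dry guitar DI take and a speaker-cab IR as WAV files; the filenames are hypothetical placeholders.

```python
# Minimal cabinet-IR convolution sketch: convolve a dry guitar DI track with a
# captured speaker-cabinet impulse response and write the result.
# The filenames are hypothetical placeholders.
import numpy as np
import soundfile as sf                 # pip install soundfile
from scipy.signal import fftconvolve   # pip install scipy

dry, sr = sf.read("guitar_di.wav")        # dry direct-input guitar take
ir, ir_sr = sf.read("cabinet_ir.wav")     # captured speaker-cab impulse response
assert sr == ir_sr, "resample the IR to the DI's sample rate first"

# Keep things mono for simplicity; take the first channel of stereo files.
if dry.ndim > 1:
    dry = dry[:, 0]
if ir.ndim > 1:
    ir = ir[:, 0]

wet = fftconvolve(dry, ir, mode="full")   # the actual "cab sim" step
wet /= np.max(np.abs(wet)) + 1e-12        # normalize to avoid clipping

sf.write("guitar_through_cab.wav", wet, sr)
```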

This has put incredible power into the hands of home musicians; for a couple of thousand bucks, most home studios can own models of gear that would have cost more than a house 20 years ago. But one downside of this shift to software is that all the annoying quirks of computing devices have followed.

Want to rock out to the classic Marshall tones found in Universal Audio’s “Lion” amp simulator plugin? Just plug your guitar into your audio interface, connect the interface to a computer via USB, launch a DAW, instantiate the plugin on a blank track, choose the correct input, activate input monitoring so you can hear the results of your jamming, and adjust your DAW’s buffer size to something small in an attempt to prevent latency. A problem with any item on that list means “no jamming for you.”

You may be prompted to update the firmware in your audio interface, or to update your operating system, or to update your DAW—or even its plugins. Oh, and did I mention that Universal Audio uses the truly terrible iLok DRM system and that if your Wi-Fi drops for even a few minutes, the plugins will deactivate? Also, you’ll need to run a constant companion app in the background called UA Connect, which itself can be prone to problems.

Assuming everything is up to date and working, you’re still tethered to your computer by a cable, and you have to make all your settings tweaks with a mouse. After a day of working on computers, this is not quite how I want to spend my “music time.”

But the upsides of digital modeling are just too compelling to return to the old, appliance-like analog gear. For one thing, the analog stuff is expensive. The Lion amp plugin mentioned above gives you not one but several versions of a high-quality Marshall head unit—each one costing thousands of dollars—but you don’t need to lift it (they’re heavy!), mic it (annoying!), or play it at absurdly low levels because your baby is sleeping upstairs. For under a hundred bucks, you can get that sound of an overdriven Marshall turned up to 75 percent and played through several different speaker cabinet options (each of these is also expensive!) right on your machine.

Or consider the Tone King Imperial Mk II, a $2,700, Fender-style amp built in the US. It sounds great. But NeuralDSP offers a stunning digital model for a hundred bucks—and it comes with compressor, overdrive, delay, and reverb pedals, to say nothing of a tuner, a doubler, a pitch-shifter, and a ton of great presets.

So I want the digital amp modeling, but I also want—sometimes, at least—the tactile simplicity of physical knobs and well-built hardware. Or I want to jack in and play without waking up a computer, logging in, launching apps, or using a mouse and an audio interface. Or I want to take my amp models to places where finicky computers aren’t always welcome, like the stage of a club.

Thanks to hardware like the Profiler from Kemper, the Helix gear from Line6, the Cortex pedalboards from NeuralDSP, or Tonex gear from IK Multimedia, this is increasingly common.

The Spark line from Positive Grid has carved out its own niche in this world by offering well-built little amps that run Positive Grid’s digital amp and effects simulations. (If you don’t want the hardware, the company sells its modeling software for PC and Mac under the “Bias” label.)

The Spark 2 is the latest in this line, and I’ve been putting it through its paces over the last couple of months.

Let’s cut right to the conclusion: The Spark 2 is a well-designed, well-built piece of gear. For $300, you get a portable, 50-watt practice amp and Bluetooth speaker that can store eight guitar tones onboard and download thousands more using a smartphone app. Its models aren’t, to my ears, the most realistic out there, but if you want a device to jack into and jam, to play along with backing tracks or loops, or to record some creative ideas, this fits the bill.

Photo of Spark 2.

Credit: Positive Grid

Good practice

Everything about the Spark 2 feels well-built. The unit is surprisingly solid, and it comes with a carrying strap for portability. If you want to truly live the wire-free lifestyle, you can buy a battery pack for $79 that gives you several hours of juice.

For a practice amp, the Spark 2 is also well-connected. It has Bluetooth for streaming audio—but it also has a 3.5 mm aux in jack. It has decent, if somewhat boxy-sounding, speakers, and they get quite loud—but it also has two quarter-inch line out jacks. It has a guitar input jack and a headphone jack. It can use a power supply or a battery. It can connect to a computer via USB, and you can even record that way if you don’t have another audio interface.

Most of the unit’s top is taken up with chunky knobs. These let you select one of the eight onboard presets or adjust model parameters like gain, EQ, modulation, delay, and reverb. There’s also a knob for blending your guitar audio with music played through the device.

Buttons provide basic access to a tuner and a looper, though the associated app unlocks more complex options.

So about that app. You don’t need it just to use the Spark 2, but you’ll need the app if you want to download or create new tones from the many pieces of modeled gear. Options here go far beyond what’s possible with the knobs atop the physical unit.

Spark models a chamber reverb, for instance, which is basically a reflective room into which a speaker plays sound that a microphone picks up. The Spark chamber lets you adjust the volume level of the reverb signal, the reflection time of the chamber, the “dwell” time of the sound in the room, the amount of sound damping, and whether the sound will have some of its lows or highs cut off. (This is common in reverbs to avoid excessive low-end “mud” or top-end “brightness” building up in the reverberating signal.) You’ll need the app to adjust most of these options; the “reverb” control on the Spark 2 simply changes the level.
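To make those controls concrete, here’s a toy sketch (emphatically not Positive Grid’s algorithm) of how the knobs described above map onto a bare-bones feedback-delay reverb: “reflection time” sets the delay length, “dwell” sets how much signal is fed back into the chamber, “damping” is a low-pass filter inside the feedback loop, and the low/high cut filters trim the signal before it enters.

```python
# Toy "chamber" reverb sketch (not Positive Grid's implementation): one feedback
# delay line with damping and pre-filtering, just to show what the knobs
# described above actually control.
import numpy as np

def toy_chamber(x, sr, reflection_ms=40.0, dwell=0.7, damping=0.4,
                low_cut_hz=150.0, high_cut_hz=6000.0, level=0.3):
    def one_pole_lowpass(sig, cutoff_hz):
        a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
        out, state = np.empty_like(sig), 0.0
        for i, s in enumerate(sig):
            state = (1.0 - a) * s + a * state
            out[i] = state
        return out

    # Trim highs and lows before the signal enters the "chamber."
    pre = one_pole_lowpass(x, high_cut_hz)            # high cut
    pre = pre - one_pole_lowpass(pre, low_cut_hz)     # crude low cut

    delay = max(1, int(sr * reflection_ms / 1000.0))  # "reflection time"
    buf = np.zeros(delay)
    wet = np.empty_like(x)
    damp_state = 0.0
    for i, s in enumerate(pre):
        echoed = buf[i % delay]                       # sound returning from the chamber
        damp_state = (1.0 - damping) * echoed + damping * damp_state  # "damping"
        buf[i % delay] = s + dwell * damp_state       # "dwell" = feedback amount
        wet[i] = echoed
    return x + level * wet                            # "level" = wet mix

# Example: a short noise burst through the toy chamber at 48 kHz.
sr = 48000
burst = np.concatenate([np.random.randn(sr // 10), np.zeros(sr)])
processed = toy_chamber(burst, sr)
```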

There’s a fair bit of modeled gear on offer: one noise gate, six compressors, 14 drive pedals, 39 amps, 13 EQ units, six delays, and nine reverbs. Most of these have numerous options. It is not nearly as overwhelming as a package like Amplitube for PCs and Macs, but it’s still a lot of stuff.

To run it all, Positive Grid has beefed up the computational power of the Spark series. The company told me that digital signal processing power has doubled since the original Spark lineup, which allows for “smoother transitions between tones, richer effects, and an expanded memory for presets and loops.” The system runs on an M7 chip “developed specifically for expanded processing power and precise tone reproduction,” and the extra power has allowed Positive Grid to run more complex models on-device, improving their preamp and amplifier sag modeling.

Despite the DSP increase, the results here just don’t compare with the sort of scary-precise tube amp and effects simulations you can run on a computer or a far more expensive hardware modeling rig. I could never get clean and “edge of breakup” tones to sound anything other than artificial, though some of the distortion sounds were quite good. Reverbs and delays also sounded solid.

But the Spark 2 wasn’t really designed for studio-quality recording, and Positive Grid is candid about this. The models running on the Spark 2 are inspired by the company’s computer work, but they are “optimized for an all-in-one, mobile-friendly playing experience,” I was told. The Spark 2 is meant for “practice, jamming, and basic recording,” and those looking for “studio-level control and complex setups” should seek out something else.

This tracks with my experience. Compared to a regular amp, the Spark 2 is crazy portable. When testing the unit, I would haul it between rooms without a second thought, searching for a place to play that wouldn’t annoy some member of my family. (Headphones? Never!) Thanks to the optional battery, I didn’t even need to plug it in. It was a simple, fun way to get some electric guitar practice in without using a screen or a computer, and its sound could fill an entire room. Compared to the weight and hassle of moving a “real” amp, this felt easy.

About that app

I’ve been talking about the Spark 2 and its screen-free experience, but of course you do need to use the app to unlock more advanced features and download new tones onto the hardware. So how good is the software?

For modifying the gear in your presets, the app works fine. Every piece of gear has a nice picture, and you just flick up or down to get a piece of equipment into or out of the effects chain. Changing parameters is simple, with large numbers popping up on screen whenever you touch a virtual control, and you can draw from a huge library of pre-made effect chains.

The app also features plenty of backing music that it can play over the Spark 2. This includes backing tracks, tabbed songs, and the “groove looper,” giving you plenty of options to work on your soloing, but it’s the artificial intelligence that Positive Grid is really pitching this time around.

You are legally required to shoehorn “AI” into every product launch now, and Positive Grid put its AI tools into the app. These include Smart Jam, which tries to adapt to your playing and accompany it in real time. The company tells me that Smart Jam was “trained on a combination of musical datasets that analyze chord structures, song patterns, and rhythmic elements,” but I could never get great results from it. Because the system doesn’t know what you’re going to play in advance, there was always a herky-jerky quality as it tried to adapt its backing track to my changing performance.

I had more success with Spark AI, which is a natural language tone-shaping engine. You tell the system what you’re looking for—the solo in “Stairway to Heaven,” perhaps—and it returns several presets meant to approximate that sound. It does work, I’ll say that. The system reliably gave me tone options that were, with a little imagination, identifiable as “in the ballpark” of what I asked for.

Perhaps the main barrier here is simply that the current Spark amp models aren’t always powerful enough to truly copy the sounds you might be looking for. Spark AI is a great way to pull up a tone that’s appropriate for whatever song you might be practicing, and to do so without forcing you to build it yourself out of pieces of virtual gear. In that sense, it’s a nice practice aid.

Rock on

As it’s pitched—a practice amp and Bluetooth speaker that costs $300—Spark 2 succeeds. It’s such a well-built and designed unit that I enjoyed using it every time I played, even if the tones couldn’t match a real tube amp or even top-quality models. And the portability was more useful than expected, even when just using it around the house.

As DSP chips grow ever more powerful, I’m looking forward to where modeling can take us. For recording purposes, some of the best models will continue to run on powerful personal computers. But for those looking to jam, or to play shows, or to haul a guitar to the beach for an afternoon, hardware products running modeling software offer incredible possibilities already—and they will “spark” even more creativity in the years to come.

Photo of Nate Anderson

iPhone 16e review: The most expensive cheap iPhone yet


The iPhone 16e rethinks—and prices up—the basic iPhone.

The iPhone 16e, with a notch and an Action Button. Credit: Samuel Axon

For a long time, the cheapest iPhones were basically just iPhones that were older than the current flagship, but last week’s release of the $600 iPhone 16e marks a big change in how Apple is approaching its lineup.

Rather than a repackaging of an old iPhone, the 16e is the latest main iPhone—that is, the iPhone 16—with a bunch of stuff stripped away.

There are several potential advantages to this change. In theory, it allows Apple to support its lower-end offerings for longer with software updates, and it gives entry-level buyers access to more current technologies and features. It also simplifies the marketplace of accessories and the like.

There’s bad news, too, though: Since it replaces the much cheaper iPhone SE in Apple’s lineup, the iPhone 16e significantly raises the financial barrier to entry for iOS (the SE started at $430).

We spent a few days trying out the 16e and found that it’s a good phone—it’s just too bad it’s a little more expensive than the entry-level iPhone should ideally be. In many ways, this phone solves more problems for Apple than it does for consumers. Let’s explore why.

Table of Contents

A beastly processor for an entry-level phone

Like the 16, the 16e has Apple’s A18 chip, the most recent in the made-for-iPhone line of Apple-designed chips. There’s only one notable difference: This variation of the A18 has just four GPU cores instead of five. That will show up in benchmarks and in a handful of 3D games, but it shouldn’t make too much of a difference for most people.

It’s a significant step up over the A15 found in the final 2022 refresh of the iPhone SE, enabling a handful of new features like AAA games and Apple Intelligence.

The A18’s inclusion is good for both Apple and the consumer; Apple gets to establish a new, higher baseline of performance when developing new features for current and future handsets, and consumers likely get many more years of software updates than they’d get on the older chip.

The key example of a feature enabled by the A18 that Apple would probably like us all to talk about the most is Apple Intelligence, a suite of features utilizing generative AI to solve some user problems or enable new capabilities across iOS. By enabling these for the cheapest iPhone, Apple is making its messaging around Apple Intelligence a lot easier; it no longer needs to put effort into clarifying that you can use X feature with this new iPhone but not that one.

We’ve written a lot about Apple Intelligence already, but here’s the gist: There are some useful features here in theory, but Apple’s models are clearly a bit behind the cutting edge, and results for things like notification summaries or writing tools are pretty mixed. It’s fun to generate original emojis, though!

The iPhone 16e can even use Visual Intelligence, which actually is handy sometimes. On my iPhone 16 Pro Max, I can point the rear camera at an object and press the camera button a certain way to get information about it.

I wouldn’t have expected the 16e to support this, but it does, via the Action Button (which was first introduced in the iPhone 15 Pro). This is a reprogrammable button that can perform a variety of functions, albeit just one at a time. Visual Intelligence is one of the options here, which is pretty cool, even though it’s not essential.

The screen is the biggest upgrade over the SE

Also like the 16, the 16e has a 6.1-inch display. The resolution’s a bit different, though; it’s 2,532 by 1,170 pixels instead of 2,556 by 1,179. It also has a notch instead of the Dynamic Island seen in the 16. All this makes the iPhone 16e’s display seem like a very close match to the one seen in 2022’s iPhone 14—in fact, it might literally be the same display.

I really missed the Dynamic Island while using the iPhone 16e—it’s one of my favorite new features added to the iPhone in recent years, as it consolidates what was previously a mess of notification schemes in iOS. Plus, it’s nice to see things like Uber and DoorDash ETAs and sports scores at a glance.

The main problem with losing the Dynamic Island is that we’re back to the old minor mess of notifications approaches, and I guess Apple has to keep supporting the old ways for a while yet. That genuinely surprises me; I would have thought Apple would want to unify notifications and activities with the Dynamic Island just like the A18 allows the standardization of other features.

This seems to indicate that the Dynamic Island is a fair bit more expensive to include than the good old camera notch flagship iPhones had been rocking since 2017’s iPhone X.

That compromise aside, the display on the iPhone 16e is ridiculously good for a phone at this price point, and it makes the old iPhone SE’s small LCD display look like it’s from another eon entirely by comparison. It gets brighter for both HDR content and sunny-day operation; the blacks are inky and deep, and the contrast and colors are outstanding.

It’s the best thing about the iPhone 16e, even if it isn’t quite as refined as the screens in Apple’s current flagships. Most people would never notice the difference between the screens in the 16e and the iPhone 16 Pro, though.

There is one other screen feature I miss from the higher-end iPhones you can buy in 2025: Those phones can drop the display all the way down to 1 nit, which is awesome for using the phone late at night in bed without disturbing a sleeping partner. Like earlier iPhones, the 16e can only get so dark.

It gets quite bright, though; Apple claims it typically reaches 800 nits in peak brightness but that it can stretch to 1200 when viewing certain HDR photos and videos. That means it gets about twice as bright as the SE did.

Connectivity is key

The iPhone 16e supports the core suite of connectivity options found in modern phones. There’s Wi-Fi 6, Bluetooth 5.3, and Apple’s usual limited implementation of NFC.

There are three new things of note here, though, and they’re good, neutral, and bad, respectively.

USB-C

Let’s start with the good. We’ve moved from Apple’s proprietary Lightning port found in older iPhones (including the final iPhone SE) toward USB-C, now a near-universal standard on mobile devices. It allows faster charging and more standardized charging cable support.

Sure, it’s a bummer to start over if you’ve spent years buying Lightning accessories, but it’s absolutely worth it in the long run. This change means that the entire iPhone line has now abandoned Lightning, so all iPhones and Android phones will have the same main port for years to come. Finally!

The finality of this shift solves a few problems for Apple: It greatly simplifies the accessory landscape and allows the company to move toward producing a smaller range of cables.

Satellite connectivity

Recent flagship iPhones have gradually added a small suite of features that utilize satellite connectivity to make life a little easier and safer.

Among those are crash detection and roadside assistance. The former will use the sensors in the phone to detect if you’ve been in a car crash and contact help, and roadside assistance allows you to text for help when you’re outside of cellular reception in the US and UK.

There are also Emergency SOS and Find My via satellite, which let you communicate with emergency responders from remote places and allow you to be found.

Along with a more general feature that allows Messages via satellite, these features can greatly expand your options if you’re somewhere remote, though they’re not as easy to use and responsive as using the regular cellular network.

Where’s MagSafe?

I don’t expect the 16e to have all the same features as the 16, which is $200 more expensive. In fact, it has more modern features than I think most of its target audience needs (more on that later). That said, there’s one notable omission that makes no sense to me at all.

The 16e does not support MagSafe, a standard for connecting accessories to the back of the device magnetically, often while allowing wireless charging via the Qi standard.

Qi wireless charging is still supported, albeit at a slow 7.5 W, but there are no magnets, meaning a lot of existing MagSafe accessories are a lot less useful with this phone, if they’re usable at all. To be fair, the SE didn’t support MagSafe either, but every new iPhone design since the iPhone 12 way back in 2020 has—and not just the premium flagships.

It’s not like the MagSafe accessory ecosystem was some bottomless well of innovation, but that magnetic alignment is handier than you might think, whether we’re talking about making sure the phone locks into place for the fastest wireless charging speeds or hanging the phone on a car dashboard to use GPS on the go.

It’s one of those things where folks coming from much older iPhones may not care because they don’t know what they’re missing, but it could be annoying in households with multiple generations of iPhones, and it just doesn’t make any sense.

Most of Apple’s choices in the 16e seem to serve the goal of unifying the whole iPhone lineup to simplify the message for consumers and make things easier for Apple to manage efficiently, but the dropping of MagSafe is bizarre.

It almost makes me think that Apple might plan to drop MagSafe from future flagship iPhones, too, and go toward something new, just because that’s the only explanation I can think of. That otherwise seems unlikely to me right now, but I guess we’ll see.

The first Apple-designed cellular modem

We’ve been seeing rumors that Apple planned to drop third-party modems from companies like Qualcomm for years. As far back as 2018, Apple was poaching Qualcomm employees in an adjacent office in San Diego. In 2020, Apple SVP Johny Srouji announced to employees that work had begun.

It sounds like development has been challenging, but the first Apple-designed modem has arrived here in the 16e of all places. Dubbed the C1, it’s… perfectly adequate. It’s about as fast or maybe just a smidge slower than what you get in the flagship phones, but almost no user would notice any difference at all.

That’s really a win for Apple, which has struggled with a tumultuous relationship with its partners here for years and which has long run into space problems in its phones in part because the third-party modems weren’t compact enough.

This change may not matter much for the consumer beyond freeing up just a tiny bit of space for a slightly larger battery, but it’s another step in Apple’s long journey to ultimately and fully control every component in the iPhone that it possibly can.

Bigger is better for batteries

There is one area where the 16e is actually superior to the 16, much less the SE: battery life. The 16e reportedly has a 3,961 mAh battery, the largest in any of the many iPhones with roughly this size screen. Apple says it offers up to 26 hours of video playback, which is the kind of number you expect to see in a much larger flagship phone.

I charged this phone three times in just under a week of using it, though I wasn’t heavily hitting 5G networks, playing many 3D games, or cranking the brightness way up all the time.

That’s a bit of a bump over the 16, but it’s a massive leap over the SE, which promised a measly 15 hours of video playback. Every single phone in Apple’s lineup now has excellent battery life by any standard.

Quality over quantity in the camera system

The 16e’s camera system leaves the SE in the dust, but it’s no match for the robust system found in the iPhone 16. Regardless, it’s way better than you’d typically expect from a phone at this price.

Like the 16, the 16e has a 48 MP “Fusion” wide-angle rear camera. It typically doesn’t take photos at 48 MP (though you can do that while compromising color detail). Rather, 24 MP is the target. The 48 MP camera enables 2x zoom that is nearly visually indistinguishable from optical zoom.
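The “nearly indistinguishable from optical” 2x zoom follows from simple sensor arithmetic: cropping to the central 2x field of view keeps half the width and half the height, so a quarter of the pixels, leaving a 48 MP sensor with roughly 12 MP for the crop. A quick sketch of that math:

```python
# Back-of-envelope for the 48 MP sensor's "2x zoom": a 2x crop keeps half the
# width and half the height, so a quarter of the pixels.
sensor_mp = 48
zoom_factor = 2
cropped_mp = sensor_mp / zoom_factor ** 2
print(f"{zoom_factor}x crop of a {sensor_mp} MP sensor leaves ~{cropped_mp:.0f} MP")
# -> 2x crop of a 48 MP sensor leaves ~12 MP
```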

Based on both the specs and photo comparisons, the main camera sensor in the 16e appears to me to be exactly the same as the one found in the 16. We’re just missing the ultra-wide lens (which allows more zoomed-out photos, ideal for groups of people in small spaces, for example) and several extra features like advanced image stabilization, the newest Photographic Styles, and macro photography.

The iPhone 16e takes excellent photos in bright conditions. Samuel Axon

That’s a lot of missing features, sure, but it’s wild how good this camera is for this price point. Even something like the Pixel 8a can’t touch it (though to be fair, the Pixel 8a is $100 cheaper).

Video capture is a similar situation: The 16e shoots at the same resolutions and framerates as the 16, but it lacks a few specialized features like Cinematic and Action modes. There’s also a front-facing camera with the TrueDepth sensor for Face ID in that notch, and it has comparable specs to the front-facing cameras we’ve seen in a couple of years of iPhones at this point.

If you were buying a phone for the cameras, this wouldn’t be the one for you. It’s absolutely worth paying another $200 for the iPhone 16 (or even just $100 more for the iPhone 15, which adds the ultra-wide lens for 0.5x zoom and is still available in the Apple Store) if that’s your priority.

The iPhone 16’s macro mode isn’t available here, so ultra-close-ups look fuzzy. Samuel Axon

But for the 16e’s target consumer (mostly folks with the iPhone 11 or older or an iPhone SE, who just want the cheapest functional iPhone they can get) it’s almost overkill. I’m not complaining, though it’s a contributing factor to the phone’s cost compared to entry-level Android phones and Apple’s old iPhone SE.

RIP small phones, once and for all

In one fell swoop, the iPhone 16e’s replacement of the iPhone SE eliminates a whole range of legacy technologies that have held on at the lower end of the iPhone lineup for years. Gone are Touch ID, the home button, LCD displays, and Lightning ports—they’re replaced by Face ID, swipe gestures, OLED, and USB-C.

Newer iPhones have had most of those things for quite some time. The latest feature was USB-C, which came in 2023’s iPhone 15. The removal of the SE from the lineup catches the bottom end of the iPhone up with the top in these respects.

That said, the SE had maintained one positive differentiator, too: It was small enough to be used one-handed by almost anyone. With the end of the SE and the release of the 16e, the one-handed iPhone is well and truly dead. Of course, most people have been clear they want big screens and batteries above almost all else, so the writing had been on the wall for a while for smaller phones.

The death of the iPhone SE ushers in a new era for the iPhone with bigger and better features—but also bigger price tags.

A more expensive cheap phone

Assessing the iPhone 16e is a challenge. It’s objectively a good phone—good enough for the vast majority of people. It has a nearly top-tier screen (though it clocks in at 60Hz, while some Android phones close to this price point manage 120Hz), a camera system that delivers on quality even if it lacks special features seen in flagships, strong connectivity, and performance far above what you’d expect at this price.

If you don’t care about extra camera features or nice-to-haves like MagSafe or the Dynamic Island, it’s easy to recommend saving a couple hundred bucks compared to the iPhone 16.

The chief criticism I have that relates to the 16e has less to do with the phone itself than Apple’s overall lineup. The iPhone SE retailed for $430, nearly half the price of the 16. By making the 16e the new bottom of the lineup, Apple has significantly raised the financial barrier to entry for iOS.

Now, it’s worth mentioning that a pretty big swath of the target market for the 16e will buy it subsidized through a carrier, so they might not pay that much up front. I always recommend buying a phone directly if you can, though, as carrier subsidization deals are usually worse for the consumer.

The 16e’s price might push more people to go for the subsidy. Plus, it’s just more phone than some people need. For example, I love a high-quality OLED display for watching movies, but I don’t think the typical iPhone SE customer was ever going to care about that.

That’s why I believe the iPhone 16e solves more problems for Apple than it does for the consumer. In multiple ways, it allows Apple to streamline production, software support, and marketing messaging. It also drives up the average price per unit across the whole iPhone line and will probably encourage some people who would have spent $430 to spend $600 instead, possibly improving revenue. All told, it’s a no-brainer for Apple.

It’s just a mixed bag for the sort of no-frills consumer who wants a minimum viable phone and who for one reason or another didn’t want to go the Android route. The iPhone 16e is definitely a good phone—I just wish there were more options for that consumer.

The good

  • Dramatically better display than the iPhone SE’s
  • Likely stronger long-term software support than most previous entry-level iPhones
  • Good battery life and incredibly good performance for this price point
  • A high-quality camera, especially for the price

The bad

  • No ultra-wide camera
  • No MagSafe
  • No Dynamic Island

The ugly

  • Significantly raises the entry price point for buying an iPhone

Photo of Samuel Axon

Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.

AMD Radeon RX 9070 and 9070 XT review: RDNA 4 fixes a lot of AMD’s problems


For $549 and $599, AMD comes close to knocking out Nvidia’s GeForce RTX 5070.

AMD’s Radeon RX 9070 and 9070 XT are its first cards based on the RDNA 4 GPU architecture. Credit: Andrew Cunningham

AMD is a company that knows a thing or two about capitalizing on a competitor’s weaknesses. The company got through its early-2010s nadir partially because its Ryzen CPUs struck just as Intel’s current manufacturing woes began to set in, first with somewhat-worse CPUs that were great value for the money and later with CPUs that were better than anything Intel could offer.

Nvidia’s untrammeled dominance of the consumer graphics card market should also be an opportunity for AMD. Nvidia’s GeForce RTX 50-series graphics cards have given buyers very little to get excited about, with an unreachably expensive high-end 5090 refresh and modest-at-best gains from 5080 and 5070-series cards that are also pretty expensive by historical standards, when you can buy them at all. Tech YouTubers—both the people making the videos and the people leaving comments underneath them—have been almost uniformly unkind to the 50 series, hinting at consumer frustrations and pent-up demand for competitive products from other companies.

Enter AMD’s Radeon RX 9070 XT and RX 9070 graphics cards. These are aimed right at the middle of the current GPU market at the intersection of high sales volume and decent profit margins. They promise good 1440p and entry-level 4K gaming performance and improved power efficiency compared to previous-generation cards, with fixes for long-time shortcomings (ray-tracing performance, video encoding, and upscaling quality) that should, in theory, make them more tempting for people looking to ditch Nvidia.

Table of Contents

RX 9070 and 9070 XT specs and speeds

RX 9070 XT: 64 RDNA4 compute units (4,096 stream processors), 2,970 MHz boost clock, 256-bit memory bus, 650GB/s memory bandwidth, 16GB GDDR6, 304 W total board power (TBP)
RX 9070: 56 RDNA4 compute units (3,584 stream processors), 2,520 MHz boost clock, 256-bit memory bus, 650GB/s memory bandwidth, 16GB GDDR6, 220 W TBP
RX 7900 XTX: 96 RDNA3 compute units (6,144 stream processors), 2,498 MHz boost clock, 384-bit memory bus, 960GB/s memory bandwidth, 24GB GDDR6, 355 W TBP
RX 7900 XT: 84 RDNA3 compute units (5,376 stream processors), 2,400 MHz boost clock, 320-bit memory bus, 800GB/s memory bandwidth, 20GB GDDR6, 315 W TBP
RX 7900 GRE: 80 RDNA3 compute units (5,120 stream processors), 2,245 MHz boost clock, 256-bit memory bus, 576GB/s memory bandwidth, 16GB GDDR6, 260 W TBP
RX 7800 XT: 60 RDNA3 compute units (3,840 stream processors), 2,430 MHz boost clock, 256-bit memory bus, 624GB/s memory bandwidth, 16GB GDDR6, 263 W TBP

AMD’s high-level performance promise for the RDNA 4 architecture revolves around big increases in performance per compute unit (CU). An RDNA 4 CU, AMD says, is nearly twice as fast in rasterized performance as RDNA 2 (that is, rendering without ray-tracing effects enabled) and nearly 2.5 times as fast as RDNA 2 in games with ray-tracing effects enabled. Performance for at least some machine learning workloads also goes way up—twice as fast as RDNA 3 and four times as fast as RDNA 2.

We’ll see this in more detail when we start comparing performance, but AMD seems to have accomplished this goal. Despite having 64 or 56 compute units (for the 9070 XT and 9070, respectively), the cards’ performance often competes with AMD’s last-generation flagships, the RX 7900 XTX and 7900 XT. Those cards came with 96 and 84 compute units, respectively. The 9070 cards are specced a lot more like last generation’s RX 7800 XT—including the 16GB of GDDR6 on a 256-bit memory bus, as AMD still isn’t using GDDR6X or GDDR7—but they’re much faster than the 7800 XT was.
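One way to sanity-check that claim is a crude throughput proxy of compute units times boost clock, using the specs listed above. This deliberately ignores IPC, memory bandwidth, and cache differences, so treat it as a rough sketch rather than a performance model:

```python
# Crude raw-throughput proxy (compute units x boost clock) from the specs above.
# This ignores IPC, memory bandwidth, and cache differences entirely; the point
# is only that the 9070 XT matches or beats the 7900 XT with far fewer CUs.
cards = {
    "RX 9070 XT": (64, 2970),   # CUs, boost clock in MHz
    "RX 9070":    (56, 2520),
    "RX 7900 XT": (84, 2400),
    "RX 7800 XT": (60, 2430),
}
for name, (cus, mhz) in cards.items():
    print(f"{name}: {cus * mhz / 1000:.0f} CU-GHz")
# ~190 CU-GHz for the 9070 XT vs ~202 for the 7900 XT: roughly even raw
# throughput, so matching or beating the 7900 XT implies per-CU gains over RDNA 3.
```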

AMD has dramatically increased the performance-per-compute unit for RDNA 4. AMD

The 9070 series also uses a new 4 nm manufacturing process from TSMC, an upgrade from the 7000 series’ 5 nm process (and the 6 nm process used for the separate memory controller dies in higher-end RX 7000-series models that used chiplets). AMD’s GPUs are normally a bit less efficient than Nvidia’s, but the architectural improvements and the new manufacturing process allow AMD to do some important catch-up.

Both of the 9070 models we tested were ASRock Steel Legend models, and the 9070 and 9070 XT had identical designs—we’ll probably see a lot of this from AMD’s partners since the GPU dies and the 16GB RAM allotments are the same for both models. Both use two 8-pin power connectors; AMD says partners are free to use the 12-pin power connector if they want, but given Nvidia’s ongoing issues with it, most cards will likely stick with the reliable 8-pin connectors.

AMD doesn’t appear to be making and selling reference designs for the 9070 series the way it did for some RX 7000 and 6000-series GPUs or the way Nvidia does with its Founders Edition cards. From what we’ve seen, 2 or 2.5-slot, triple-fan designs will be the norm, the way they are for most midrange GPUs these days.

Testbed notes

We used the same GPU testbed for the Radeon RX 9070 series as we have for our GeForce RTX 50-series reviews.

An AMD Ryzen 7 9800X3D ensures that our graphics cards will be CPU-limited as little as possible. An ample 1050 W power supply, 32GB of DDR5-6000, and an AMD X670E motherboard with the latest BIOS installed round out the hardware. On the software side, we use an up-to-date installation of Windows 11 24H2 and recent GPU drivers for older cards, ensuring that our tests reflect whatever optimizations Microsoft, AMD, Nvidia, and game developers have made since the last generation of GPUs launched.

We have numbers for all of Nvidia’s RTX 50-series GPUs so far, plus most of the 40-series cards, most of AMD’s RX 7000-series cards, and a handful of older GPUs from the RTX 30-series and RX 6000 series. We’ll focus on comparing the 9070 XT and 9070 to other 1440p-to-4K graphics cards since those are the resolutions AMD is aiming at.

Performance

At $549 and $599, the 9070 series is priced to match Nvidia’s $549 RTX 5070 and undercut the $749 RTX 5070 Ti. So we’ll focus on comparing the 9070 series to those cards, plus the top tier of GPUs from the outgoing RX 7000-series.

Some 4K rasterized benchmarks.

Starting at the top with rasterized benchmarks with no ray-tracing effects, the 9070 XT does a good job of standing up to Nvidia’s RTX 5070 Ti, coming within a few frames per second of its performance in all the games we tested (and scoring very similarly in the 3DMark Time Spy Extreme benchmark).

Both cards are considerably faster than the RTX 5070—between 15 and 28 percent for the 9070 XT and between 5 and 13 percent for the regular 9070 (our 5070 scored weirdly low in Horizon Zero Dawn Remastered, so we’d treat those numbers as outliers for now). Both 9070 cards also stack up well next to the RX 7000 series here—the 9070 can usually just about match the performance of the 7900 XT, and the 9070 XT usually beats it by a little. Both cards thoroughly outrun the old RX 7900 GRE, which was AMD’s $549 GPU offering just a year ago.

The 7900 XT does have 20GB of RAM instead of 16GB, which might help its performance in some edge cases. But 16GB is still perfectly generous for a 1440p-to-4K graphics card—the 5070 only offers 12GB, which could end up limiting its performance in some games as RAM requirements continue to rise.

On ray-tracing improvements

Nvidia got a jump on AMD when it introduced hardware-accelerated ray-tracing in the RTX 20-series in 2018. And while these effects were only supported in a few games at the time, many modern games offer at least some kind of ray-traced lighting effects.

AMD caught up a little when it began shipping its own ray-tracing support in the RDNA2 architecture in late 2020, but the issue since then has always been that AMD cards have taken a larger performance hit than GeForce GPUs when these effects are turned on. RDNA3 promised improvements, but our tests still generally showed the same deficit as before.

So we’re looking for two things with RDNA4’s ray-tracing performance. First, we want the numbers to be higher than they were for comparably priced RX 7000-series GPUs, the same thing we look for in non-ray-traced (or rasterized) rendering performance. Second, we want the size of the performance hit to go down. To pick an example: the RX 7900 GRE could compete with Nvidia’s RTX 4070 Ti Super in games without ray tracing, but it was closer to a non-Super RTX 4070 in ray-traced games. That gap has helped keep AMD’s cards from being across-the-board competitive with Nvidia’s—is that any different now?

Benchmarks for games with ray-tracing effects enabled. Both AMD cards generally keep pace with the 5070 in these tests thanks to RDNA 4’s improvements.

The picture our tests paint is mixed but tentatively positive. The 9070 series and RDNA4 post solid improvements in the Cyberpunk 2077 benchmarks, substantially closing the performance gap with Nvidia. In games where AMD’s cards performed well enough before—here represented by Returnal—performance goes up, but roughly proportionately with rasterized performance. And both 9070 cards still punch below their weight in Black Myth: Wukong, falling substantially behind the 5070 under the punishing Cinematic graphics preset.

So the benefits you see, as with any GPU update, will depend a bit on the game you’re playing. There’s also a possibility that game optimizations and driver updates made with RDNA4 in mind could boost performance further. We can’t say that AMD has caught all the way up to Nvidia here—the 9070 and 9070 XT are both closer to the GeForce RTX 5070 than the 5070 Ti in ray-traced games, despite staying closer to the 5070 Ti in rasterized tests—but there is real, measurable improvement here, which is what we were looking for.

Power usage

The 9070 series’ performance increases are particularly impressive when you look at the power-consumption numbers. The 9070 comes close to the 7900 XT’s performance while using 90 W less power under load. It beats the RTX 5070 most of the time while using around 30 W less power.

The 9070 XT is a little less impressive on this front—AMD has set clock speeds pretty high, and this can increase power use disproportionately. The 9070 XT is usually 10 or 15 percent faster than the 9070 but uses 38 percent more power. The XT’s power consumption is similar to the RTX 5070 Ti’s (a GPU it often matches) and the 7900 XT’s (a GPU it always beats), so it’s not too egregious, but it’s not as standout as the 9070’s.
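To put that efficiency trade-off in concrete terms, here’s a quick back-of-the-envelope sketch in Python. It uses only the figures quoted above (roughly 10 to 15 percent more performance for about 38 percent more power) and is an approximation, not a number pulled from our test data.

```python
# Rough performance-per-watt comparison between the 9070 XT and the 9070,
# using the approximate figures quoted in the text above.

def relative_perf_per_watt(perf_gain: float, power_gain: float) -> float:
    """How the faster card's performance-per-watt compares to the slower card's."""
    return (1 + perf_gain) / (1 + power_gain)

POWER_GAIN = 0.38  # the 9070 XT draws roughly 38 percent more power

for perf_gain in (0.10, 0.15):
    ratio = relative_perf_per_watt(perf_gain, POWER_GAIN)
    print(f"+{perf_gain:.0%} performance at +{POWER_GAIN:.0%} power -> {ratio:.2f}x the efficiency")

# Output lands around 0.80x to 0.83x: at stock settings, the 9070 XT delivers
# roughly 17 to 20 percent less performance per watt than the regular 9070.
```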

AMD gives 9070 owners a couple of new toggles for power limits, though, which we’ll talk about in the next section.

Experimenting with “Total Board Power”

We don’t normally dabble much with overclocking when we review CPUs or GPUs—we’re happy to leave that to folks at other outlets. But when we review CPUs, we do usually test them with multiple power limits in place. Playing with power limits is easier (and occasionally safer) than actually overclocking, and it often comes with large gains to either performance (a chip that performs much better when given more power to work with) or efficiency (a chip that can run at nearly full speed without using as much power).

Initially, I experimented with the RX 9070’s power limits by accident. AMD sent me one version of the 9070 but exchanged it because of a minor problem the OEM identified with some units early in the production run. I had, of course, already run most of our tests on it, but that’s the way these things go sometimes.

By bumping the regular RX 9070’s TBP up just a bit, you can nudge it closer to 9070 XT-level performance.

The replacement RX 9070 card, an ASRock Steel Legend model, was performing significantly better in our tests, sometimes nearly closing the gap between the 9070 and the XT. It wasn’t until I tested power consumption that I discovered the explanation—by default, it was using a 245 W power limit rather than the AMD-defined 220 W limit. Usually, these kinds of factory tweaks don’t make much of a difference, but for the 9070, this power bump gave it a nice performance boost while still keeping it close to the 250 W power limit of the GeForce RTX 5070.

The 90-series cards we tested both add some power presets to AMD’s Adrenalin app in the Performance tab under Tuning. These replace and/or complement some of the automated overclocking and undervolting buttons that exist here for older Radeon cards. Clicking Favor Efficiency or Favor Performance can ratchet the card’s Total Board Power (TBP) up or down, limiting performance so that the card runs cooler and quieter or allowing the card to consume more power so it can run a bit faster.

The 9070 cards get slightly different performance tuning options in the Adrenalin software. These buttons mostly change the card’s Total Board Power (TBP), making it simple to either improve efficiency or boost performance a bit. Credit: Andrew Cunningham

For this particular ASRock 9070 card, the default TBP is set to 245 W. Selecting “Favor Efficiency” sets it to the default 220 W. You can double-check these values using an app like HWInfo, which displays both the current TBP and the maximum TBP in its Sensors Status window. Clicking the Custom button in the Adrenalin software gives you access to a Power Tuning slider, which for our card allowed us to ratchet the TBP up by up to 10 percent or down by as much as 30 percent.
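For the curious, here’s a tiny sketch of what that slider range works out to in watts for this particular card. The numbers come straight from the settings described above; other 9070 models with different default TBPs will have different ranges.

```python
# Total Board Power range available via Adrenalin's Custom Power Tuning slider
# for the ASRock Steel Legend RX 9070 described above.

DEFAULT_TBP_W = 245   # this card's out-of-the-box power limit
SLIDER_UP = 0.10      # slider allows up to +10 percent
SLIDER_DOWN = 0.30    # ...and down to -30 percent

ceiling = DEFAULT_TBP_W * (1 + SLIDER_UP)
floor = DEFAULT_TBP_W * (1 - SLIDER_DOWN)
print(f"Adjustable TBP range: {floor:.1f} W to {ceiling:.1f} W")
# -> Adjustable TBP range: 171.5 W to 269.5 W
```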

This is all the firsthand testing we did with the power limits of the 9070 series, though I would assume that adding a bit more power also adds more overclocking headroom (bumping up the power limits is common for GPU overclockers no matter who makes your card). AMD says that some of its partners will ship 9070 XT models set to a roughly 340 W power limit out of the box but acknowledges that “you start seeing diminishing returns as you approach the top of that [power efficiency] curve.”

But it’s worth noting that the driver’s automated power presets give you an easy, set-it-and-forget-it way to find your preferred balance of performance and power efficiency.

A quick look at FSR4 performance

There’s a toggle in the driver for enabling FSR 4 in FSR 3.1-supporting games. Credit: Andrew Cunningham

One of AMD’s headlining improvements to the RX 90-series is the introduction of FSR 4, a new version of its FidelityFX Super Resolution upscaling algorithm. Like Nvidia’s DLSS and Intel’s XeSS, FSR 4 can take advantage of RDNA 4’s machine learning processing power to do hardware-backed upscaling instead of taking a hardware-agnostic approach as the older FSR versions did. AMD says this will improve upscaling quality, but it also means FSR4 will only work on RDNA 4 GPUs.

The good news is that FSR 3.1 and FSR 4 are forward- and backward-compatible. Games that have already added FSR 3.1 support can automatically take advantage of FSR 4, and games that support FSR 4 on the 90-series can just run FSR 3.1 on older and non-AMD GPUs.

FSR 4 comes with a small performance hit compared to FSR 3.1 at the same settings, but better overall quality can let you drop to a faster preset like Balanced or Performance and end up with more frames-per-second overall. Credit: Andrew Cunningham

The only game in our current test suite to be compatible with FSR 4 is Horizon Zero Dawn Remastered, and we tested its performance using both FSR 3.1 and FSR 4. In general, we found that FSR 4 improved visual quality at the cost of just a few frames per second when run at the same settings—not unlike using Nvidia’s recently released “transformer model” for DLSS upscaling.

Many games will let you choose which version of FSR you want to use. But for FSR 3.1 games that don’t have a built-in FSR 4 option, there’s a toggle in AMD’s Adrenalin driver you can hit to switch to the better upscaling algorithm.

Even if they come with a performance hit, new upscaling algorithms can still improve performance by making the lower-resolution presets look better. We run all of our testing in “Quality” mode, which generally renders at two-thirds of native resolution and scales up. But if FSR 4 running in Balanced or Performance mode looks the same to your eyes as FSR 3.1 running in Quality mode, you can still end up with a net performance improvement in the end.
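If you want to see what those presets mean in raw pixels, here’s a small sketch. The two-thirds figure for Quality mode comes from the text above; the Balanced and Performance scale factors are the values these presets have commonly used and are included as illustrative assumptions rather than AMD-published specs for FSR 4.

```python
# Internal render resolution for a 4K output target at different upscaler presets.
# Quality's 2/3 scale factor is from the text above; Balanced and Performance use
# commonly cited per-axis factors and are assumptions for illustration.

PRESETS = {"Quality": 2 / 3, "Balanced": 0.59, "Performance": 0.50}

def internal_resolution(width: int, height: int, scale: float) -> tuple[int, int]:
    """Resolution the GPU actually renders before the upscaler takes over."""
    return round(width * scale), round(height * scale)

for name, scale in PRESETS.items():
    w, h = internal_resolution(3840, 2160, scale)
    print(f"{name:>11}: {w}x{h}")

# Quality at 4K renders at roughly 2560x1440, which is why Quality-mode 4K results
# often land in the same neighborhood as native 1440p performance.
```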

RX 9070 or 9070 XT?

Just $50 separates the advertised price of the 9070 from that of the 9070 XT, something both Nvidia and AMD have done in the past that I find a bit annoying. If you have $549 to spend on a graphics card, you can almost certainly scrape together $599 for a graphics card. All else being equal, I’d tell most people trying to choose one of these to just spring for the 9070 XT.

That said, availability and retail pricing for these might be all over the place. If your choices are a regular RX 9070 or nothing, or an RX 9070 at $549 and an RX 9070 XT at any price higher than $599, I would just grab a 9070 and not sweat it too much. The two cards aren’t that far apart in performance, especially if you bump the 9070’s TBP up a little bit, and games that are playable on one will be playable at similar settings on the other.

Pretty close to great

If you’re building a 1440p or 4K gaming box, the 9070 series might be the ones to beat right now. Credit: Andrew Cunningham

We’ve got plenty of objective data in here, so I don’t mind saying that I came into this review kind of wanting to like the 9070 and 9070 XT. Nvidia’s 50-series cards have mostly upheld the status quo, and for the last couple of years, the status quo has been sustained high prices and very modest generational upgrades. And who doesn’t like an underdog story?

I think our test results mostly justify my priors. The RX 9070 and 9070 XT are very competitive graphics cards, helped along by a particularly mediocre RTX 5070 refresh from Nvidia. In non-ray-traced games, both cards wipe the floor with the 5070 and come close to competing with the $749 RTX 5070 Ti. In games and synthetic benchmarks with ray-tracing effects on, both cards can usually match or slightly beat the similarly priced 5070, partially (if not entirely) addressing AMD’s longstanding performance deficit here. Neither card comes close to the 5070 Ti in these games, but they’re also not priced like a 5070 Ti.

Just as impressively, the Radeon cards compete with the GeForce cards while consuming similar amounts of power. At stock settings, the RX 9070 uses roughly the same amount of power under load as a 4070 Super but with better performance. The 9070 XT uses about as much power as a 5070 Ti, with similar performance before you turn ray-tracing on. Power efficiency was a small but consistent drawback for the RX 7000 series compared to GeForce cards, and the 9070 cards mostly erase that disadvantage. AMD is also less stingy with the RAM, giving you 16GB for the price Nvidia charges for 12GB.

Some of the old caveats still apply. Radeons still take a bigger performance hit, proportionally, than GeForce cards when ray-tracing effects are enabled. DLSS already looks pretty good and is widely supported, while FSR 3.1/FSR 4 adoption is still relatively low. Nvidia has a nearly monopolistic grip on the dedicated GPU market, which means many apps, AI workloads, and games support its GPUs best/first/exclusively. AMD is always playing catch-up to Nvidia in some respect, and Nvidia keeps progressing quickly enough that it feels like AMD never quite has the opportunity to close the gap.

AMD also doesn’t have an answer for DLSS Multi-Frame Generation. The benefits of that technology are fairly narrow, and you already get most of those benefits with single-frame generation. But it’s still a thing that Nvidia does that AMD doesn’t.

Overall, the RX 9070 cards are both awfully tempting competitors to the GeForce RTX 5070—and occasionally even the 5070 Ti. They’re great at 1440p and decent at 4K. Sure, I’d like to see them priced another $50 or $100 cheaper to well and truly undercut the 5070 and bring 1440p-to-4K performance to a sub-$500 graphics card. It would be nice to see AMD undercut Nvidia’s GPUs as ruthlessly as it undercut Intel’s CPUs nearly a decade ago. But these RDNA4 GPUs have way fewer downsides than previous-generation cards, and they come at a moment of relative weakness for Nvidia. We’ll see if the sales follow.

The good

  • Great 1440p performance and solid 4K performance
  • 16GB of RAM
  • Decisively beats Nvidia’s RTX 5070, including in most ray-traced games
  • RX 9070 XT is competitive with RTX 5070 Ti in non-ray-traced games for less money
  • Both cards match or beat the RX 7900 XT, AMD’s second-fastest card from the last generation
  • Decent power efficiency for the 9070 XT and great power efficiency for the 9070
  • Automated options for tuning overall power use to prioritize either efficiency or performance
  • Reliable 8-pin power connectors available in many cards

The bad

  • Nvidia’s ray-tracing performance is still usually better
  • At $549 and $599, pricing matches but doesn’t undercut the RTX 5070
  • FSR 4 isn’t as widely supported as DLSS and may not be for a while

The ugly

  • Playing the “can you actually buy these for AMD’s advertised prices” game

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

AMD Radeon RX 9070 and 9070 XT review: RDNA 4 fixes a lot of AMD’s problems Read More »

nvidia-geforce-rtx-5070-ti-review:-an-rtx-4080-for-$749,-at-least-in-theory

Nvidia GeForce RTX 5070 Ti review: An RTX 4080 for $749, at least in theory


may the odds be ever in your favor

It’s hard to review a product if you don’t know what it will actually cost!

The Asus Prime GeForce RTX 5070 Ti. Credit: Andrew Cunningham

Nvidia’s RTX 50-series makes its first foray below the $1,000 mark starting this week, with the $749 RTX 5070 Ti—at least in theory.

The third-fastest card in the Blackwell GPU lineup, the 5070 Ti is still far from “reasonably priced” by historical standards (the 3070 Ti was $599 at launch). But it’s also $50 cheaper and a fair bit faster than the outgoing 4070 Ti Super and the older 4070 Ti. These are steps in the right direction, if small ones.

We’ll talk more about its performance shortly, but at a high level, the 5070 Ti’s performance falls in the same general range as the 4080 Super and the original RTX 4080, a card that launched for $1,199 just over two years ago. And it’s probably your floor for consistently playable native 4K gaming for those of you out there who don’t want to rely on DLSS or 4K upscaling to hit that resolution (it’s also probably all the GPU that most people will need for high-FPS 1440p, if that’s more your speed).

But it’s a card I’m ambivalent about! It’s close to 90 percent as fast as a 5080 for 75 percent of the price, at least if you go by Nvidia’s minimum list prices, which for the 5090 and 5080 have been mostly fictional so far. If you can find it at that price—and that’s a big “if,” since every $749 model is already out of stock across the board at Newegg—and you’re desperate to upgrade or are building a brand-new 4K gaming PC, you could do worse. But I wouldn’t spend more than $749 on it, and it might be worth waiting to see what AMD’s first 90-series Radeon cards look like in a couple weeks before you jump in.

Meet the GeForce RTX 5070 Ti

| | RTX 5080 | RTX 4080 Super | RTX 5070 Ti | RTX 4070 Ti Super | RTX 4070 Ti | RTX 5070 |
|---|---|---|---|---|---|---|
| CUDA Cores | 10,752 | 10,240 | 8,960 | 8,448 | 7,680 | 6,144 |
| Boost Clock | 2,617 MHz | 2,550 MHz | 2,452 MHz | 2,610 MHz | 2,610 MHz | 2,512 MHz |
| Memory Bus Width | 256-bit | 256-bit | 256-bit | 256-bit | 192-bit | 192-bit |
| Memory Bandwidth | 960 GB/s | 736 GB/s | 896 GB/s | 672 GB/s | 504 GB/s | 672 GB/s |
| Memory size | 16GB GDDR7 | 16GB GDDR6X | 16GB GDDR7 | 16GB GDDR6X | 12GB GDDR6X | 12GB GDDR7 |
| TGP | 360 W | 320 W | 300 W | 285 W | 285 W | 250 W |

Nvidia isn’t making a Founders Edition version of the 5070 Ti, so this time around our review unit is an Asus Prime GeForce RTX 5070 Ti provided by Asus and Nvidia. These third-party cards will deviate a little from the stock specs listed above, but factory overclocks tend to be inordinately mild, and done mostly so the GPU manufacturer can slap a big “overclocked” badge somewhere on the box. We tested this Asus card with its BIOS switch set to “performance” mode, which elevates the boost clock by an entire 30 MHz; you don’t need to be a math whiz to guess that a 1.2 percent overclock is not going to change performance much.

Compared to the 4070 Ti Super, the 5070 Ti brings two things to the table: a roughly 6 percent increase in CUDA cores and a 33 percent increase in memory bandwidth, courtesy of the switch from GDDR6X to GDDR7. The original 4070 Ti had even fewer CUDA cores, but most importantly for its 4K performance included just 12GB of memory on a 192-bit bus.
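Those percentages fall straight out of the spec table above; here’s the arithmetic as a quick sketch for anyone who wants to check it.

```python
# Generational spec deltas between the RTX 5070 Ti and RTX 4070 Ti Super,
# using the numbers from the spec table above.

SPECS = {
    "RTX 5070 Ti":       {"cuda_cores": 8960, "bandwidth_gb_s": 896},
    "RTX 4070 Ti Super": {"cuda_cores": 8448, "bandwidth_gb_s": 672},
}

def pct_increase(new: float, old: float) -> float:
    return (new - old) / old * 100

new, old = SPECS["RTX 5070 Ti"], SPECS["RTX 4070 Ti Super"]
print(f"CUDA cores:       +{pct_increase(new['cuda_cores'], old['cuda_cores']):.1f}%")
print(f"Memory bandwidth: +{pct_increase(new['bandwidth_gb_s'], old['bandwidth_gb_s']):.1f}%")
# -> CUDA cores: +6.1%, memory bandwidth: +33.3%
```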

The 5070 Ti is based on the same GB203 GPU silicon as the 5080 series, but with 1,792 CUDA cores disabled. There are still a lot of similarities between the two, including the 16GB bank of GDDR7 and the 256-bit memory bus. The gap between them looks nothing like the yawning gap between the RTX 5090 and the RTX 5080, and the two cards’ similar-ish specs meant they weren’t too far away from each other in our testing. The 5070 Ti’s 300 W power requirement is also a bit lower than the 5080’s 360 W, but it’s pretty close to the 4080 and 4080 Super’s 320 W; in practice, the 5070 Ti draws about as much power as the 4080 cards do under load.

Asus’ design for its Prime RTX 5070 Ti is an inoffensive 2.5-slot, triple-fan card that should fit without a problem in most builds. Credit: Andrew Cunningham

As a Blackwell GPU, the 5070 Ti also supports Nvidia’s most-hyped addition to the 50-series: support for DLSS 4 and Multi-Frame Generation (MFG). We’ve already covered this in our 5090 and 5080 reviews, but the short version is that MFG works exactly like Frame Generation did in the 40-series, except that it can now insert up to three AI-generated frames in between natively rendered frames instead of just one.

Especially if you’re already running at a reasonably high frame rate, this can make things look a lot smoother on a high-refresh-rate monitor without introducing distracting lag or weird rendering errors. The feature is mainly controversial because Nvidia compares 50-series performance numbers with DLSS MFG enabled to older 40-series cards without it, making the 50-series cards seem a whole lot faster than they actually are.

We’ll publish some frame-generation numbers in our review, both using DLSS and (for AMD cards) FSR. But per usual, we’ll continue to focus on natively rendered performance—more relevant for all the games out there that don’t support frame generation or don’t benefit much from it, and more relevant because your base performance dictates how good your generated frames will look and feel anyway.

Testbed notes

We tested the 5070 Ti in the same updated testbed and with the same updated suite of games that we started using in our RTX 5090 review. The heart of the build is an AMD Ryzen 7 9800X3D, ensuring that our numbers are limited as little as possible by the CPU speed.

Per usual, we prioritize testing GPUs at resolutions that we think most people will use them for. For the 5070 Ti, that means both 4K and 1440p—this card is arguably still overkill for 1440p, but if you’re trying to hit 144 or 240 Hz (or even more) on a monitor, there’s a good case to be made for it. We also use a mix of ray-traced and non-ray-traced games. For the games we test with upscaling enabled, we use DLSS on Nvidia cards and the newest supported version of FSR (usually 2.x or 3.x) for AMD cards.

Though we’ve tested and re-tested multiple cards with recent drivers in our updated testbed, we don’t have a 4070 Ti Super, 4070 Ti, or 3070 Ti available to test with. We’ve provided some numbers for those GPUs from past reviews; these are from a PC running older drivers and a Ryzen 7 7800X3D instead of a 9800X3D, and we’ve put asterisks next to them in our charts. They should still paint a reasonably accurate picture of the older GPUs’ relative performance, but take them with that small grain of salt.

Performance and power

Despite having fewer CUDA cores than either version of the 4080, the 5070 Ti keeps pace with both 4080 cards almost perfectly, thanks to some combination of architectural improvements and increased memory bandwidth. In most of our tests, it landed in the narrow strip right in between the 4080 and the 4080 Super, and its power consumption under load was also almost identical.

Benchmarks with DLSS/FSR and/or frame generation enabled.

In every way that matters, the 5070 Ti is essentially an RTX 4080 that also supports DLSS Multi-Frame Generation. You can see why we’d be mildly enthusiastic about it at $749 but less and less impressed the closer the price creeps to $1,000.

Being close to a 4080 also means that the performance gap between the 5070 Ti and the 5080 is usually pretty small. In most of the games we tested, the 5070 Ti hovers right around 90 percent of the 5080’s performance.

The 5070 Ti is also around 60 percent as fast as an RTX 5090. The performance is a lot lower, but the price-to-performance ratio is a lot higher, possibly reflecting the fact that the 5070 Ti actually has other GPUs it has to compete with (in non-ray-traced games, the Radeon RX 7900 XTX generally keeps pace with the 5070 Ti, though at this late date it is mostly out of stock unless you’re willing to pay way more than you ought to for one).
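To make the price-to-performance point a bit more concrete, here’s a rough sketch. Relative performance is normalized to the 5070 Ti using the ratios above, the 5080 price is back-calculated from the “75 percent of the price” figure earlier in this review, and the 5090 uses the roughly $2,000 figure mentioned elsewhere, so treat the output as ballpark math at list prices rather than measured data.

```python
# Ballpark performance-per-dollar at list prices, normalized to the 5070 Ti.
# Performance ratios and prices are taken (or back-calculated) from this review.

CARDS = {
    #               relative performance, list price (USD)
    "RTX 5070 Ti": (1.00,        749),
    "RTX 5080":    (1.00 / 0.90, 999),   # 5070 Ti ~ 90% of a 5080's performance
    "RTX 5090":    (1.00 / 0.60, 2000),  # 5070 Ti ~ 60% of a 5090's performance
}

for name, (perf, price) in CARDS.items():
    print(f"{name}: {perf / price * 1000:.2f} relative performance per $1,000")

# -> roughly 1.34 for the 5070 Ti, 1.11 for the 5080, and 0.83 for the 5090:
#    each step up the stack buys less performance per dollar.
```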

Compared to the old 4070 Ti, the 5070 Ti can be between 20 and 50 percent faster at 4K, depending on how limited the game is by the 4070 Ti’s narrower memory bus and 12GB bank of RAM. The performance improvement over the 4070 Ti Super is more muted, ranging from as little as 8 percent to as much as 20 percent in our 4K tests. This is better than the RTX 5080 did relative to the RTX 4080 Super, but as a generational leap, it’s still pretty modest—it’s clear why Nvidia wants everyone to look at the Multi-Frame Generation numbers when making comparisons.

Waiting to put theory into practice

Asus’ RTX 5070 Ti, replete with 12-pin power plug. Credit: Andrew Cunningham

Being able to get RTX 4080-level performance for several hundred dollars less just a couple of years after the 4080 launched is kind of exciting, though that excitement is tempered by the still high-ish $749 price tag (again, assuming it’s actually available at or anywhere near that price). That certainly makes it feel more like a next-generation GPU than the RTX 5080 did—and whatever else you can say about it, the 5070 Ti certainly feels like a better buy than the 5080.

The 5070 Ti is a fast and 4K-capable graphics card, fast enough that you should be able to get some good results from all of Blackwell’s new frame-generation trickery if that’s something you want to play with. Its price-to-performance ratio does not thrill me, but if you do the math, it’s still a much better value than the 4070 Ti series was—particularly the original 4070 Ti, whose 12GB allotment of RAM limited its usefulness and future-proofing at 4K.

Two reasons to hold off on buying a 5070 Ti, if you’re thinking about it: We’re waiting to see how AMD’s 9070 series GPUs shake out, and Nvidia’s 50-series launch so far has been kind of a mess, with low availability and price gouging both on retail sites and in the secondhand market. Pay much more than $749 for a 5070 Ti, and its delicate value proposition fades quickly. We should know more about the AMD cards in a couple of weeks. The supply situation, at least so far, seems like a problem that Nvidia can’t (or won’t) figure out how to solve.

The good

  • For a starting price of $749, you get the approximate performance and power consumption of an RTX 4080, a GPU that cost $1,199 two years ago and $999 one year ago.
  • Good 4K performance and great 1440p performance for those with high-refresh monitors.
  • 16GB of RAM should be reasonably future-proof.
  • Multi-Frame Generation is an interesting performance-boosting tool to have in your toolbox, even if it isn’t a cure-all for low framerates.
  • Nvidia-specific benefits like DLSS support and CUDA.

The bad

  • Not all that much faster than a 4070 Ti Super.
  • $749 looks cheap compared to a $2,000 GPU, but it’s still enough money to buy a high-end game console or an entire 1080p gaming PC.

The ugly

  • Pricing and availability for other 50-series GPUs to date have both been kind of a mess.
  • Will you actually be able to get it for $749? Because it doesn’t make a ton of sense if it costs more than $749.
  • Seriously, it’s been months since I reviewed a GPU that was actually widely available at its advertised price.
  • And it’s not just the RTX 5090 or 5080, it’s low-end stuff like the Intel Arc B580 and B570, too.
  • Is it high demand? Low supply? Scalpers and resellers hanging off the GPU market like the parasites they are? No one can say!
  • It makes these reviews very hard to do.
  • It also makes PC gaming, as a hobby, really difficult to get into if you aren’t into it already!
  • It just makes me mad is all.
  • If you’re reading this months from now and the GPUs actually are in stock at the list price, I hope this was helpful.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Nvidia GeForce RTX 5070 Ti review: An RTX 4080 for $749, at least in theory Read More »

are-any-of-apple’s-official-magsafe-accessories-worth-buying?

Are any of Apple’s official MagSafe accessories worth buying?


When MagSafe was introduced, it promised an accessories revolution. Meh.

Apple’s current lineup of MagSafe accessories. Credit: Samuel Axon

When Apple introduced what it currently calls MagSafe in 2020, its marketing messaging suggested that the magnetic attachment standard for the iPhone would produce a boom in innovation in accessories, making things possible that simply weren’t before.

Four years later, that hasn’t really happened—either from third-party accessory makers or Apple’s own lineup of branded MagSafe products.

Instead, we have a lineup of accessories that matches pretty much what was available at launch in 2020: chargers, cases, and just a couple more unusual applications.

With the launch of the iPhone 16 just behind us and the holidays just in front of us, a bunch of people are moving to phones that support MagSafe for the first time. Apple loves an upsell, so it offers some first-party MagSafe accessories—some useful, some not worth the cash, given the premiums it sometimes charges.

Given all that, it’s a good time to check in and quickly point out which (if any) of these first-party MagSafe accessories might be worth grabbing alongside that new iPhone and which ones you should skip in favor of third-party offerings.

Cases with MagSafe

Look, we could write thousands of words about the variety of iPhone cases available, or even just about those that support MagSafe to some degree or another—and we still wouldn’t really scratch the surface. (Unless that surface was made with Apple’s leather-replacement FineWoven material—hey-o!)

It’s safe to say there’s a third-party case for every need and every type of person out there. If you want one that meets your exact needs, you’ll be able to find it. Just know that cases that are labeled as MagSafe-ready will allow charge through and will let the magnets align correctly between a MagSafe charger and an iPhone—that’s really the whole point of the “MagSafe” name.

But if you prefer to stick with Apple’s own cases, there are currently two options: the clear cases and the silicone cases.

The clear case is definitely the superior of Apple’s two first-party MagSafe cases. Credit: Samuel Axon

The clear cases actually have a circle where the edges of the MagSafe magnets are, which is pretty nice for getting the magnets to snap without any futzing—though it’s really not necessary, since, well, magnets attract. They have a firm plastic shell that is likely to do a good job of protecting your phone when you drop it.

The silicone case is… fine. Frankly, it’s ludicrously priced for what it is. It offers no advantages over a plethora of third-party cases that cost exactly half as much.

Recommendation: The clear case has its advantages, but the silicone case is awfully expensive for what it is. Generally, third party is the way to go. There are lots of third-party cases from manufacturers who got licensed by Apple, and you can generally trust those will work with wireless charging just fine. That was the whole point of the MagSafe branding, after all.

The MagSafe charger

At $39 or $49 (depending on length, one meter or two), these charging cables are pretty pricey. But they’re also highly durable, relatively efficient, and super easy to use. Still, in most cases, you might as well just use any old USB-C cable.

There are some situations where you might prefer this option, though—for example, if you prop your iPhone up against your bedside lamp like a nightstand clock, or if you (like me) listen to audiobooks on wired earbuds while you fall asleep via the USB-C port, but you want to make sure the phone is still charging.

The MagSafe charger for the iPhone. Credit: Samuel Axon

So the answer on Apple’s MagSafe charger is that it’s pretty specialized, but it’s arguably the best option for those who have some specific reason not to just use USB-C.

Recommendation: Just use a USB-C cable, unless you have a specific reason to go this route—shoutout to my fellow individuals who listen to audiobooks while falling asleep but need headphones so as not to keep their spouse awake but prefer wired earbuds that use the USB-C port over AirPods to avoid losing AirPods in the bed covers. I’m sure there are dozens of us! If you do go this route, Apple’s own cable is the safest pick.

Apple’s FineWoven Wallet with MagSafe

While I’d long known people with dense wallet cases for their iPhones, I was excited about Apple’s leather (and later FineWoven) wallet with MagSafe when it was announced. I felt the wallet cases I’d seen were way too bulky, making the phone less pleasant to use.

Unfortunately, Apple’s FineWoven Wallet with MagSafe might be the worst official MagSafe product.

The problem is that the “durable microtwill” material that Apple went with instead of leather is prone to scratching, as many owners have complained. That’s a bit frustrating for something that costs nearly $60.

The MagSafe wallet has too many limitations to be worthwhile for most people. Credit: Samuel Axon

The wallet also only holds a few cards, and putting cards here means you probably can’t or at least shouldn’t try to use wireless charging, because the cards would be between the charger and the phone. Apple itself warns against doing this.

For those reasons, skip the FineWoven Wallet. There are lots of better-designed iPhone wallet cases out there, even though they might not be so minimalistic.

Recommendation: Skip this one. It’s a great idea in theory, but in practice and execution, it just doesn’t deliver. There are zillions of great wallet cases out there if you don’t mind a bit of bulk—just know you’ll have some wireless charging issues with many cases.

Other categories offered by third parties

Frankly, a lot of the more interesting applications of MagSafe for the iPhone are only available through third parties.

There are monitor mounts for using the iPhone as a webcam with Macs; bedside table stands for charging the phone while it acts as a smart display; magnetic phone stands for car dashboards that let you use GPS while you drive; magnetically attached power banks and portable batteries; and of course, multi-device chargers similar to the infamously canceled AirPower charging pad Apple had planned to release at one point. (I have the Belkin Boost Charge Pro 3-in-1 on my desk, and it works great.)

It’s not the revolution of new applications that some imagined when MagSafe was launched, but that’s not really a surprise. Still, there are some quality products out there. It’s both strange and a pity that Apple hasn’t made most of them itself.

No revolution here

Truthfully, MagSafe never seemed like it would be a huge smash. iPhones already supported Qi wireless charging before it came along, so the idea of magnets keeping the device aligned with the charger was always the main appeal—its existence potentially saved some users from ending up with chargers that didn’t quite work right with their phones, provided those users bought officially licensed MagSafe accessories.

Apple’s MagSafe accessories are often overpriced compared to alternatives from Belkin and other frequent partners. MagSafe seemed to do a better job bringing some standards to certain third-party products than it did bringing life to Apple’s offerings, and it certainly did not bring about a revolution of new accessory categories to the iPhone.

Still, it’s hard to blame anyone for choosing to go with Apple’s versions; the world of third-party accessories can be messy, and going the first-party route is generally a surefire way to know you’re not going to have many problems, even if the sticker’s a bit steep.

You could shop for third-party options, but sometimes you want a sure thing. With the possible exception of the FineWoven Wallet, all of these Apple-made MagSafe products are sure things.

Photo of Samuel Axon

Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.

Are any of Apple’s official MagSafe accessories worth buying? Read More »

review:-amazon’s-2024-kindle-paperwhite-makes-the-best-e-reader-a-little-better

Review: Amazon’s 2024 Kindle Paperwhite makes the best e-reader a little better

A fast Kindle?

From left to right: 2024 Paperwhite, 2021 Paperwhite, and 2018 Paperwhite. Note not just the increase in screen size, but also how the screen corners get a little more rounded with each release. Credit: Andrew Cunningham

I don’t want to oversell how fast the new Kindle is, because it’s still not like an E-Ink screen can really compete with an LCD or OLED panel for smoothness of animations or UI responsiveness. But even compared to the 2021 Paperwhite, tapping buttons, opening menus, opening books, and turning pages feels considerably snappier—not quite instantaneous, but without the unexplained pauses and hesitation that longtime Kindle owners will be accustomed to. For those who type out notes in their books, even the onscreen keyboard feels fluid and responsive.

Compared to the 2018 Paperwhite (again, the first waterproofed model, and the last one with a 6-inch screen and micro USB port), the difference is night and day. While it still feels basically fine for reading books, I find that the older Kindle can sometimes pause for so long when opening menus or switching between things that I wonder if it’s still working or whether it’s totally locked up and frozen.

“Kindle benchmarks” aren’t really a thing, but I attempted to quantify the performance improvements by running some old browser benchmarks using the Kindle’s limited built-in web browser and Google’s ancient Octane 2.0 test—the 2018, 2021, and 2024 Kindles are all running the same software update here (5.17.0), so this should be a reasonably good apples-to-apples comparison of single-core processor speed.

The new Kindle is actually way faster than older models. Credit: Andrew Cunningham

The 2021 Kindle was roughly 30 percent faster than the 2018 Kindle. The new Paperwhite is nearly twice as fast as the 2021 Paperwhite, and well over twice as fast as the 2018 Paperwhite. That alone is enough to explain the tangible difference in responsiveness between the devices.
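The arithmetic behind that comparison is simple enough to show directly. This sketch uses the generation-over-generation ratios described above rather than raw Octane scores, so the numbers are approximate.

```python
# Cumulative Kindle speedup implied by the generation-over-generation ratios above.

gen_over_gen = {
    "2021 Paperwhite vs 2018": 1.3,  # roughly 30 percent faster
    "2024 Paperwhite vs 2021": 2.0,  # nearly twice as fast
}

cumulative = 1.0
for label, ratio in gen_over_gen.items():
    cumulative *= ratio
    print(f"{label}: {ratio:.1f}x")

print(f"2024 Paperwhite vs 2018: roughly {cumulative:.1f}x")
# -> roughly 2.6x, i.e. "well over twice as fast" as the 2018 model.
```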

Turning to the new Paperwhite’s other improvements: compared side by side, the new screen is appreciably bigger, more noticeably so than the 0.2-inch size difference might suggest. And it doesn’t make the Paperwhite much larger, though it is a tiny bit taller in a way that will wreck compatibility with existing cases. But you only really appreciate the upgrade if you’re coming from one of the older 6-inch Kindles.

Review: Amazon’s 2024 Kindle Paperwhite makes the best e-reader a little better Read More »

after-working-with-a-dual-screen-portable-monitor-for-a-month,-i’m-a-believer

After working with a dual-screen portable monitor for a month, I’m a believer

I typically used the FlipGo Pro with a 16:10 laptop screen, meaning that the portable monitor provided me with a taller view that differed from what most laptops offer. When the FlipGo Pro is working as one unified screen, it delivers a 6:2 (or 2:6) experience. These unusual aspect ratios, combined with the ability to easily rotate the lightweight FlipGo Pro from portrait to landscape mode and to swap between a dual and unified monitor, amplified the gadget’s versatility while keeping its desk space requirements minimal.

Dual-screen monitors edge out dual-screen PCs

The appeal of a device that can bring you two times the screen space without being a burden to carry around is obvious. Many of the options until now, however, have felt experimental, fragile, or overly niche for most people to consider.

I recently gave praise to the concept behind a laptop with a secondary screen that attaches to the primary through a 360-degree hinge on the primary display’s left side:

The AceMagic X1 dual-screen laptop. Credit: Scharon Harding

Unlike the dual-screen Lenovo Yoga Book 9i, the AceMagic X1 has an integrated keyboard and touchpad. However, the PC’s questionable durability and dated components and its maker’s sketchy reputation (malware was once found inside AceMagic mini PCs) prevent me from recommending the laptop.

Meanwhile, something like the FlipGo Pro succeeds where today’s dual-screen laptops fall short in their quest to provide extra screen space. With its quick swapping between one and two screens and its simple adjustability, it’s easy for users of various OSes to get the most out of it. As tech companies continue exploring the integration of extra screens, products like the FlipGo Pro remind me of the importance of evolution over sacrifice. A second screen has less value if it takes the place of critical features or quality builds. While a dual portable monitor isn’t as flashy or groundbreaking as a laptop with two full-size displays built in, when well-executed, it could be significantly more helpful—which, at least for now, is groundbreaking enough.

After working with a dual-screen portable monitor for a month, I’m a believer Read More »

review:-the-fastest-of-the-m4-macbook-pros-might-be-the-least-interesting-one

Review: The fastest of the M4 MacBook Pros might be the least interesting one


Not a surprising generational update, but a lot of progress for just one year.

The new M4 Pro and M4 Max MacBook Pros. Credit: Andrew Cunningham

In some ways, my review of the new MacBook Pros will be a lot like my review of the new iMac. This is the third year and fourth generation of the Apple Silicon-era MacBook Pro design, and outwardly, few things have changed about the new M4, M4 Pro, and M4 Max laptops.

Here are the things that are different. Boosted RAM capacities, across the entire lineup but most crucially in the entry-level $1,599 M4 MacBook Pro, make the new laptops a shade cheaper and more versatile than they used to be. The new nano-texture display option, a $150 upgrade on all models, is a lovely matte-textured coating that completely eliminates reflections. There’s a third Thunderbolt port on the baseline M4 model (the M3 model had two), and it can drive up to three displays simultaneously (two external, plus the built-in screen). There’s a new webcam. It looks a little nicer and has a wide-angle lens that can show what’s on your desk instead of your face if you want it to. And there are new chips, which we’ll get to.

That is essentially the end of the list. If you are still using an Intel-era MacBook Pro, I’ll point you to our previous reviews, which mostly celebrate the improvements (more and different kinds of ports, larger screens) while picking one or two nits (they are a bit larger and heavier than late-Intel MacBook Pros, and the display notch is an eyesore).

New chips: M4 and M4 Pro

That leaves us with the M4, M4 Pro, and M4 Max.

We’ve already talked a bunch about the M4 and M4 Pro in our reviews of the new iMac and the new Mac minis, but to recap, the M4 is a solid generational upgrade over the M3, thanks to its two extra efficiency cores on the CPU side. Comparatively, the M4 Pro is a much larger leap over the M3 Pro, mostly because the M3 Pro was such a mild update compared to the M2 Pro.

The M4’s single-core performance is between 14 and 21 percent faster than the M3’s in our tests, and tests that use all the CPU cores are usually 20 or 30 percent faster. The GPU is occasionally as much as 33 percent faster than the M3’s in our tests, though more often, the improvements are in the single or low double digits.

For the M4 Pro—bearing in mind that we tested the fully enabled version with 14 CPU cores and 20 GPU cores, and not the slightly cut down version sold in less expensive machines—single-core CPU performance is up by around 20-ish percent in our tests, in line with the regular M4’s performance advantage over the regular M3. The huge boost to CPU core count increases multicore performance by between 50 and 60 percent most of the time, a substantial boost that actually allows the M4 Pro to approach the CPU performance of the 2022 M1 Ultra. GPU performance is up by around 33 percent compared to M3 Pro, thanks to the additional GPU cores and memory bandwidth, but it’s still not as fast as any of Apple’s Max or Ultra chips, even the M1-series.

M4 Max

And finally, there’s the M4 Max (again, the fully enabled version, this one with 12 P-cores, 4 E-cores, 40 GPU cores, and 546GB/s of memory bandwidth). Single-core CPU performance is the biggest leap forward, jumping by between 18 and 28 percent in single-threaded benchmarks. Multi-core performance is generally up by between 15 and 20 percent. That’s a more-than-respectable generational leap, but it’s nowhere near what happened for the M4 Pro, since the M3 Max and M4 Max have the same CPU core counts.

The only weird thing we noticed in our testing was inconsistent performance in our Handbrake video encoding test. Every time we ran it, it reliably took either five minutes and 20 seconds or four minutes and 30 seconds. For the slower result, power usage was also slightly reduced, which suggests to me that some kind of throttling is happening during this workload; we saw roughly these two results over and over across a dozen or so runs, each separated by at least five minutes to allow the Mac to cool back down. High Power mode didn’t make a difference in either direction.

| Chip | CPU P/E-cores | GPU cores | RAM options | Display support (including internal) | Memory bandwidth |
|---|---|---|---|---|---|
| Apple M4 Max (low) | 10/4 | 32 | 36GB | Up to five | 410GB/s |
| Apple M4 Max (high) | 12/4 | 40 | 48/64/128GB | Up to five | 546GB/s |
| Apple M3 Max (high) | 12/4 | 40 | 48/64/128GB | Up to five | 409.6GB/s |
| Apple M2 Max (high) | 8/4 | 38 | 64/96GB | Up to five | 409.6GB/s |

We shared our data with Apple and haven’t received a response. Note that we tested the M4 Max in the 16-inch MacBook Pro, and we’d expect any kind of throttling behavior to be slightly more noticeable in the 14-inch Pro since it has less room for cooling hardware.

The faster result is more in line with the rest of our multi-core tests for the M4 Max. Even the slower of the two results is faster than the M3 Max, albeit not by much. We also didn’t notice similar behavior for any of the other multi-core tests we ran. It’s worth keeping in mind if you plan to use the MacBook Pro for CPU-heavy, sustained workloads that will run for more than a few minutes at a time.
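If you want to check for this kind of bimodal behavior on your own machine, the general approach is simple: time the same encode repeatedly with a cooldown between runs and look at how the results cluster. Here’s a minimal sketch in Python, assuming the HandBrakeCLI command-line tool is installed; the input file and preset are placeholders rather than our exact test settings.

```python
# Minimal repeated-encode timing loop for spotting throttling. Assumes HandBrakeCLI
# is installed; "input.mkv" and the preset name are placeholders, not the settings
# used for the review's benchmark.
import subprocess
import time

RUNS = 12
COOLDOWN_SECONDS = 5 * 60  # let the machine cool back down between runs

for i in range(RUNS):
    start = time.monotonic()
    subprocess.run(
        ["HandBrakeCLI", "-i", "input.mkv", "-o", f"out_{i}.mp4",
         "--preset", "Fast 1080p30"],
        check=True,
        capture_output=True,  # keep encoder progress output out of the log
    )
    minutes = (time.monotonic() - start) / 60
    print(f"run {i + 1}: {minutes:.2f} minutes")
    if i < RUNS - 1:
        time.sleep(COOLDOWN_SECONDS)
```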

GPU performance gains in our tests vary widely compared to the M3 Max, with results ranging from as little as 10 or 15 percent (for 4K and 1440p GFXBench tests—the bigger boost to the 1080p version is coming partially from CPU improvements) to as high as 30 percent for the Cinebench 2024 GPU test. I suspect the benefits will vary depending on how much the apps you’re running benefit from the M4 Max’s improved memory bandwidth.

Power efficiency in the M4 Max isn’t dramatically different from the M3 Max—it’s more efficient by virtue of using roughly the same amount of power as the M3 Max and running a little faster, consuming less energy overall to do the same amount of work.

Credit: Andrew Cunningham

Finally, in a test of High Power mode, we did see some very small differences in the GFXBench scores, though not in other GPU-based tests like Cinebench and Blender or in any CPU-based tests. You might notice slightly better performance in games if you’re running them, but as with the M4 Pro, it doesn’t seem hugely beneficial. This is different from how it’s handled in many Windows PCs, including Arm-based Snapdragon X Elite machines, which do show substantially different performance in high-performance mode relative to the default “balanced” mode.

Nice to see you, yearly upgrade

The 14-inch and 16-inch MacBook Pros. The nano-texture glass displays eliminate all of the normal glossy-screen reflections and glare. Credit: Andrew Cunningham

The new MacBook Pros are all solid year-over-year upgrades, though they’ll be most interesting to people who bought their last MacBook Pro toward the end of the Intel era sometime in 2019 or 2020. The nano-texture display, extra speed, and extra RAM may be worth a look for owners of the M1 MacBook Pros if you truly need the best performance you can get in a laptop. But I’d still draw a pretty bright line between latter-day Intel Macs (aging, hot, getting toward the end of the line for macOS updates, not getting all the features of current macOS versions anyway) and any kind of Apple Silicon Mac (fully supported with all features, still-current designs, barely three years old at most).

Frankly, the computer that benefits the most is probably the $1,599 entry-level MacBook Pro, which, thanks to the 16GB RAM upgrade and improved multi-monitor support, is a fairly capable professional computer. Of all the places where Apple’s previous 8GB RAM floor felt inappropriate, it was in the M3 MacBook Pro. With the extra ports, high-refresh-rate screen, and nano-texture coating option, it’s a bit easier to articulate the kind of user who that laptop is actually for, separating it a bit from the 15-inch MacBook Air.

The M4 Pro version also deserves a shout-out for its particularly big performance jump compared to the M2 Pro and M3 Pro generations. It’s a little odd to have a MacBook Pro generation where the middle chip is the most impressive of the three, and that’s not to discount how fast the M4 Max is—it’s just the reality of the situation given Apple’s focus on efficiency rather than performance for the M3 Pro.

The good

  • RAM upgrades across the whole lineup. This particularly benefits the $1,599 M4 MacBook Pro, which jumps from 8GB to 16GB
  • M4 and M4 Max are both respectable generational upgrades and offer substantial performance boosts from Intel or even M1 Macs
  • M4 Pro is a huge generational leap, as Apple’s M3 Pro used a more conservative design
  • Nano-texture display coating is very nice and not too expensive relative to the price of the laptops
  • Better multi-monitor support for M4 version
  • Other design things—ports, 120 Hz screen, keyboard, and trackpad—are all mostly the same as before and are all very nice

The bad

  • Occasional evidence of M4 Max performance throttling, though it’s inconsistent, and we only saw it in one of our benchmarks
  • Need to jump all the way to M4 Max to get the best GPU performance

The ugly

  • Expensive, especially once you start considering RAM and storage upgrades

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Review: The fastest of the M4 MacBook Pros might be the least interesting one Read More »

macos-15-sequoia:-the-ars-technica-review

macOS 15 Sequoia: The Ars Technica review

Apple

The macOS 15 Sequoia update will inevitably be known as “the AI one” in retrospect, introducing, as it does, the first wave of “Apple Intelligence” features.

That’s funny because none of that stuff is actually ready for the 15.0 release that’s coming out today. A lot of it is coming “later this fall” in the 15.1 update, which Apple has been testing entirely separately from the 15.0 betas for weeks now. Some of it won’t be ready until after that—rumors say image generation won’t be ready until the end of the year—but in any case, none of it is ready for public consumption yet.

But the AI-free 15.0 release does give us a chance to evaluate all of the non-AI additions to macOS this year. Apple Intelligence is sucking up a lot of the media oxygen, but in most other ways, this is a typical 2020s-era macOS release, with one or two headliners, several quality-of-life tweaks, and some sparsely documented under-the-hood stuff that will subtly change how you experience the operating system.

The AI-free version of the operating system is also the one that all users of the remaining Intel Macs will be using, since all of the Apple Intelligence features require Apple Silicon. Most of the Intel Macs that ran last year’s Sonoma release will run Sequoia this year—the first time this has happened since 2019—but the difference between the same macOS version running on different CPUs will be wider than it has been. It’s a clear indicator that the Intel Mac era is drawing to a close, even if support hasn’t totally ended just yet.

macOS 15 Sequoia: The Ars Technica review Read More »