Tech


The NPU in your phone keeps improving—why isn’t that making AI better?


Shrinking AI for your phone is no simple matter.

The NPU in your phone might not be doing very much. Credit: Aurich Lawson | Getty Images

Almost every technological innovation of the past several years has been laser-focused on one thing: generative AI. Many of these supposedly revolutionary systems run on big, expensive servers in a data center somewhere, but at the same time, chipmakers are crowing about the power of the neural processing units (NPUs) they have brought to consumer devices. Every few months, it’s the same thing: This new NPU is 30 or 40 percent faster than the last one. That’s supposed to let you do something important, but no one really gets around to explaining what that is.

Experts envision a future of secure, personal AI tools with on-device intelligence, but does that match the reality of the AI boom? AI on the “edge” sounds great, but almost every AI tool of consequence is running in the cloud. So what’s that chip in your phone even doing?

What is an NPU?

Companies launching a new product often get bogged down in superlatives and vague marketing speak, so they do a poor job of explaining technical details. It’s not clear to most people buying a phone why they need the hardware to run AI workloads, and the supposed benefits are largely theoretical.

Many of today’s flagship consumer processors are systems-on-a-chip (SoCs) because they incorporate multiple computing elements—like CPU cores, GPUs, and imaging controllers—on a single piece of silicon. This is true of mobile parts like Qualcomm’s Snapdragon or Google’s Tensor, as well as PC components like the Intel Core Ultra.

The NPU is a newer addition to chips, but it didn’t just appear one day—there’s a lineage that brought us here. NPUs are good at what they do because they emphasize parallel computing, something that’s also important in other SoC components.

Qualcomm devotes significant time during its new product unveilings to talk about its Hexagon NPUs. Keen observers may recall that this branding has been reused from the company’s line of digital signal processors (DSPs), and there’s a good reason for that.

“Our journey into AI processing started probably 15 or 20 years ago, wherein our first anchor point was looking at signal processing,” said Vinesh Sukumar, Qualcomm’s head of AI products. DSPs are architecturally similar to NPUs but much simpler, with a focus on processing audio (e.g., speech recognition) and modem signals.

The NPU is one of multiple components in modern SoCs. Credit: Qualcomm

As the collection of technologies we refer to as “artificial intelligence” developed, engineers began using DSPs for more types of parallel processing, like long short-term memory (LSTM). Sukumar explained that as the industry became enamored with convolutional neural networks (CNNs), the technology underlying applications like computer vision, DSPs became focused on matrix functions, which are essential to generative AI processing as well.

While there is an architectural lineage here, it’s not quite right to say NPUs are just fancy DSPs. “If you talk about DSPs in the general term of the word, yes, [an NPU] is a digital signal processor,” said MediaTek Assistant Vice President Mark Odani. “But it’s all come a long way and it’s a lot more optimized for parallelism, how the transformers work, and holding huge numbers of parameters for processing.”

Despite being so prominent in new chips, NPUs are not strictly necessary for running AI workloads on the “edge,” a term that differentiates local AI processing from cloud-based systems. CPUs are slower than NPUs but can handle some light workloads without using as much power. Meanwhile, GPUs can often chew through more data than an NPU, but they use more power to do it. And there are times you may want to do that, according to Qualcomm’s Sukumar. For example, running AI workloads while a game is running could favor the GPU.

“Here, your measurement of success is that you cannot drop your frame rate while maintaining the spatial resolution, the dynamic range of the pixel, and also being able to provide AI recommendations for the player within that space,” says Sukumar. “In this kind of use case, it actually makes sense to run that in the graphics engine, because then you don’t have to keep shifting between the graphics and a domain-specific AI engine like an NPU.”
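To make that trade-off concrete, here is a minimal, hypothetical sketch of the scheduling decision Sukumar describes. The engine names, thresholds, and workload metric are invented for illustration and don’t correspond to any vendor’s actual API.

```python
# Hypothetical sketch of the trade-off described above: which on-device
# engine an app might pick for an AI workload. The engine names, thresholds,
# and workload metric are illustrative only, not any vendor's actual API.

def pick_engine(workload_mops: float, gpu_busy_with_game: bool,
                battery_sensitive: bool) -> str:
    """Return the compute block a simple scheduler might favor."""
    if gpu_busy_with_game:
        # Per Sukumar's example, in-game AI can stay on the graphics engine
        # to avoid shuttling data back and forth between the GPU and NPU.
        return "gpu"
    if workload_mops < 50 and battery_sensitive:
        return "cpu"   # light tasks: slower, but the lowest power draw
    if workload_mops > 5_000:
        return "gpu"   # heavy bursts: more raw throughput, at a power cost
    return "npu"       # sustained ML work: best performance per watt


if __name__ == "__main__":
    print(pick_engine(10, gpu_busy_with_game=False, battery_sensitive=True))    # cpu
    print(pick_engine(500, gpu_busy_with_game=False, battery_sensitive=False))  # npu
    print(pick_engine(500, gpu_busy_with_game=True, battery_sensitive=False))   # gpu
```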

Livin’ on the edge is hard

Unfortunately, the NPUs in many devices sit idle (and not just during gaming). The mix of local versus cloud AI tools favors the latter because that’s the natural habitat of LLMs. AI models are trained and fine-tuned on powerful servers, and that’s where they run best.

A server-based AI, like the full-fat versions of Gemini and ChatGPT, is not resource-constrained like a model running on your phone’s NPU. Consider the latest version of Google’s on-device Gemini Nano model, which has a context window of 32k tokens. That is a more than 2x improvement over the last version. However, the cloud-based Gemini models have context windows of up to 1 million tokens, meaning they can process much larger volumes of data.
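To put those context windows in perspective, here is a rough, hypothetical calculation assuming about four characters per token, a common rule of thumb rather than an exact tokenizer count; the “novel” size is a made-up example.

```python
# Rough illustration of what a 32k vs. 1 million token context window means.
# Assumes ~4 characters per token, a common rule of thumb rather than an
# exact tokenizer measurement; the "novel" size is a made-up example.

CHARS_PER_TOKEN = 4

def approx_tokens(num_chars: int) -> int:
    return num_chars // CHARS_PER_TOKEN

novel_chars = 300 * 1_800          # ~300 pages at ~1,800 characters per page
novel_tokens = approx_tokens(novel_chars)

print(f"~{novel_tokens:,} tokens")                              # ~135,000
print("Fits 32k on-device window:", novel_tokens <= 32_000)     # False
print("Fits 1M cloud window:     ", novel_tokens <= 1_000_000)  # True
```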

Both cloud-based and edge AI hardware will continue getting better, but the balance may not shift in the NPU’s favor. “The cloud will always have more compute resources versus a mobile device,” said Google’s Shenaz Zack, senior product manager on the Pixel team.

“If you want the most accurate models or the most brute force models, that all has to be done in the cloud,” Odani said. “But what we’re finding is that, in a lot of the use cases where there’s just summarizing some text or you’re talking to your voice assistant, a lot of those things can fit within three billion parameters.”

Squeezing AI models onto a phone or laptop involves some compromise—for example, by reducing the parameters included in the model. Odani explained that cloud-based models run hundreds of billions of parameters, the weighting that determines how a model processes input tokens to generate outputs. You can’t run anything like that on a consumer device right now, so developers have to vastly scale back the size of models for the edge. Odani says MediaTek’s latest ninth-generation NPU can handle about 3 billion parameters—a difference of roughly two orders of magnitude.

The amount of memory available in a phone or laptop is also a limiting factor, so mobile-optimized AI models are usually quantized. That means the model’s estimation of the next token runs with less precision. Let’s say you want to run one of the larger open models, like Llama or Gemma 7b, on your device. The de facto standard is FP16, known as half-precision. At that level, a model with 7 billion parameters will lock up 13 or 14 gigabytes of memory. Stepping down to FP4 (quarter-precision) brings the size of the model in memory to a few gigs.
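The memory math behind those figures is simple enough to sketch. The snippet below counts only the weights themselves, ignoring activations, KV cache, and runtime overhead, so real-world usage runs somewhat higher.

```python
# Back-of-the-envelope weight memory for a 7-billion-parameter model at
# different precisions. Counts only the weights; activations, KV cache, and
# runtime overhead are ignored, so real usage is somewhat higher.

PARAMS = 7_000_000_000

def weight_memory_gb(params: int, bits_per_param: int) -> float:
    return params * bits_per_param / 8 / 1e9   # decimal gigabytes

print(f"FP16 (half precision):    {weight_memory_gb(PARAMS, 16):.1f} GB")  # 14.0 GB
print(f"FP8:                      {weight_memory_gb(PARAMS, 8):.1f} GB")   # 7.0 GB
print(f"FP4 (quarter precision):  {weight_memory_gb(PARAMS, 4):.1f} GB")   # 3.5 GB
```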

“When you compress to, let’s say, between three and four gigabytes, it’s a sweet spot for integration into memory constrained form factors like a smartphone,” Sukumar said. “And there’s been a lot of investment in the ecosystem and at Qualcomm to look at various ways of compressing the models without losing quality.”

It’s difficult to create a generalized AI with these limitations for mobile devices, but computers—and especially smartphones—are a wellspring of data that can be pumped into models to generate supposedly helpful outputs. That’s why most edge AI is geared toward specific, narrow use cases, like analyzing screenshots or suggesting calendar appointments. Google says its latest Pixel phones run more than 100 AI models, both generative and traditional.

Even AI skeptics can recognize that the landscape is changing quickly. In the time it takes to shrink and optimize AI models for a phone or laptop, new cloud models may appear that make that work obsolete. This is also why third-party developers have been slow to utilize NPU processing in apps. They either have to plug into an existing on-device model, which involves restrictions and rapidly moving development targets, or deploy their own custom models. Neither is a great option currently.

A matter of trust

If the cloud is faster and easier, why go to the trouble of optimizing for the edge and burning more power with an NPU? Leaning on the cloud means accepting a level of dependence and trust in the people operating AI data centers that may not always be appropriate.

“We always start off with user privacy as an element,” said Qualcomm’s Sukumar. He explained that the best inference is not general in nature—it’s personalized based on the user’s interests and what’s happening in their lives. Fine-tuning models to deliver that experience calls for personal data, and it’s safer to store and process that data locally.

Even when companies say the right things about privacy in their cloud services, those assurances are far from guarantees. The helpful, friendly vibe of general chatbots also encourages people to divulge a lot of personal information, and if that assistant is running in the cloud, your data is there as well. OpenAI’s copyright fight with The New York Times could lead to millions of private chats being handed over to the publisher. The explosive growth and uncertain regulatory framework of gen AI make it hard to know what’s going to happen to your data.

“People are using a lot of these generative AI assistants like a therapist,” Odani said. “And you don’t know one day if all this stuff is going to come out on the Internet.”

Not everyone is so concerned. Zack claims Google has built “the world’s most secure cloud infrastructure,” allowing it to process data where it delivers the best results. Zack uses Video Boost and Pixel Studio as examples of this approach, noting that Google’s cloud is the only way to make these experiences fast and high-quality. The company recently announced its new Private AI Compute system, which it claims is just as safe as local AI.

Even if that’s true, the edge has other advantages—edge AI is just more reliable than a cloud service. “On-device is fast,” Odani said. “Sometimes I’m talking to ChatGPT and my Wi-Fi goes out or whatever, and it skips a beat.”

The services hosting cloud-based AI models aren’t just a single website—the Internet of today is massively interdependent, with content delivery networks, DNS providers, hosting, and other services that could degrade or shut down your favorite AI in the event of a glitch. When Cloudflare suffered a self-inflicted outage recently, ChatGPT users were annoyed to find their trusty chatbot was unavailable. Local AI features don’t have that drawback.

Cloud dominance

Everyone seems to agree that a hybrid approach is necessary to deliver truly useful AI features (assuming those exist), sending data to more powerful cloud services when necessary—Google, Apple, and every other phone maker does this. But the pursuit of a seamless experience can also obscure what’s happening with your data. More often than not, the AI features on your phone aren’t running in a secure, local way, even when the device has the hardware to do that.

Take, for example, the new OnePlus 15. This phone has Qualcomm’s brand-new Snapdragon 8 Elite Gen 5, which has an NPU that is 37 percent faster than the last one, for whatever that’s worth. Even with all that on-device AI might, OnePlus is heavily reliant on the cloud to analyze your personal data. Features like AI Writer and the AI Recorder connect to the company’s servers for processing, a system OnePlus assures us is totally safe and private.

Similarly, Motorola released a new line of foldable Razr phones over the summer that are loaded with AI features from multiple providers. These phones can summarize your notifications using AI, but you might be surprised how much of it happens in the cloud unless you read the terms and conditions. If you buy the Razr Ultra, that summarization happens on your phone. However, the cheaper models with less RAM and NPU power use cloud services to process your notifications. Again, Motorola says this system is secure, but a more secure option would have been to re-optimize the model for its cheaper phones.

Even when an OEM focuses on using the NPU hardware, the results can be lacking. Look at Google’s Daily Hub and Samsung’s Now Brief. These features are supposed to chew through all the data on your phone and generate useful recommendations and actions, but they rarely do anything aside from showing calendar events. In fact, Google has temporarily removed Daily Hub from Pixels because the feature did so little, and Google is a pioneer in local AI with Gemini Nano. Google has actually moved some parts of its mobile AI experience from local to cloud-based processing in recent months.

Those “brute force” models appear to be winning, and it doesn’t hurt that companies also get more data when you interact with their private computing cloud services.

Maybe take what you can get?

There’s plenty of interest in local AI, but so far, that hasn’t translated to an AI revolution in your pocket. Most of the AI advances we’ve seen so far depend on the ever-increasing scale of cloud systems and the generalized models that run there. Industry experts say that extensive work is happening behind the scenes to shrink AI models to work on phones and laptops, but it will take time for that to make an impact.

In the meantime, local AI processing is out there in a limited way. Google still makes use of the Tensor NPU to handle sensitive data for features like Magic Cue, and Samsung really makes the most of Qualcomm’s AI-focused chipsets. While Now Brief is of questionable utility, Samsung is cognizant of how reliance on the cloud may impact users, offering a toggle in the system settings that restricts AI processing to run only on the device. This limits the number of available AI features, and some of those that remain don’t work as well, but you’ll know none of your personal data is being shared. No one else offers this option on a smartphone.

Samsung offers an easy toggle to disable cloud AI and run all workloads on-device. Credit: Ryan Whitwam

Samsung spokesperson Elise Sembach said the company’s AI efforts are grounded in enhancing experiences while maintaining user control. “The on-device processing toggle in One UI reflects this approach. It gives users the option to process AI tasks locally for faster performance, added privacy, and reliability even without a network connection,” Sembach said.

Interest in edge AI might be a good thing even if you don’t use it. Planning for this AI-rich future can encourage device makers to invest in better hardware—like more memory to run all those theoretical AI models.

“We definitely recommend our partners increase their RAM capacity,” said Sukumar. Indeed, Google, Samsung, and others have boosted memory capacity in large part to support on-device AI. Even if the cloud is winning, we’ll take the extra RAM.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.



After nearly 30 years, Crucial will stop selling RAM to consumers

DRAM contract prices have increased 171 percent year over year, according to industry data. Gerry Chen, general manager of memory manufacturer TeamGroup, warned that the situation will worsen in the first half of 2026 once distributors exhaust their remaining inventory. He expects supply constraints to persist through late 2027 or beyond.

The fault lies squarely at the feet of AI mania in the tech industry. The construction of new AI infrastructure has created unprecedented demand for high-bandwidth memory (HBM), the specialized DRAM used in AI accelerators from Nvidia and AMD. Memory manufacturers have been reallocating production capacity away from consumer products toward these more profitable enterprise components, and Micron has presold its entire HBM output through 2026.

A photo of the “Stargate I” site in Abilene, Texas. AI data center sites like this are eating up the RAM supply. Credit: OpenAI

At the moment, the structural imbalance between AI demand and consumer supply shows no signs of easing. OpenAI’s Stargate project has reportedly signed agreements for up to 900,000 wafers of DRAM per month, which could account for nearly 40 percent of global production.
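Taken at face value, those two figures imply a rough number for total global DRAM wafer output. The short calculation below is only a derivation from the report above, not an independent estimate.

```python
# Derivation only: if up to 900,000 wafers per month is "nearly 40 percent"
# of global DRAM production, the implied total is roughly 2.25 million
# wafers per month. No independent data here, just the figures above.

stargate_wafers_per_month = 900_000
share_of_global = 0.40

implied_global = stargate_wafers_per_month / share_of_global
print(f"Implied global DRAM output: ~{implied_global:,.0f} wafers/month")
# Implied global DRAM output: ~2,250,000 wafers/month
```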

The shortage has already forced companies to adapt. As Ars’ Andrew Cunningham reported, laptop maker Framework stopped selling standalone RAM kits in late November to prevent scalping and said it will likely be forced to raise prices soon.

For Micron, the calculus is clear: Enterprise customers pay more and buy in bulk. But for the DIY PC community, the decision will leave PC builders with one fewer option when reaching for the RAM sticks. In his statement, Sadana reflected on the brand’s 29-year run.

“Thanks to a passionate community of consumers, the Crucial brand has become synonymous with technical leadership, quality and reliability of leading-edge memory and storage products,” Sadana said. “We would like to thank our millions of customers, hundreds of partners and all of the Micron team members who have supported the Crucial journey for the last 29 years.”



Prime Video pulls eerily emotionless AI-generated anime dubs after complaints

[S]o many talented voice actors, and you can’t even bother to hire a couple to dub a season of a show??????????? absolutely disrespectful.

Naturally, anime voice actors took offense, too. Damian Mills, for instance, said via X that voicing a “notable queer-coded character like Kaworu” in three Evangelion movie dubs for Prime Video (in 2007, 2009, and 2012) “meant a lot, especially being queer myself.”

Mills, who also does voice acting for other anime, including One Piece (Tanaka) and Dragon Ball Super (Frieza) added, “… using AI to replace dub actors on #BananaFish? It’s insulting and I can’t support this. It’s insane to me. What’s worse is Banana Fish is an older property, so there was no urgency to get a dub created.”

Amazon also seems to have strayed from its March statement announcing that it would use AI to dub content “that would not have been dubbed otherwise.” No Game, No Life: Zero, for example, already had an English dub: Sentai Filmworks released one with human voice actors in 2017.

Some dubs pulled

On Tuesday, Gizmodo reported that “several of the English language AI dubs for anime such as Banana Fish, No Game No Life: Zero, and more have now been removed.” However, some AI-generated dubs remain as of this writing, including an English dub for the anime series Pet and a Spanish one for Banana Fish, Ars Technica has confirmed.

Amazon hasn’t commented on the AI-generated dubs or why it took some of them down.

All of this comes despite Amazon’s March announcement that the AI-generated dubs would use “human expertise” for “quality control.”

The sloppy dubbing of cherished anime titles reflects a lack of precision in the broader industry as companies seek to leverage generative AI to save time and money. Prime Video has already been criticized for using AI-generated movie summaries and posters this year. And this summer, anime streaming service Crunchyroll blamed bad AI-generated subtitles on an agreement “violation” by a “third-party vendor.”



Testing shows why the Steam Machine’s 8GB of graphics RAM could be a problem

By Valve’s admission, its upcoming Steam Machine desktop isn’t swinging for the fences with its graphical performance. The specs promise decent 1080p-to-1440p performance in most games, with 4K occasionally reachable with assistance from FSR upscaling—about what you’d expect from a box with a modern midrange graphics card in it.

But there’s one spec that has caused some concern among Ars staffers and others with their eyes on the Steam Machine: The GPU comes with just 8GB of dedicated graphics RAM, an amount that is steadily becoming more of a bottleneck for midrange GPUs like AMD’s Radeon RX 7600 and 9060, or Nvidia’s GeForce RTX 4060 or 5060.

In our reviews of these GPUs, we’ve already run into some games where the RAM ceiling limits performance in Windows, especially at 1440p. But we’ve been doing more extensive testing of various GPUs with SteamOS, and we can confirm that in current betas, 8GB GPUs struggle even more on SteamOS than they do running the same games at the same settings in Windows 11.

The good news is that Valve is working on solutions, and having a stable platform like the Steam Machine to aim for should help improve things for other hardware with similar configurations. The bad news is there’s plenty of work left to do.

The numbers

We’ve tested an array of dedicated and integrated Radeon GPUs under SteamOS and Windows, and we’ll share more extensive results in another article soon (along with broader SteamOS-vs-Windows observations). But for our purposes here, the two GPUs that highlight the issues most effectively are the 8GB Radeon RX 7600 and the 16GB Radeon RX 7600 XT.

These dedicated GPUs have the benefit of being nearly identical to what Valve plans to ship in the Steam Machine—32 compute units (CUs) instead of Valve’s 28, but the same RDNA3 architecture. They’re also, most importantly for our purposes, pretty similar to each other—the same physical GPU die, just with slightly higher clock speeds and more RAM for the 7600 XT than for the regular 7600.



Google announces second Android 16 release of 2025 is heading to Pixels

Material 3 Expressive came to Pixels earlier this year but not as part of the first Android 16 upgrade—Google’s relationship with Android versions is complicated these days. Regardless, Material 3 will get a bit more cohesive on Pixels following this update. Google will now apply Material theming to all icons on your device automatically, replacing legacy colored icons with theme-friendly versions. Similarly, dark mode will be supported across more apps, even if the devs haven’t added support. Google is also adding a few more icon shape options if you want to jazz up your home screen.

Android 16 screens. Credit: Google

By way of functional changes, Google has added a more intuitive way of managing parental controls—you can just use the managed device directly. Parents will be able to set a PIN code for accessing features like screen time, app usage, and so on without grabbing a different device. If you want more options or control, the new on-device settings will also help you configure Google Family Link.

Android for all

No Pixel? No problem. Google has also bundled up a collection of app and system updates that will begin rolling out today for all supported Android devices.

Chrome for Android is getting an update with tab pinning, mirroring a feature that has been in the desktop version since time immemorial. The Google Messages app is also taking care of some low-hanging fruit. When you’re invited to a group chat by a new number, the app will display group information and a one-tap option to leave and report the chat as spam.

Google’s official dialer app comes on Pixels, but it’s also in the Play Store for anyone to download. If you and your contacts use Google Dialer, you’ll soon be able to place calls with a “reason.” You can flag a call as “Urgent” to indicate to the recipient that they shouldn’t send you to voicemail. The urgent label will also remain in the call history if they miss the call.



Samsung reveals Galaxy Z TriFold with 10-inch foldable screen, astronomical price

Samsung has a new foldable smartphone, and it’s not just another Z Flip or Z Fold. The Galaxy Z TriFold has three articulating sections that house a massive 10-inch tablet-style screen, along with a traditional smartphone screen on the outside. The lavish new smartphone is launching this month in South Korea with a hefty price tag, and it will eventually make its way to the US in early 2026.

Samsung says it refined its Armor FlexHinge design for the TriFold. The device’s two hinges are slightly different sizes because the phone’s three panels have distinct shapes. The center panel is the thickest at 4.2 mm, and the other two are fractions of a millimeter thinner. The phone has apparently been designed to account for the varying sizes and weights, allowing the frame to fold up tight in a pocketable form factor.

Huawei’s impressive Mate XT tri-fold phones have been making the rounds online, but they’re not available in Western markets. Samsung’s new foldable looks similar at a glance, but the way the three panels fit together is different. The Mate XT folds in a Z-shaped configuration, using part of the main screen as the cover display. On Samsung’s phone, the left and right segments fold inward behind the separate cover screen. Samsung claims it has tested the design extensively to verify that the hinges will hold up to daily use for years.


While this does push the definition of “pocketable” for some people, the Galaxy Z TriFold is a tablet that technically fits in your pocket. When folded, it measures 12.9 mm thick, which is much more unwieldy than the Galaxy Z Fold 7’s 8.9 mm profile. However, the TriFold is only a little thicker than Samsung’s older tablet-style foldables like the Galaxy Z Fold 6. The 1080p cover screen measures 6.5 inches, which is also quite similar to the Z Fold 7. It is very, very heavy for a phone, though, tipping the scales at 309 g.



Even Microsoft’s retro holiday sweaters are having Copilot forced upon them

I can take or leave some of the things that Microsoft is doing with Windows 11 these days, but I do usually enjoy the company’s yearly limited-time holiday sweater releases. Usually crafted around a specific image or product from the company’s ’90s-and-early-2000s heyday—2022’s sweater was Clippy themed, and 2023’s was just the Windows XP Bliss wallpaper in sweater form—the sweaters usually hit the exact combination of dorky/cute/recognizable that makes for a good holiday party conversation starter.

Microsoft is reviving the tradition for 2025 after taking a year off, and the design for this year’s flagship $80 sweater is mostly in line with what the company has done in past years. The 2025 “Artifact Holiday Sweater” revives multiple pixelated icons that Windows 3.1-to-XP users will recognize, including Notepad, Reversi, Paint, MS-DOS, Internet Explorer, and even the MSN butterfly logo. Clippy is, once again, front and center, looking happy to be included.

Not all of the icons are from Microsoft’s past; a sunglasses-wearing emoji, a “50” in the style of the old flying Windows icon (for Microsoft’s 50th anniversary), and a Minecraft Creeper face all nod to the company’s more modern products. But the only one I really take issue with is on the right sleeve, where Microsoft has stuck a pixelated monochrome icon for its Copilot AI assistant.



Netflix quietly drops support for casting to most TVs

Have you been trying to cast Stranger Things from your phone, only to find that your TV isn’t cooperating? It’s not the TV—Netflix is to blame for this one, and it’s intentional. The streaming app has recently updated its support for Google Cast to disable the feature in most situations. You’ll need to pay for one of the company’s more expensive plans, and even then, Netflix will only cast to older TVs and streaming dongles.

The Google Cast system began appearing in apps shortly after the original Chromecast launched in 2013. Since then, Netflix users have been able to start video streams on TVs and streaming boxes from the mobile app. That was vital for streaming targets without their own remote or on-screen interface, but times change.

Today, Google has moved beyond the remote-free Chromecast experience, and most TVs have their own standalone Netflix apps. Netflix itself is also allergic to anything that would allow people to share passwords or watch in a new place. Over the last couple of weeks, Netflix updated its Android app to remove most casting options, mirroring a change in 2019 to kill Apple AirPlay.

The company’s support site (spotted by Android Authority) now clarifies that casting is only supported in a narrow set of circumstances. First, you need to be paying for one of the ad-free service tiers, which start at $18 per month. Those on the $8 ad-supported plan won’t have casting support.

Even then, casting only appears for devices without a remote, like the earlier generations of Google Chromecasts, as well as some older TVs with Cast built in. For example, anyone still rocking Google’s 3rd Gen Chromecast from 2018 can cast video in Netflix, but those with the 2020 Chromecast dongle (which has a remote and a full Android OS) will have to use the TV app. Essentially, anything running Android/Google TV or a smart TV with a full Netflix app will force you to log in before you can watch anything.



We put the new pocket-size vinyl format to the test—with mixed results


Is that a record in your pocket?

It’s a fun new format, but finding a place in the market may be challenging.

A 4-inch Tiny Vinyl record. Credit: Chris Foresman

We recently looked at Tiny Vinyl, a new miniature vinyl single format developed through a collaboration between a toy industry veteran and the world’s largest vinyl record manufacturer. The 4-inch singles are pressed in a process nearly identical to standard 12-inch LPs or 7-inch singles, except everything is smaller. They have a standard-size spindle hole and play at 33⅓ RPM, and they hold up to four minutes of music per side.

Several smaller bands, like The Band Loula and Rainbow Kitten Surprise, and some industry veterans like Blake Shelton and Melissa Etheridge, have already experimented with the format. But Tiny Vinyl partnered with US retail giant Target for its big coming-out party this fall, with 44 exclusive titles launching throughout the end of this year.

Tiny Vinyl supplied a few promotional copies of releases from former America’s Got Talent finalist Grace VanderWaal, The Band Loula, country pop stars Florida Georgia Line, and jazz legends the Vince Guaraldi Trio so I could get a first-hand look at how the records actually play. I tested these titles as well as several others I picked up at retail, playing them on an Audio Technica LP-120 direct drive manual turntable connected to a Yamaha S-301 integrated amplifier and playing through a pair of vintage Klipsch kg4 speakers.

I also played them on a Crosley portable suitcase-style turntable, and for fun, I tried to play them on the miniature RSD3 turntable made for 3-inch singles to see what’s possible with a variety of hardware.

Tiny Vinyl releases cover several genres, including hip-hop, rock, country, pop, indie, and show tunes. Credit: Chris Foresman

Automatic turntables need not apply

First and foremost, I’ll note that the 4-inch diameter is essentially the same size as the label on a standard 12-inch LP. So any sort of automatic turntable won’t really work for 4-inch vinyl; most aren’t equipped to set the stylus down at anything other than the 12-inch or 7-inch positions, and even if they could, the automatic return would kick in before reaching the grooves where the music starts. Some automatic turntables allow switching to a manual mode, but they otherwise cannot play Tiny Vinyl records.

But if you have a turntable with a fully manual tonearm—including everything from DJ-style direct-drive turntables to audiophile belt-drive models like those from Fluance, U-turn, or Pro-ject—you’re in luck. The tonearm can be placed on these records, and they will track the grooves well.

Lining up the stylus can be a challenge with such small records, but once it’s in place, the stylus on my LP120—a nude elliptical—tracked well. I also tried a few listens with a standard conical stylus since that’s what would be most common across a variety of low- and mid-range turntables. The elliptical stylus tracked slightly better in my experience; higher-end styli may track the extremely fine grooves even better but would probably be overkill given that the physical limitations of the format introduce some distortion, which would likely be more apparent with such gear.

While Tiny Vinyl will probably appeal most to pop music fans, I played a variety of music styles, including rock, country, dance pop, hip-hop, jazz, and even showtunes. The main sonic difference I noted when a direct comparison was available was that the Tiny Vinyl version of a track tended to sound quieter than the same track playing on a 12-inch LP at the same volume setting on the amplifier.

This Kacey Musgraves Tiny Vinyl includes songs from her album Deeper Well. Credit: Chris Foresman

It’s not unusual for different records to be mastered at different volumes; making the overall sound quieter means smaller modulations in the groove, so the grooves can be placed closer together. This is true for any album that has a side running longer than about 22 minutes, but it’s especially important to maintain the four-minute runtime on Tiny Vinyl. (This is also why the last song or two on many LP sides tend to be quieter or slower songs; it’s easier for these songs to sound better at the center of the record, where linear tracking speed decreases.)
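The physics behind that parenthetical is easy to sketch: at a constant 33⅓ RPM, the groove’s linear speed under the stylus scales with its radius, so a 4-inch record spends its entire side at what would be “inner groove” speeds on an LP. The radii below are rough assumptions, not measurements.

```python
# At a constant 33 1/3 RPM, the linear speed of the groove under the stylus
# is proportional to the groove radius. The radii here are rough assumptions
# used only to show the scale of the difference.

import math

RPM = 100 / 3  # 33 1/3 revolutions per minute

def groove_speed_cm_per_s(radius_cm: float) -> float:
    return 2 * math.pi * radius_cm * RPM / 60

for label, radius_cm in [
    ("12-inch LP, outermost groove (~14.6 cm radius)", 14.6),
    ("12-inch LP, innermost groove  (~6.0 cm radius)", 6.0),
    ("4-inch Tiny Vinyl, outermost groove (~4.8 cm radius)", 4.8),
]:
    print(f"{label}: ~{groove_speed_cm_per_s(radius_cm):.0f} cm/s")
# Roughly 51, 21, and 17 cm/s, respectively: less vinyl passes the stylus
# per second near the center, which limits high-frequency detail.
```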

That said, most of the songs I listened to tended to have a slight but audible increase in distortion as the grooves approached the physical limits of alignment for the stylus. This was usually only perceptible in the last several seconds of a song, which more discerning listeners would likely find objectionable. But sound quality overall is still comparable to typical vinyl records. It won’t compare to the most exacting pressings from the likes of Mobile Fidelity Labs, for instance, but then again, the sort of audiophile who would pay for the equipment to get the most out of such records probably won’t buy Tiny Vinyl in the first place, except perhaps as a conversation piece.

I also tried playing the Tiny Vinyl singles on a Crosley suitcase-style turntable since it has a manual tone arm. The model I have on hand has an Audio Technica AT3600L cartridge and stereo speakers, so it’s a bit nicer than the entry-level Cruiser models you’ll typically find at malls or department stores. But these are extremely popular first turntables for a lot of young people, so it seemed reasonable to consider how Tiny Vinyl sounds on these devices.

Unfortunately, I couldn’t play Tiny Vinyl on this turntable. Despite having a manual tone arm and an option to turn off the auto-start and stop of the turntable platter, the Crosley platter is designed for 7-inch and 12-inch vinyl—the Tiny Vinyl I tried wouldn’t even spin on the turntable without the addition of a slipmat of some kind.

Once I got it spinning, though, the tone arm simply would not track beyond the first couple of grooves before hitting some physical limitation of its gimbal. Since many of the suitcase-style turntables often share designs and parts, I suspect this would be a problem for most of the Crosley, Victrola, or other brands you might find at a big-box retailer.

Some releases really take advantage of the extra real estate of the gatefold jacket and printed inner sleeve. Credit: Chris Foresman

Additionally, I compared the classic track “Linus and Lucy” from A Charlie Brown Christmas with a 2012 pressing of the full album, as well as the 2019 3-inch version using an adapter, all on the LP-120, to give readers the best comparison across formats.

Again, the LP version of the seminal soundtrack from A Charlie Brown Christmas sounded bright and noticeably louder than its 4-inch counterpart. No major surprises here. And of course, the LP includes the entire soundtrack, so if you’re a big fan of the film or the kind of contemplative, piano-based jazz that Vince Guaraldi is famous for, you’ll probably spring for the full album.

The 3-inch version of “Linus and Lucy” unsurprisingly sounds fairly comparable to the Tiny Vinyl version, with a much quieter playback at the same amplifier settings. But it also sounds a lot noisier, likely due to the differences in materials used in manufacturing.

Though 3-inch records can play on standard turntables, as I did here, they’re designed to go hand-in-hand with one of the many Crosley RSD3 variants released in the last five years, or on the Crosley Mini Cruiser turntable. If you manage to pick up an original 8ban player, you could get the original lo-fi, “noisy analog” sound that Bandai had intended as well. That’s really part of the 3-inch vinyl aesthetic.

Newer 3-inch vinyl singles are coming with a standard spindle hole, which makes them easier to play on standard turntables. It also means there are now adapters for the tiny spindle to fit these holes, so you can technically put a 4-inch single on them. But due to the design of the tonearm and its rest, the stylus won’t swing out to the edge of Tiny Vinyl; instead, you can only play starting at grooves around the 3-inch mark. It’s a little unfortunate because it would otherwise be fun to play these miniature singles on hardware that is a little more right-sized ergonomically.

Big stack of tiny records. Credit: Chris Foresman

Four-inch Tiny Vinyl singles, on the other hand, are intended to be played on standard turntables, and they do that fairly well as long as you can manually place the tonearm and it’s not otherwise limited physically from tracking its grooves. The sound was not expected to compare to a quality 12-inch pressing, and it doesn’t. But it still sounds good. And especially if your available space is at a premium, you might consider a Tiny Vinyl with the most well-known and popular tracks from a certain album or artist (like these songs from A Charlie Brown Christmas) over a full album that may cost upward of $35.

Fun for casual listeners, not for audiophiles

Overall, Tiny Vinyl still offers much of the visceral experience of playing standard vinyl records—the cover art, the liner notes, handling the record as you place it on the turntable—just in miniature. The cost is less than a typical LP, and the weight is significantly less, so there are definite benefits for casual listeners. On the other hand, serious collectors will gravitate toward 12-inch albums and—perhaps less so—7-inch singles. Ironically, the casual listeners the format would most likely appeal to are the least likely to have the equipment to play it. That will limit Tiny Vinyl’s mass-market appeal outside of just being a cool thing to put on the shelf that technically could be played on a turntable.

The Good:

  • Small enough to easily fit in a jacket pocket or the like
  • Use fewer resources to make and ship
  • With the gatefold jacket, printed inner sleeve, and color vinyl options, these look as cool as most full-size albums
  • Play fine on manual turntables

The Bad:

  • Sound quality is (unsurprisingly) compromised
  • Price isn’t lower than typical 7-inch singles

The Ugly:

  • Won’t work on automatic-only turntables, like the very popular AT-LP60 series or the very popular suitcase-style turntables that are often an inexpensive “first” turntable for many



Plex’s crackdown on free remote streaming access starts this week

Plex has previously emphasized its need to keep up with “rising costs,” which include providing support for many different devices and codecs. It has also said that it needs money to implement new features, including an integration with Common Sense Media, a new “bespoke server management app” for managing server users, and “an open and documented API for server integrations,” including custom metadata agents, per a March blog post.

In January 2024, TechCrunch reported that Plex was nearing profitability and raised $40 million in funding (Plex raised a $50 million growth equity round in 2021). Theoretically, the new remote access rules can also increase subscription revenue and help Plex’s backers see returns on their investments.

However, Plex’s evolution could isolate long-time users who have relied on Plex as a media server for years and those who aren’t interested in subscriptions, FAST (free ad-supported streaming TV) channels, or renting movies. Plex is unlikely to give up on its streaming business, though. In 2023, Scott Hancock, Plex’s then-VP of marketing, said that Plex had more people using its online streaming service than using its media server features since 2022. For people seeking software packages more squarely focused on media hosting, Plex alternatives, like Jellyfin, increasingly look attractive.



GPU prices are coming to earth just as RAM costs shoot into the stratosphere

It’s not just PC builders

PC and phone manufacturers—and makers of components that use memory chips, like GPUs—mostly haven’t hiked prices yet. These companies buy components in large quantities, and they typically do so ahead of time, dulling the impact of the increases in the short term. The kinds of price increases we see, and what costs are passed on to consumers, will vary from company to company.

Bloomberg reports that Lenovo is “stockpiling memory and other critical components” to get it through 2026 without issues and that the company “will aim to avoid passing on rising costs to its customers in the current quarter.” Apple may also be in a good position to weather the shortage; analysts at Morgan Stanley and Bernstein Research believe that Apple has already laid claim to the RAM that it needs and that its healthy profit margins will allow it to absorb the increases better than most.

Framework, on the other hand, a smaller company best known for its repairable and upgradeable laptop designs, says “it is likely we will need to increase memory pricing soon” to reflect price increases from its suppliers. The company has also stopped selling standalone RAM kits in its online store in an effort to fight scalpers who are trying to capitalize on the shortages.

Tom’s Hardware reports that AMD has told its partners that it expects to raise GPU prices by about 10 percent starting next year and that Nvidia may have canceled a planned RTX 50-series Super launch entirely because of shortages and price increases (the main draw of this Super refresh, according to the rumor mill, would have been a bump from 2GB GDDR7 chips to 3GB chips, boosting memory capacities across the lineup by 50 percent).



Vision Pro M5 review: It’s time for Apple to make some tough choices


A state of the union from someone who actually sort of uses the thing.

The M5 Vision Pro with the Dual Knit Band. Credit: Samuel Axon

With the recent releases of visionOS 26 and newly refreshed Vision Pro hardware, it’s an ideal time to check in on Apple’s Vision Pro headset—a device I was simultaneously amazed and disappointed by when it launched in early 2024.

I still like the Vision Pro, but I can tell it’s hanging on by a thread. Content is light, developer support is tepid, and while Apple has taken action to improve both, it’s not enough, and I’m concerned it might be too late.

When I got a Vision Pro, I used it a lot: I watched movies on planes and in hotel rooms, I walked around my house placing application windows and testing out weird new ways of working. I tried all the neat games and educational apps, and I watched all the immersive videos I could get ahold of. I even tried my hand at developing my own applications for it.

As the months went on, though, I used it less and less. The novelty wore off, and as cool as it remained, practicality beat coolness. By the time Apple sent me the newer model a couple of weeks ago, I had only put the original one on a few times in the prior couple of months. I had mostly stopped using it at home, but I still took it on trips as an entertainment device for hotel rooms now and then.

That’s not an uncommon story. You even see it in the subreddit for Vision Pro owners, which ought to be the home of the device’s most dedicated fans. Even there, people say, “This is really cool, but I have to go out of my way to keep using it.”

Perhaps it would have been easier to bake it into my day-to-day habits if developer and content creator support had been more robust, a classic chicken-and-egg problem.

After a few weeks of using the new Vision Pro hardware refresh daily, it’s clear to me that the platform needs a bigger rethink. As a fan of the device, I’m concerned it won’t get that, because all the rumors point to Apple pouring its future resources into smart glasses, which, to me, are a completely different product category.

What changed in the new model?

For many users, the most notable change here will be something you can buy separately (albeit at great expense) for the old model: A new headband that balances the device’s weight on your head better, making it more comfortable to wear for long sessions.

Dubbed the Dual Knit Band, it comes with an ingeniously simple adjustment knob that can be used to tighten or loosen either the band that goes across the back of your head (similar to the old band) or the one that wraps around the top.

It’s well-designed, and it will probably make the Vision Pro easier to use for many people who found the old model to be too uncomfortable—even though this model is slightly heavier than its predecessor.

The band fit is adjusted with this knob. You can turn it to loosen or tighten one strap, then pull it out and turn it again to adjust the other. Credit: Samuel Axon

I’m one of the lucky few who never had any discomfort problems with the Vision Pro, but I know a bunch of folks who said the pressure the device put on their foreheads was unbearable. That’s exactly what this new band remedies, so it’s nice to see.

The M5 chip offers more than just speed

Whereas the first Vision Pro had Apple’s M2 chip—which was already a little behind the times when it launched—the new one adds the M5. It’s much faster, especially for graphics-processing and machine-learning tasks. We’ve written a lot about the M5 in our articles on other Apple products if you’re interested in learning more about it.

Functionally, this means a lot of little things are a bit faster, like launching certain applications or generating a Persona avatar. I’ll be frank: I didn’t notice any difference that significantly impacted the user experience. I’m not saying I couldn’t tell it was faster sometimes. I’m just saying it wasn’t faster in a way that’s meaningful enough to change any attitudes about the device.

It’s most noticeable with games—both native mixed-reality Vision Pro titles and the iPad versions of demanding games that you can run on a virtual display on the device. Demanding 3D games look and run nicer, in many cases. The M5 also supports more recent graphics advancements like ray tracing and mesh shading, though very few games support them, even among the iPad versions.

All this is to say that while I always welcome performance improvements, they are definitely not enough to convince an M2 Vision Pro owner to upgrade, and they won’t tip things over for anyone who has been on the fence about buying one of these things.

The main perk of the new chip is improved efficiency, which is the driving force behind modestly increased battery life. When I first took the M2 Vision Pro on a plane, I tried watching 2021’s Dune. I made it through the movie, but just barely; the battery ran out during the closing credits. It’s not a short movie, but there are longer ones.

Now, the new headset can easily get another 30 or 60 minutes, depending on what you’re doing, which finally puts it in “watch any movie you want” territory.

Given how short battery life was in the original version, even a modest bump like that makes a big difference. That, alongside a marginally increased field of view (about 10 percent) and a new 120 Hz maximum refresh rate for passthrough, rounds out the best of the new hardware. These are nice-to-haves, but they’re not transformational by any means.

We already knew the Vision Pro offered excellent hardware (even if it’s overkill for most users), but the platform’s appeal is really driven by software. Unfortunately, this is where things are running behind expectations.

For content, it’s quality over quantity

When the first Vision Pro launched, I was bullish about the promise of the platform—but a lot of that was contingent on a strong content cadence and third-party developer support.

And as I’ve written since, the content cadence for the first year was a disappointment. Whereas I expected weekly episodes of Apple’s Immersive Videos in the TV app, those short videos arrived with gaps of several months. There’s an enormous wealth of great immersive content outside of Apple’s walled garden, but Apple didn’t seem interested in making that easily accessible to Vision Pro owners. Third-party apps did some of that work, but they lagged behind those on other platforms.

The first-party content cadence picked up after the first year, though. Plus, Apple introduced the Spatial Gallery, a built-in app that aggregates immersive 3D photos and the like. It’s almost TikTok-like in that it lets you scroll through short-form content that leverages what makes the device unique, and it’s exactly the sort of thing that the platform so badly needed at launch.

The Spatial Gallery is sort of like a horizontally-scrolling TikTok for 3D photos and video. Credit: Samuel Axon

The content that is there—whether in the TV app or the Spatial Gallery—is fantastic. It’s beautifully, professionally produced stuff that really leans on the hardware. For example, there is an autobiographical film focused on U2’s Bono that does some inventive things with the format that I had never seen or even imagined before.

Bono, of course, isn’t everybody’s favorite, but if you can stomach the film’s bloviating, it’s worth watching just with an eye to what a spatial video production can or should be.

I still think there’s significant room to grow, but the content situation is better than ever. It’s not enough to keep you entertained for hours a day, but it’s enough to make putting on the headset for a bit once a week or so worth it. That wasn’t there a year ago.

The software support situation is in a similar state.

App support is mostly frozen in the year 2024

Many of us have a suite of go-to apps that are foundational to our individual approaches to daily productivity. For me, primarily a macOS user, they are:

  • Firefox
  • Spark
  • Todoist
  • Obsidian
  • Raycast
  • Slack
  • Visual Studio Code
  • Claude
  • 1Password

As you can see, I don’t use most of Apple’s built-in apps—no Safari, no Mail, no Reminders, no Passwords, no Notes… no Spotlight, even. All that may be atypical, but it has never been a problem on macOS, nor has it been on iOS for a few years now.

Impressively, almost all of these are available on visionOS—but only because it can run iPad apps as flat, virtual windows. Firefox, Spark, Todoist, Obsidian, Slack, 1Password, and even Raycast are all available as supported iPad apps, but surprisingly, Claude isn’t, even though there is a Claude app for iPads. (ChatGPT’s iPad app works, though.) VS Code isn’t available, of course, but I wasn’t expecting it to be.

Not a single one of these applications has a true visionOS app. That’s too bad, because I can think of lots of neat things spatial computing versions could do. Imagine browsing your Obsidian graph in augmented reality! Alas, I can only dream.

You can tell the native apps from the iPad ones: The iPad ones have rectangular icons nested within circles, whereas the native apps fill the whole circle. Credit: Samuel Axon

If you’re not as much of a productivity software geek as I am and you use Apple’s built-in apps, things look a little better. But surprisingly, there are still a few apps that you would expect to have really cool spatial computing features—like Apple Maps—and don’t. Maps, too, is just an iPad app.

Even if you set productivity aside and focus on entertainment, there are still frustrating gaps. Almost two years later, there is still no Netflix or YouTube app. There are decent-enough third-party options for YouTube, but you have to watch Netflix in a browser, which is lower-quality than in a native app and looks horrible on one of the Vision Pro’s big virtual screens.

To be clear, there is a modest trickle of interesting spatial app experiences coming in—most of them games, educational apps, or cool one-off ideas that are fun to check out for a few minutes.

All this is to say that nothing has really changed since February 2024. There was an influx of apps at launch that included a small number of show-stoppers (mostly educational apps), but the rest ranged from “basically the iPad app but with one or two throwaway tech-demo-style spatial features you won’t try more than once” to “basically the iPad app but a little more native-feeling” to “literally just the iPad app.” As far as support from popular, cross-platform apps, it’s mostly the same list today as it was then.

Its killer app is that it’s a killer monitor

Even though Apple hasn’t made a big leap forward in developer support, it has made big strides in making the Vision Pro a nifty companion to the Mac.

From the start, it has had a feature that lets you simply look at a Mac’s built-in display, tap your fingers, and launch a large, resizable virtual monitor. I have my own big, multi-monitor setup at home, but I have used the Vision Pro this way sometimes when traveling.

I had some complaints at the start, though. It could only do one monitor, and that monitor was limited to 60 Hz and a standard widescreen resolution. That’s better than just using a 14-inch MacBook Pro screen, but it’s a far cry from the sort of high-end setup a $3,500 price tag suggests. Furthermore, it didn’t allow you to switch audio between the two devices.

Thanks to both software and hardware updates, that has all changed. visionOS now supports three different monitor sizes: the standard widescreen aspect ratio, a wider one that resembles a standard ultra-wide monitor, and a gigantic, ultra-ultra-wide wrap-around display that I can assure you will leave no one wanting for desktop space. It looks great. Problem solved! Likewise, it will now transfer your Mac audio to the Vision Pro or its Bluetooth headphones automatically.

All of that works not just on the new Vision Pro, but also on the M2 model. The new M5 model exclusively addresses the last of my complaints: You can now achieve higher refresh rates for that virtual monitor than 60 Hz. Apple says it goes “up to 120 Hz,” but there’s no available tool for measuring exactly where it’s landing. Still, I’m happy to see any improvement here.

This is the standard width for the Mac monitor feature… Credit: Samuel Axon

Through a series of updates, Apple has turned a neat proof-of-concept feature into something that is genuinely valuable—especially for folks who like ultra-wide or multi-monitor setups but have to travel a lot (like myself) or who just don’t want to invest in the display hardware at home.

You can also play your Mac games on this monitor. I tried playing No Man’s Sky and Cyberpunk 2077 on it with a controller, and it was a fantastic experience.

This, alongside spatial video and watching movies, is the Vision Pro’s current killer app and one of the main areas where Apple has clearly put a lot of effort into improving the platform.

Stop trying to make Personas happen

Strangely, another area where Apple has invested quite a bit to make things better is in the Vision Pro’s usefulness as a communications and meetings device. Personas—the 3D avatars of yourself that you create for Zoom calls and the like—were absolutely terrible when the M2 Vision Pro came out.

There is also EyeSight, which uses your Persona to show a simulacrum of your eyes to people around you in the real world, letting them know you are aware of your surroundings and even allowing them to follow your gaze. I understand the thought behind this feature—Apple doesn’t want mixed reality to be socially isolating—but it sometimes puts your eyes in the wrong place, it’s kind of hard to see, and it honestly seems like a waste of expensive hardware.

I’m pleased to report that, primarily via software updates, Personas have drastically improved. Mine now actually looks like me, and it moves more naturally, too.

I joined a FaceTime call with Apple reps where they showed me how Personas float and emote around each other, and how we could look at the same files and assets together. It was indisputably cool and way better than before, thanks to the improved Personas.

I can’t say as much for EyeSight, which looks the same. It’s hard for me to fathom that Apple has put multiple sensors and screens on this thing to support this feature.

In my view, dropping EyeSight would be the single best thing Apple could do for this headset. Most people don’t like it, and most people don’t want it, yet there is no question that its inclusion adds a not-insignificant amount to both the price and the weight, the product’s two biggest barriers to adoption.

Likewise, Personas are theoretically cool, and it is a novel and fun experience to join a FaceTime call with people and see how it works and what you could do. But it’s just that: a novel experience. Once you’ve done it, you’ll never feel the need to do it again. I can barely imagine anyone who would rather show up to a call as a Persona than take the headset off for 30 minutes to dial in on their computer.

Much of this headset is dedicated to the idea that it can be a device that connects you with others, but maintaining that priority is simply the wrong decision. Mixed reality is isolating, and Apple is treating that like a problem to be solved, but I consider that part of its appeal.

If this headset were capable of out-in-the-world AR applications, I would not feel that way, but the Vision Pro doesn’t support any application that would involve taking it outside the home into public spaces. A lot of the cool, theoretical AR uses I can think of would involve that, but still no dice here.

The metaverse (it’s telling that this is the first time I’ve typed that word in at least a year) already exists: It’s on our phones, in Instagram and TikTok and WeChat and Fortnite. It doesn’t need to be invented, and it doesn’t need a new, clever approach to finally make it take off. It has already been invented. It’s already in orbit.

Like the iPad and the Apple Watch before it, the Vision Pro needs to stop trying to be a general-purpose device and instead needs to lean into what makes it special.

In doing so, it will offer a better user experience, and it will get lighter and cheaper, too. There’s real potential there. Unfortunately, Apple may not go that route if leaks and insider reports are to be believed.

There’s still a ways to go, so hopefully this isn’t a dead end

The M5 Vision Pro was the first of four planned new releases in the product line, according to generally reliable industry analyst Ming-Chi Kuo. Next up, he predicted, would be a full Vision Pro 2 release with a redesign, and a Vision Air, a cheaper, lighter alternative. Those would all precede true smart glasses many years down the road.

I liked that plan: keep the full-featured Vision Pro for folks who want the most premium mixed reality experience possible (but maybe drop EyeSight), and launch a cheaper version to compete more directly with headsets like Meta’s Quest line and Valve’s newly announced Steam Frame, along with planned competitors from Google, Samsung, and others.

True augmented reality glasses are an amazing dream, but there are serious optics and user experience problems that we’re still a ways from solving before those glasses can truly replace the smartphone, as Tim Cook once predicted.

All that said, it looks like that plan has been called into question. A Bloomberg report in October claimed that Apple CEO Tim Cook had told employees that the company was redirecting resources from future passthrough HMD products to accelerate work on smart glasses.

Let’s be real: It’s always going to be a once-in-a-while device, not a daily driver. For many people, that would be fine if it cost $1,000. At $3,500, it’s still a nonstarter for most consumers.

I believe there is room for this product in the marketplace. I still think it’s amazing. It’s not going to be as big as the iPhone, or probably even the iPad, but it has already found a small audience that could grow significantly if the price and weight could come down. Removing all the hardware related to Personas and EyeSight would help with that.

I hope Apple keeps working on it. When Apple released the Apple Watch, it wasn’t entirely clear what its niche would be in users’ lives. The answer (health and fitness) became crystal clear over time, and the other ambitions of the device faded away while the company began building on top of what was working best.

You see Apple doing that a little bit with the expanded Mac spatial display functionality. That can be the start of an intriguing journey. But writers have a somewhat crass phrase: “kill your darlings.” It means that you need to be clear-eyed about your work and unsentimentally cut anything that’s not working, even if you personally love it—even if it was the main thing that got you excited about starting the project in the first place.

It’s past time for Apple to start killing some darlings with the Vision Pro, but I truly hope it doesn’t go too far and kill the whole platform.


Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.
