

Smart beds leave sleepers hot and bothered during AWS outage

Some users complained that malfunctioning devices kept them awake for hours. Others bemoaned waking up in the middle of the night drenched in sweat.

Even more basic features, such as alarms, failed to work when Eight Sleep’s servers went down.

Eight Sleep will offer local control

Eight Sleep co-founder and CEO Matteo Franceschetti addressed the problems via X on Monday:

The AWS outage has impacted some of our users since last night, disrupting their sleep. That is not the experience we want to provide and I want to apologize for it.

We are taking two main actions:

1) We are restoring all the features as AWS comes back. All devices are currently working, with some experiencing data processing delays.

2) We are currently outage-proofing your Pod experience and we will be working tonight-24/7 until that is done.

On Monday evening, Franceschetti said that “all the features should be working.” On Tuesday, he claimed that a local control option would be available on Wednesday “at the latest” without providing more detail.

Eight Sleep users will be relieved to hear that the company is working to make their products usable during Internet outages. But many are also questioning why Eight Sleep didn’t implement local control sooner. This isn’t Eight Sleep’s first outage, and users can also experience personal Wi-Fi problems. And there’s an obvious benefit to users being able to control their bed’s elevation and temperature without the Internet, or if Eight Sleep ever goes out of business.

For Eight Sleep, though, making flagship features available without its app while still making enough money isn’t easy. Without forcing people to put their Eight Sleep devices online, it would be harder for Eight Sleep to convince people that Autopilot subscriptions should be mandatory. Pod hardware’s high prices will deter people from multiple or frequent purchases, making alternative, more frequent revenue streams key for the 11-year-old company’s survival.

After a June outage, an Eight Sleep user claimed that the company told him that it was working on an offline mode. This week’s AWS problems seem to have hastened efforts, so users don’t lose sleep during the next outage.



Upcoming iOS and macOS 26.1 update will let you fog up your Liquid Glass

Apple’s new Liquid Glass user interface design was one of the most noticeable and divisive features of its major software updates this year. It added additional fluidity and translucency throughout iOS, iPadOS, macOS, and Apple’s other operating systems, and as we noted in our reviews, the default settings weren’t always great for readability.

The upcoming 26.1 update for all of those OSes is taking a step toward addressing some of the complaints, though not by changing things about the default look of Liquid Glass. Rather, the update is adding a new toggle that will let users choose between a Clear and Tinted look for Liquid Glass, with Clear representing the default look and Tinted cranking up the opacity and contrast.

The new toggle adds a half-step between the default visual settings and the “reduce transparency” setting, which, aside from changing a bunch of other things about the look and feel of the operating system, is buried further down inside the Accessibility options. The Tinted toggle does make colors and vague shapes visible beneath the glass panes, preserving the general look of Liquid Glass while erring on the side of contrast and visibility, whereas the “reduce transparency” setting is more of an all-or-nothing blunt instrument.



YouTube’s likeness detection has arrived to help stop AI doppelgängers

AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators.

Google’s powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened—even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn’t happening.

Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site’s copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Sneak Peek: Likeness Detection on YouTube.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing “Content detection” menu. In YouTube’s demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It’s unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so-stealable faces, but rules are rules.



Testing Apple’s M5 iPad Pro: Future-proofing for Apple’s perennial overkill tablet


It’s a gorgeous tablet, but what does an iPad need with more processing power?

Apple’s 13-inch M5 iPad Pro. Credit: Andrew Cunningham


This year’s iPad Pro is what you might call a “chip refresh” or an “internal refresh.” These refreshes are what Apple generally does for its products for one or two or more years after making a larger external design change. Leaving the physical design alone preserves compatibility with the accessory ecosystem.

For the Mac, chip refreshes are still pretty exciting to me, because many people who use a Mac will, very occasionally, assign it some kind of task where they need it to work as hard and fast as it can, for an extended period of time. You could be a developer compiling a large and complex app, or you could be a podcaster or streamer editing or exporting an audio or video file, or maybe you’re just playing a game. The power and flexibility of the operating system, and first- and third-party apps made to take advantage of that power and flexibility, mean that “more speed” is still exciting, even if it takes a few years for that speed to add up to something users will consistently notice and appreciate.

And then there’s the iPad Pro. Especially since Apple shifted to using the same M-series chips that it uses in Macs, most iPad Pro reviews contain some version of “this is great hardware that is much faster than it needs to be for anything the iPad does.” To wit, our review of the M4 iPad Pro from May 2024:

Still, it remains unclear why most people would spend one, two, or even three thousand dollars on a tablet that, despite its amazing hardware, does less than a comparably priced laptop—or at least does it a little more awkwardly, even if it’s impressively quick and has a gorgeous screen.

Since then, Apple has announced and released iPadOS 26, an update that makes important and mostly welcome changes to how the tablet handles windowed multitasking, file transfers, and some other kinds of background tasks. But this is the kind of thing that isn’t even going to stress out an Apple M1, let alone a chip that’s twice as powerful.

All of this is to say: A chip refresh for an iPad is nice to have. This year’s will also come with a handy RAM increase for many buyers, the first RAM boost that the base model iPad Pro has gotten in more than four years.

But without any other design changes or other improvements to hang its hat on, the fact is that chip refresh years for the iPad Pro only really improve the part of the tablet that needs the least improvement. That doesn’t make them bad; who knows what the hardware requirements will be when iPadOS 30 adds some other batch of multitasking features. But it does mean these refreshes don’t feel particularly exciting or necessary; the most exciting thing about the M5 iPad Pro may be that you can get a good deal on an M4 model as retailers clear out their stock. You aren’t going to notice the difference.

Design: M4 iPad Pro redux

The 13-inch M5 iPad Pro in its Magic Keyboard accessory with the Apple Pencil Pro attached. Credit: Andrew Cunningham

Lest we downplay this tablet’s design, the M4 version of the iPad Pro was the biggest change to the tablet since Apple introduced the modern all-screen design for the iPad Pro back in 2018. It wasn’t a huge departure, but it did introduce the iPad’s first OLED display, a thinner and lighter design, and a slightly improved Apple Pencil and updated range of accessories.

As with the 14-inch M5 MacBook Pro that Apple just launched, how much you’ll like the iPad Pro depends mostly on how you feel about screen technology (the iPad is, after all, mostly screen). If you care about the 120 Hz high-refresh-rate ProMotion screen, the option to add a nano-texture display with a matte finish, and the infinite contrast and boosted brightness of Apple’s OLED displays, those are the best reasons to buy an iPad Pro. The $299/$349 Magic Keyboard accessory for the iPad Pro also comes with backlit keys and a slightly larger trackpad than the equivalent $269/$319 iPad Air accessory.

If none of those things inspire passion in you, or if they’re not worth several hundred extra dollars to you—the nano-texture glass upgrade alone adds $700 to the price of the iPad Pro, because Apple only offers it on the 1TB and 2TB models—then the 11- and 13-inch iPads Air are going to give you a substantively identical experience. That includes compatibility with the same Apple Pencil accessory and support for all the same multitasking and Apple Intelligence features.

The M5 iPad Pro supports the same Apple Pencil Pro as the M4 iPad Pro, and the M2 and M3 iPad Air. Credit: Andrew Cunningham

One other internal change to the new iPad Pro, aside from the M5, is mostly invisible: Wi-Fi, Bluetooth, and Thread connectivity provided by the Apple N1 chip, and 5G cellular connectivity provided by the Apple C1X. Ideally, you won’t notice this swap at all, but it’s a quietly momentous change for Apple. Both of these chips cap several years of acquisitions and internal development, and further reduce Apple’s reliance on external chipmakers like Qualcomm and Broadcom, which has been one of the goals of Apple’s A- and M-series processors all along.

There’s one last change we haven’t really been able to adequately test in the handful of days we’ve had the tablet: new fast-charging support, either with Apple’s first-party Dynamic Power Adapter or any USB-C charger capable of providing 60 W or more of power. When using these chargers, Apple says the tablet’s battery can charge from 0 to 50 percent in 35 minutes. (Apple provides the same battery life estimates for the M5 iPads as the M4 models: 10 hours of Wi-Fi web usage, or 9 hours of cellular web usage, for both the 13- and 11-inch versions of the tablet.)

Two Apple M5 chips, two RAM options

Apple sent us the 1TB version of the 13-inch iPad Pro to test, which means we got the fully enabled version of the M5: four high-performance CPU cores, six high-efficiency CPU cores, 10 GPU cores, a 16-core Neural Engine, and 16GB of RAM.

Apple’s Macs still offer individually configurable processor, storage, and RAM upgrades to users—generally buying one upgrade doesn’t lock you into buying a bunch of other stuff you don’t want or need (though there are exceptions for RAM configurations in some of the higher-end Macs). But for the iPads, Apple still ties the chip and the RAM you get to storage capacity. The 256GB and 512GB iPads get three high-performance CPU cores instead of four, and 12GB of RAM instead of 16GB.

For people who buy the 256GB and 512GB iPads, this does amount to a 50 percent increase in RAM capacity from the M1, M2, and M4 iPad Pro models, or the M1, M2, and M3 iPad Airs, all of which came with 8GB of RAM. High-end models stick with the same 16GB of RAM as before (no 24GB or 32GB upgrades here, though the M5 supports them in Macs). The ceiling is in the same place, but the floor has come up.

Given that iPadOS is still mostly running on tablets with 8GB or less of RAM, I don’t expect the jump from 8GB to 12GB to make a huge difference in the day-to-day experience of using the tablet, at least for now. If you connect your iPad to an external monitor that you use as an extended display, it might help keep more apps in memory at a time; it could help if you edit complex multi-track audio or video files or images, or if you’re trying to run some kind of machine learning or AI workflows locally. Future iPadOS versions could also require more than 8GB of memory for some features. But for now, the benefit exists mostly on paper.

As for benchmarks, the M5’s gains in the iPad are somewhat more muted than they are for the M5 MacBook Pro we tested. We observed a 10 to 15 percent improvement across single- and multi-core CPU tests, and graphics benchmark improvements that mostly hovered in the 15 to 30 percent range. The Geekbench 6 Compute benchmark was one outlier, pointing to a 35 percent increase in GPU performance; it’s possible that GPU- or rendering-heavy workloads benefit a little more from the new neural accelerators in the M5’s GPU cores than games do.

In the MacBook review, we observed that the M5’s CPU generally had higher peak power consumption than the M4. In the fanless iPad Pro, it’s likely that Apple has reined the chip in a little bit to keep it cool, which would explain why the iPad’s M5 doesn’t see quite the same gains.

The M5 and the 12GB RAM minimum do help to put a little more distance between the M3 iPad Air and the Pros. Most iPad workloads don’t benefit in an obvious, user-noticeable way from the extra performance or memory right now, but it’s something you can point to that makes the Pro more “pro” than the Air.

Changed hardware that doesn’t change much

The M5 iPad Pro is nice in the sense that “getting a little more for your money today than you could get for the same money two weeks ago” is nice. But it changes essentially nothing for potential iPad buyers.

I’m hard-pressed to think of anyone who would be well-served by the M5 iPad Pro who wouldn’t have been equally well-served by the M4 version. And if the M4 iPad Pro was already overkill for you, the M5 is just a little more so; the same goes if you have an M1 or M2 model. People with an A12X or A12Z version of the iPad Pro from 2018 or 2020 will benefit more, particularly if they’re multitasking a lot or running into limitations or RAM complaints from the apps they’re using.

But even with the iPadOS 26 update, it still seems like the capabilities of the iPad’s software lag behind the capabilities of the hardware by a few years. That’s to be expected, maybe, for an operating system that has to run on this M5 iPad Pro and a 7-year-old phone processor with 3GB of RAM alike.

I am starting to feel the age of the M1 MacBook Air I use, especially if I’m pushing multiple monitors with it or trying to exceed its 16GB RAM limit. The M1 iPad Air I have, on the other hand, feels like it just got an operating system that unlocks some of its latent potential. That’s the biggest problem with the iPad Pro, really—not that it’s a bad tablet, but that it’s still so much more tablet than you need to do what iPadOS and its apps can currently do.

The good

  • A fast, beautiful tablet that’s a pleasure to use.
  • The 120Hz ProMotion support and OLED display panel make this one of Apple’s best screens, period.
  • 256GB and 512GB models get a bump from 8GB to 12GB of memory.
  • Maintains compatibility with the same accessories as the M4 iPad Pro.

The bad

  • More iPad than pretty much anyone needs.
  • Passively cooled fanless Apple M5 can’t stretch its legs quite as much as the actively cooled Mac version.
  • Expensive accessories.

The ugly

  • All other hardware upgrades, including the matte nano-texture display finish, require a $600 upgrade to the 1TB version of the tablet.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



HBO Max prices increase by up to $20 today

HBO Max subscriptions are getting up to 10 percent more expensive, owner Warner Bros. Discovery (WBD) revealed today.

HBO Max’s ad plan is going from $10 to $11 per month. The ad-free plan is going from $17 to $18.49 per month. And the premium ad-free plan (which adds 4K support, Dolby Atmos, and the ability to download more content) is increasing from $21 to $23 per month.

Meanwhile, prices for HBO Max’s annual plans are increasing from $100 to $110 with ads, $170 to $185 without ads, and $210 to $230 for the premium tier.
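For reference, the per-plan math behind the “up to $20” and “up to 10 percent” figures works out like this (a quick sketch; the prices are from the announcement, the arithmetic is mine):

```python
# Sanity check of the headline claims ("up to $20" and "up to 10 percent")
# against the listed plan prices.
plans = {
    # plan: (old price in USD, new price in USD)
    "monthly, with ads": (10.00, 11.00),
    "monthly, ad-free": (17.00, 18.49),
    "monthly, premium": (21.00, 23.00),
    "annual, with ads": (100, 110),
    "annual, ad-free": (170, 185),
    "annual, premium": (210, 230),
}

for plan, (old, new) in plans.items():
    print(f"{plan}: +${new - old:.2f} ({new / old - 1:.1%})")

max_dollar_hike = max(new - old for old, new in plans.values())   # $20 (annual premium)
max_pct_hike = max(new / old - 1 for old, new in plans.values())  # 10% (the with-ads tiers)
```

The biggest absolute jump is the $20 on the annual premium tier; the biggest relative jump is the 10 percent on the with-ads tiers.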

For current subscribers, the price hikes won’t take effect until November 20, Variety reported. People who try to subscribe to the streaming service from here on out will have to pay the new prices immediately.

Price hike hints

The price hikes follow comments from WBD CEO David Zaslav last month that WBD’s flagship streaming service was “way underpriced.” Speaking at the Goldman Sachs Cornucopia + Technology conference, Zaslav’s reasoning stemmed from the service’s “quality,” as well as people previously spending “on average, $55 for content 10 years ago.”

Another hint that HBO Max would be getting more expensive is its history of getting more expensive. The service most recently raised subscription fees in June 2024, when it made its ad-free plans more expensive. HBO Max’s first price hike was in January 2023. The service launched in May 2020.

HBO Max is getting more expensive as streaming companies grapple with the financial realities of making robust, diverse libraries of classic, new, and exclusive shows and movies available globally and on-demand. HBO Max rivals Disney+, Apple TV, and Peacock have all raised prices since the summer.

For years, WBD has been arguing that streaming services are too cheap. At a Citibank conference in 2023, WBD CFO Gunnar Weidenfels said that collapsing seven media distribution windows into one “and selling it at the lowest possible price doesn’t sound like a very smart strategy.”



MacBook Pro: Apple’s most awkward laptop is the first to show off Apple M5


The Apple M5: one more than M4

Apple M5 trades blows with Pro and Max chips from older generations.

Apple’s M5 MacBook Pro. Credit: Andrew Cunningham


When I’m asked to recommend a Mac laptop for people, Apple’s low-end 14-inch MacBook Pro usually gets lost in the shuffle. It competes with the 13- and 15-inch MacBook Air, significantly cheaper computers that meet or exceed the “good enough” boundary for the vast majority of computer users. The basic MacBook Pro also doesn’t have the benefit of Apple’s Pro or Max-series chips, which come with many more CPU cores, substantially better graphics performance, and higher memory capacity for true professionals and power users.

But the low-end Pro makes sense for a certain type of power user. At $1,599, it’s the cheapest way to get Apple’s best laptop screen, with mini LED technology, a higher 120 Hz ProMotion refresh rate for smoother scrolling and animations, and the optional but lovely nano-texture (read: matte) finish. Unlike the MacBook Air, it comes with a cooling fan, which has historically meant meaningfully better sustained performance and less performance throttling. And it’s also Apple’s cheapest laptop with three Thunderbolt ports, an HDMI port, and an SD card slot, all genuinely useful for people who want to plug lots of things in without having multiple dongles or a bulky dock competing for the Air’s two available ports.

If you don’t find any of those arguments in the basic MacBook Pro’s favor convincing, that’s fine. The new M5 version makes almost no changes to the laptop other than the chip, so it’s unlikely to change your calculus if you already looked at the M3 or M4 version and passed it up. But it is the first Mac to ship with the M5, the first chip in Apple’s fifth-generation chip family and a preview of what’s to come for (almost?) every other Mac in the lineup. So you can at least be interested in the 14-inch MacBook Pro as a showcase for a new processor, if not as a retail product in and of itself.

The Apple Silicon MacBook Pro, take five

Apple has been using this laptop design for about four years now, since it released the M1 Pro and M1 Max versions of the MacBook Pro in late 2021. But for people who are upgrading from an older design—Apple did use the old Intel-era design, Touch Bar and all, for the low-end M1 and M2 MacBook Pros, after all—we’ll quickly hit the highlights.

This basic MacBook Pro only comes in a 14-inch screen size, up from 13 inches for the old low-end MacBook Pro, but some of that space is eaten up by the notch across the top of the display. The strips of screen on either side of the notch are usable by macOS, but only for the menu bar and icons that live in the menu bar—it’s a no-go zone for apps. The laptop is a consistent thickness throughout, rather than tapered, and has somewhat more squared-off and less-rounded corners.

Compared to the 13-inch MacBook Pro, the 14-inch version is the same thickness, but it’s a little heavier (3.4 pounds, compared to 3), wider, and deeper. For most professional users, the extra screen size and the re-addition of the HDMI port and SD card slot mostly justify the slight bump up in size. The laptop also includes three Thunderbolt ports—up from two in the MacBook Airs—and the resurrected MagSafe charging port. But it is worth noting that the 14-inch MacBook Pro is nearly identical in weight to the 15-inch MacBook Air. If screen size is all you’re after, the Air may still be the better choice.

Apple’s included charger uses MagSafe on the laptop end, but USB-C chargers, docks, monitors, and other accessories will continue to charge the laptop if that’s what you prefer to keep using.

I’ve got no gripes about Apple’s current laptop keyboard; Apple uses the same key layout, spacing, and size across the entire MacBook Air and Pro lines. If I had to distinguish between the Pro and Air, I’d say the Pro’s keyboard is very, very slightly firmer and more satisfying to type on, and that the force feedback of its trackpad is just a hair more clicky. The laptop’s speaker system is also more impressive than either MacBook Air’s, with much bassier bass and a better dynamic range.

But the main reason to prefer this low-end Pro to the Air is the screen, particularly the 120 Hz ProMotion support, the improved brightness and contrast of the mini LED display technology, and the option to add Apple’s matte nano texture finish. I usually don’t mind the amount of glare coming off my MacBook Air’s screen too much, but every time I go back to using a nano-texture screen I’m always a bit jealous of the complete lack of glare and reflections and the way you get those benefits without dealing with the dip in image quality you see from many matte-textured screen protectors. The more you use your laptop outdoors or under lighting conditions you can’t control, the more you’ll appreciate it.

The optional nano texture display adds a pleasant matte finish to the screen, but that notch is still notching. Credit: Andrew Cunningham

If the higher refresh rate and the optional matte coating (a $150 upgrade on top of an already pricey computer) don’t appeal to you, or if you can’t pay for them, then you can be pretty confident that this isn’t the MacBook for you. The 13-inch Air is lighter, the 15-inch Air is larger, and both are cheaper. But we’re still only a couple of years past the M2 version of the low-end MacBook Pro, which didn’t give you the extra ports or the Pro-level screen.

But! Before you buy one of the still-M4-based MacBook Airs, our testing of the MacBook Pro’s new M5 chip should give you some idea of whether it’s worth waiting a few months (?) for an Air refresh.

Testing Apple’s M5

We’ve also run some M5 benchmarks as part of our M5 iPad Pro review, but having macOS rather than iPadOS running on top of it does give us a lot more testing flexibility—more benchmarks and a handful of high-end games to run, plus access to the command line for taking a look at power usage and efficiency.

To back up and re-state the chip’s specs for a moment, though, the M5 is constructed out of the same basic parts as the M4: four high-performance CPU cores, six high-efficiency CPU cores (up from four in the M1/M2/M3), 10 GPU cores, and a 16-core Neural Engine for handling some machine-learning and AI workloads.

The M5’s technical improvements are more targeted and subtle than just a boost to clock speeds or core counts. The first is a 27.5 percent increase in memory bandwidth, from the M4’s 120 GB/s to 153 GB/s (achieved, I’m told, by a combination of faster RAM and improvements to the memory fabric that facilitates communication between different areas of the chip). Integrated GPUs are usually bottlenecked by memory bandwidth first and core count second, so memory bandwidth improvements can have a pretty direct, linear impact on graphics performance.

Apple also says it has added a “Neural Accelerator” to each of its GPU cores, separate from the Neural Engine. These will benefit a few specific types of workloads—things like MetalFX graphics upscaling or frame generation that would previously have had to use the Neural Engine can now do that work entirely within the GPU, eliminating a bit of latency and freeing the Neural Engine up to do other things. Apple is also claiming “over 4x peak GPU compute compared to M4,” which it says will speed up locally run AI language models and image generation software. That figure comes mostly from the GPU improvements; according to Geekbench AI, the Neural Engine itself is only around 10 percent faster than the one in the M4.

(A note about testing: The M4 chip in these charts was in an iMac and not a MacBook Pro. But over several hardware generations, we’ve observed that the actively cooled versions of the basic M-series chips perform the same in both laptops and desktops. Comparing the M5 to the passively cooled M4 in the MacBook Air isn’t apples to apples, but comparing it to the M4 in the iMac is.)

Each of Apple’s chip generations has improved over the previous one by low-to-mid double digits, and the M5 is no different. We measured a 12 to 16 percent improvement over the M4 in single-threaded CPU tests, a 20 to 30 percent improvement in multicore tests, and roughly a 40 percent improvement in graphics benchmarks and the Mac version of the built-in Cyberpunk 2077 benchmark (one benchmark, the GPU-based version of the Blender rendering benchmark, measured a larger 60 to 70 percent improvement for the M5’s GPU, suggesting it either benefits more than most apps from the memory bandwidth improvements or the new neural accelerators).

Those performance additions add up over time. The M5 is typically a little over twice as fast as the M1, and it comes close to the performance level of some Pro and Max processors from past generations.
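That roughly-2x figure is what you’d expect from compounding: four successive generational gains in the teens-to-twenties multiply together rather than add. A quick sketch (the per-generation factors below are hypothetical round numbers for illustration, not our benchmark results):

```python
# Illustrative compounding of per-generation speedups (M1 -> M2 -> M3 -> M4 -> M5).
# These per-step factors are hypothetical; the point is that modest annual
# gains multiply into roughly a 2x total after four generations.
gen_speedups = [1.18, 1.15, 1.20, 1.22]  # hypothetical M2..M5 gains over each predecessor

total = 1.0
for s in gen_speedups:
    total *= s

print(f"M5 vs. M1 (illustrative): {total:.2f}x")  # about 1.99x
```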

The M5 MacBook Pro falls short of the M4 Pro, and it will fall even shorter of the M5 Pro whenever it arrives. But its CPU performance generally beats the M3 Pro in our tests, and its GPU performance comes pretty close. Its multi-core CPU performance beats the M1 Max, and its single-core performance is over 80 percent faster. The M5 can’t come close to the graphics performance of any of these older Max or Ultra chips, but if you’re doing primarily CPU-heavy work and don’t need more than 32GB of RAM, the M5 holds up astonishingly well to Apple’s high-end silicon from just a few years ago.

It wasn’t so long ago that this kind of performance improvement was more-or-less normal across the entire tech industry, but Intel, AMD, and Nvidia’s consumer CPUs and GPUs have really slowed their rate of improvement lately, and Intel and AMD are both guilty of re-using old silicon for entry-level chips, over and over again. If you’re using a 6- or 7-year-old PC, sure, you’ll see performance improvements from something new, but it’s more of a crapshoot for a 3- to 4-year-old PC.

If there’s a downside to the M5 in our testing, it’s that its performance improvements seem to come with increased power draw relative to the M4 when all the CPU cores are engaged in heavy lifting. According to macOS’s built-in powermetrics tool, the M5 drew an average of 28 W in our Handbrake video encoding test, compared to around 17 W for the M4 running the same test.

Using software tools to compare power draw between different chip manufacturers or even chip generations is dicey, because you’re trusting that different hardware is reporting its power use to the operating system in similar ways. But assuming they’re accurate, these numbers suggest that Apple could be pushing clock speeds more aggressively this generation to squeeze more performance out of the chip.

This would make some sense, since the third-generation 3nm TSMC manufacturing process used for the M5 (likely N3P) looks like a fairly mild upgrade from the second-generation 3nm process used for the M4 (N3E). TSMC says that N3P can boost performance by 5 percent at the same power use compared to N3E, or reduce power draw by 5 to 10 percent at the same performance. To get to the larger double-digit performance improvements that Apple is claiming and that we measured in our testing, you’d definitely expect to see the overall power consumption increase.

To put the M5 in context, the M2 and the M3 came a bit closer to its average power draw in our video encoding test (23.2 W and 22.7 W, respectively), and the M5’s power draw comes in much lower than that of any past-generation Pro or Max chip. In terms of the amount of energy used to complete the same task, the M5’s efficiency is worse than the M4’s according to powermetrics, but better than older generations’. And Apple’s performance and power efficiency remain well ahead of what Intel or AMD can offer in their high-end products.
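Power draw alone doesn’t determine efficiency; what matters is energy per completed task, i.e. average power multiplied by how long the chip runs. A rough sketch: the wattages below are our powermetrics averages, but the encode durations are made up solely to show how a faster-but-hungrier chip can still come out behind on energy per job:

```python
# Energy-per-task comparison: energy (Wh) = average power (W) x time (h).
# The 17 W / 28 W averages are from our Handbrake test; the encode times are
# hypothetical, chosen only to illustrate the arithmetic.
chips = {
    # name: (average power in watts, hypothetical encode time in minutes)
    "M4": (17.0, 10.0),
    "M5": (28.0, 8.0),  # finishes ~25 percent sooner, but draws more power
}

energy_wh = {name: watts * minutes / 60 for name, (watts, minutes) in chips.items()}
for name, wh in energy_wh.items():
    print(f"{name}: {wh:.2f} Wh per encode")
# Despite finishing sooner, the M5 burns more total energy for the same job here.
```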

Impressive chip, awkward laptop

The low-end MacBook Pro has always occupied an odd in-between place in Apple’s lineup, overlapping in a lot of places with the MacBook Air and without the benefit of the much-faster chips that the 15- and 16-inch MacBook Pros could fit. The M5 MacBook Pro carries on that complicated legacy, and even with the M5 there are still lots of people for whom one of the M4 MacBook Airs is just going to be a better fit.

But it is a very nice laptop, and if your screen is the most important part of your laptop, this low-end Pro does make a decent case for itself. It’s frustrating that the matte display is a $150 upcharge, but it’s an option you can’t get on an Air, and the improved display panel and faster ProMotion refresh rate make scrolling and animations all look smoother and more fluid than they do on an Air’s screen. I still mostly think that this is a laptop without a huge constituency—too much more expensive than the Air, too much slower than the other Pros—but the people who buy it for the screen should still be mostly happy with the performance and ports.

This MacBook Pro is more exciting to me as a showcase for the Apple M5—and I’m excited to see the M5 and its higher-end Pro, Max, and (possibly) Ultra relatives show up in other Macs.

The M5 sports the highest sustained power draw of any M-series chip we’ve tested, but Apple’s past generations (the M4 in particular) have been so efficient that Apple has some room to bump up power consumption while remaining considerably more efficient than anything its competitors are offering. What you get in exchange is an impressively fast chip, as good as or better than many of the Pro or Max chips in previous-generation products. For anyone still riding out the tail end of the Intel era, or for people with M1-class Macs that are showing their age, the M5 is definitely fast enough to feel like a real upgrade. That’s harder to come by in computing than it used to be.

The good

  • M5 is a solid performer that shows how far Apple has come since the M1.
  • Attractive, functional design, with a nice keyboard and trackpad, great-sounding speakers, a versatile selection of ports, and Apple’s best laptop screen.
  • Optional nano-texture display finish looks lovely and eliminates glare.

The bad

  • Harder to recommend than Apple’s other laptops if you don’t absolutely require a ProMotion screen.
  • A bit heavier than other laptops in its size class (and barely lighter than the 15-inch MacBook Air).
  • M5 can use more power than M4 did.

The ugly

  • High price for RAM and storage upgrades, and a $150 upsell for the nano-textured display.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

MacBook Pro: Apple’s most awkward laptop is the first to show off Apple M5 Read More »

google-fi-is-getting-enhanced-web-calls-and-messaging,-ai-bill-summaries

Google Fi is getting enhanced web calls and messaging, AI bill summaries

Google’s Fi cellular service is getting an upgrade, and since this is 2025, there’s plenty of AI involved. You’ll be able to ask Google AI questions about your bill, and a different variation of AI will improve call quality. AI haters need not despair—there are also some upgrades to connectivity and Fi web features.

As part of this update, a new Gemini-powered chatbot will soon be turned loose on your billing statements. The idea is that you can get bill summaries and ask specific questions of the robot without waiting for a real person. Google claims that testers have had positive experiences with the AI billing bot, so it’s rolling the feature out widely.

Next month, Google also plans to flip the switch on an AI audio enhancement. The new “optimized audio” will use AI to filter out background sounds like wind or crowd noise. If you’re using a Pixel, you already have a similar feature for your end of the call. However, this update will reduce background noise on the other end as well. Google’s MVNO has also added support for HD and HD+ calling on supported connections.

The AI stuff aside, Google is making a long-overdue improvement to Fi’s web interface. While Fi added support for RCS messaging fairly early on, the technology didn’t work with the service’s web-based features. If you wanted to call or text from your browser, you had to disable RCS on your account. That is thankfully changing.

Google Fi is getting enhanced web calls and messaging, AI bill summaries Read More »

ring-cameras-are-about-to-get-increasingly-chummy-with-law-enforcement

Ring cameras are about to get increasingly chummy with law enforcement


Amazon’s Ring partners with company whose tech has reportedly been used by ICE.

Ring’s Outdoor Cam Pro. Credit: Amazon

Law enforcement agencies will soon have easier access to footage captured by Amazon’s Ring smart cameras. In a partnership announced this week, Amazon will allow approximately 5,000 local law enforcement agencies to request access to Ring camera footage via surveillance platforms from Flock Safety. Ring’s cooperation with law enforcement, and the reported use of Flock technologies by federal agencies, including US Immigration and Customs Enforcement (ICE), have resurfaced privacy concerns that have followed the devices for years.

According to Flock’s announcement, its Ring partnership allows local law enforcement members to use Flock software “to send a direct post in the Ring Neighbors app with details about the investigation and request voluntary assistance.” Requests must include “specific location and timeframe of the incident, a unique investigation code, and details about what is being investigated,” and users can look at the requests anonymously, Flock said.

“Any footage a Ring customer chooses to submit will be securely packaged by Flock and shared directly with the requesting local public safety agency through the FlockOS or Flock Nova platform,” the announcement reads.

Flock said its local law enforcement users will gain access to Ring Community Requests in “the coming months.”

A flock of privacy concerns

Outside its software platforms, Flock is known for license plate recognition cameras. Flock customers can also search footage from Flock cameras using descriptors to find people, such as “man in blue shirt and cowboy hat.” Besides law enforcement agencies, Flock says 6,000 communities and 1,000 businesses use its products.

For years, privacy advocates have warned against companies like Flock.

This week, US Sen. Ron Wyden (D-Ore.) sent a letter [PDF] to Flock CEO Garrett Langley saying that ICE’s Homeland Security Investigations (HSI), the Secret Service, and the US Navy’s Criminal Investigative Service have had access to footage from Flock’s license plate cameras.

“I now believe that abuses of your product are not only likely but inevitable and that Flock is unable and uninterested in preventing them,” Wyden wrote.

In August, Jay Stanley, senior policy analyst for the ACLU Speech, Privacy, and Technology Project, wrote that “Flock is building a dangerous, nationwide mass-surveillance infrastructure.” Stanley pointed to ICE using Flock’s network of cameras, as well as Flock’s efforts to build a people lookup tool with data brokers.

Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation (EFF), told Ars via email that Flock is a “mass surveillance tool” that “has increasingly been used to spy on both immigrants and people exercising their First Amendment-protected rights.”

Flock has earned this reputation among privacy advocates through its own cameras, not Ring’s.

An Amazon spokesperson told Ars Technica that only local public safety agencies will be able to make Community Requests via Flock software, and that requests will also show the name of the agency making the request.

A Flock spokesperson told Ars:

Flock does not currently have any contracts with any division of [the US Department of Homeland Security], including ICE. The Ring Community Requests process through Flock is only available for local public safety agencies for specific, active investigations. All requests are time and geographically-bound. Ring users can choose to share relevant footage or ignore the request.

Flock’s rep added that all activity within FlockOS and Flock Nova is “permanently recorded in a comprehensive CJIS-compliant audit trail for unalterable custody tracking,” referring to a set of standards created by the FBI’s Criminal Justice Information Services division.

But there’s still concern that federal agencies will end up accessing Ring footage through Flock. Guariglia told Ars:

Even without formal partnerships with federal authorities, data from these surveillance companies flow to agencies like ICE through local law enforcement. Local and state police have run more than 4,000 Flock searches on behalf of federal authorities or with a potential immigration focus, reporting has found. Additionally, just this month, it became clear that Texas police searched 83,000 Flock cameras in an attempt to prosecute a woman for her abortion and then tried to cover it up.

Ring cozies up to the law

This week’s announcement shows Amazon, which acquired Ring in 2018, increasingly positioning its consumer cameras as a law enforcement tool. After years of cops using Ring footage, Amazon last year said that it would stop letting police request Ring footage—unless it was an “emergency”—only to reverse course about 18 months later by allowing police to request Ring footage through a Flock rival, Axon.

While announcing Ring’s deals with Flock and Axon, Ring founder and CEO Jamie Siminoff claimed that the partnerships would help Ring cameras keep neighborhoods safe. But there’s doubt as to whether people buy Ring cameras to protect their neighborhood.

“Ring’s new partnership with Flock shows that the company is more interested in contributing to mounting authoritarianism than servicing the specific needs of their customers,” Guariglia told Ars.

Interestingly, Ring initiated conversations about a deal with Flock, Langley told CNBC.

Flock says that its cameras don’t use facial recognition, which has been criticized for racial biases. But local law enforcement agencies using Flock will soon have access to footage from Ring cameras with facial recognition. In a conversation with The Washington Post this month, Calli Schroeder, senior counsel at the consumer advocacy and policy group Electronic Privacy Information Center, described the new feature for Ring cameras as “invasive for anyone who walks within range of” a Ring doorbell, since they likely haven’t consented to facial recognition being used on them.

Amazon, for its part, has mostly pushed the burden of ensuring responsible facial recognition use to its customers. Schroeder shared concern with the Post that Ring’s facial recognition data could end up being shared with law enforcement.

Some people who are perturbed about Ring deepening its ties with law enforcement have complained online.

“Inviting big brother into the system. Screw that,” a user on the Ring subreddit said this week.

Another Reddit user said: “And… I’m gone. Nope, NO WAY IN HELL. Goodbye, Ring. I’ll be switching to a UniFi[-brand] system with 100 percent local storage. You don’t get my money any more. This is some 1984 BS …”

Privacy concerns are also exacerbated by Ring’s past, as the company has previously failed to meet users’ privacy expectations. In 2023, Ring agreed to pay $5.8 million to settle claims that employees illegally spied on Ring customers.

Amazon and Flock say their collaboration will only involve voluntary customers and local law enforcement agencies. But there’s still reason to be concerned about the implications of people sending doorbell and personal camera footage to law enforcement via platforms that are reportedly widely used by federal agencies for deportation purposes. Combined with the privacy issues that Ring has already faced for years, it’s not hard to see why some feel that Amazon scaling up Ring’s association with any type of law enforcement is unacceptable.

And it appears that Amazon and Flock would both like Ring customers to opt in when possible.

“It will be turned on for free for every customer, and I think all of them will use it,” Langley told CNBC.

Photo of Scharon Harding

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Ring cameras are about to get increasingly chummy with law enforcement Read More »

oneplus-unveils-oxygenos-16-update-with-deep-gemini-integration

OnePlus unveils OxygenOS 16 update with deep Gemini integration

The updated Android software expands what you can add to Mind Space and puts Gemini to work on it. For starters, you can add scrolling screenshots and voice memos up to 60 seconds in length. This gives the AI more data from which to generate content. For example, if you take screenshots of hotel listings and airline flights, you can tell Gemini to use your Mind Space content to create a trip itinerary. This will be fully integrated with the phone and won’t require a separate subscription to Google’s AI tools.


Credit: OnePlus

Mind Space isn’t a totally new idea—it’s quite similar to AI features like Nothing’s Essential Space and Google’s Pixel Screenshots and Journal. The idea is that if you give an AI model enough data on your thoughts and plans, it can provide useful insights. That’s still hypothetical based on what we’ve seen from other smartphone OEMs, but that’s not stopping OnePlus from fully embracing AI in Android 16.

In addition to beefing up Mind Space, OxygenOS 16 will also add system-wide AI writing tools, which is another common AI add-on. Like the systems from Apple, Google, and Samsung, you will be able to use the OnePlus writing tools to adjust text, proofread, and generate summaries.

OnePlus will make OxygenOS 16 available starting October 17 as an open beta. You’ll need a OnePlus device from the past three years to run the software, both in the beta phase and when it’s finally released. OnePlus hasn’t offered a specific date for that final release; OxygenOS 16 will debut on the OnePlus 15 devices, with releases for other supported phones and tablets coming later.

OnePlus unveils OxygenOS 16 update with deep Gemini integration Read More »

12-years-of-hdd-analysis-brings-insight-to-the-bathtub-curve’s-reliability

12 years of HDD analysis brings insight to the bathtub curve’s reliability

But as seen in Backblaze’s graph above, the company’s HDDs aren’t adhering to that principle. The blog’s authors noted that in 2021 and 2025, Backblaze’s drives had a “pretty even failure rate through the significant majority of the drives’ lives, then a fairly steep spike once we get into drive failure territory.”

The blog continues:

What does that mean? Well, drives are getting better, and lasting longer. And, given that our trendlines are about the same shape from 2021 to 2025, we should likely check back in when 2029 rolls around to see if our failure peak has pushed out even further.

Speaking with Ars Technica, Doyle said that Backblaze’s analysis is good news for individuals shopping for larger hard drives because the devices are “going to last longer.”

She added:

In many ways, you can think of a datacenter’s use of hard drives as the ultimate test for a hard drive—you’re keeping a hard drive on and spinning for the max amount of hours, and often the amount of times you read/write files is well over what you’d ever see as a consumer. Industry trend-wise, drives are getting bigger, which means that oftentimes, folks are buying fewer of them. Reporting on how these drives perform in a data center environment, then, can give you more confidence that whatever drive you’re buying is a good investment.

The longevity of HDDs is another reason for shoppers to still consider them over faster, more expensive SSDs.

“It’s a good idea to decide how justified the improvement in latency is,” Doyle said.

Questioning the bathtub curve

Doyle and Patterson aren’t looking to toss the bathtub curve out with the bathwater. They’re not suggesting that the bathtub curve doesn’t apply to HDDs, but rather that it overlooks additional factors affecting HDD failure rates, including “workload, manufacturing variation, firmware updates, and operational churn.” The principle also rests on the following assumptions, per the authors:

  • Devices are identical and operate under the same conditions
  • Failures happen independently, driven mostly by time
  • The environment stays constant across a product’s life

While these conditions can largely be met in data center environments, “conditions can’t ever be perfect,” Doyle and Patterson noted. When considering an HDD’s failure rate over time, it’s wise to weigh both the bathtub curve and how you actually use the drive.
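A concrete way to see whether a fleet follows the bathtub curve is to bucket drives by age and compute an annualized failure rate (AFR) for each bucket: failures divided by drive-years of service, the metric Backblaze uses in its published drive stats. A minimal sketch with invented numbers shaped like the pattern the authors describe (flat through most of a drive’s life, spiking at the end):

```python
# Annualized failure rate (AFR) per age bucket: failures per drive-year
# of service, expressed as a percentage. All numbers below are invented
# for illustration; they are not Backblaze's data.

def afr_percent(failures: int, drive_days: float) -> float:
    """AFR (%) = failures / (drive-days / 365), times 100."""
    return 100.0 * failures / (drive_days / 365.0)

# age bucket -> (failures observed, drive-days of service in that bucket)
fleet = {
    "0-1 yr": (50, 3_650_000),   # flat: no big infant-mortality hump
    "1-3 yr": (100, 7_300_000),  # flat middle of the curve
    "3-5 yr": (75, 5_475_000),
    "5+ yr": (200, 2_920_000),   # wear-out spike at end of life
}

for bucket, (failures, days) in fleet.items():
    print(f"{bucket:>6}: AFR {afr_percent(failures, days):.2f}%")
```

A textbook bathtub fleet would show a second spike in the first bucket; the flat-then-spike shape here mirrors what the 2021 and 2025 trendlines showed.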

12 years of HDD analysis brings insight to the bathtub curve’s reliability Read More »

yes,-everything-online-sucks-now—but-it-doesn’t-have-to

Yes, everything online sucks now—but it doesn’t have to


from good to bad to nothing

Ars chats with Cory Doctorow about his new book Enshittification.

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.

As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.’”

Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”

Yes, people sometimes express regret to him that the term includes a swear word. To which he responds, “You’re welcome to come up with another word. I’ve tried. ‘Platform decay’ just isn’t as good.” (“Encrapification” and “enpoopification” also lack a certain je ne sais quoi.)

In fact, it’s the sweariness that people love about the word. While that also means his book title inevitably gets bleeped on broadcast radio, “The hosts, in my experience, love getting their engineers to creatively bleep it,” said Doctorow. “They find it funny. It’s good radio, it stands out when every fifth word is ‘enbeepification.’”

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—i.e., by de-emphasizing news and links away from Facebook—to maximize profits to shareholders. Voila! The product is now enshittified.

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors. The second is government regulation, such as antitrust laws that serve to keep corporate consolidation in check, or levying fines for dishonest practices, which makes it unprofitable to cheat.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifying counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, should serve as constraints to enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks it’s not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that causes firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion. 

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far? 

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts“: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

Ars Technica: How might this work in practice?

Cory Doctorow: I think you just need a protocol. This is [Mike] Masnick’s point: protocols, not platforms. We don’t need a universal app to make email work. We don’t need a universal app to make the web work. I always think about this in the context of administrable regulation. Making a rule that says your social media network must be good for people to use and must not harm their mental health is impossible. The fact intensivity of determining whether a platform satisfies that rule makes it a non-starter.

Whereas if you were to say, “OK, you have to support an existing federation protocol,” like AT Protocol or ActivityPub (used by Mastodon): both have ways to port identity from one place to another and to auto-forward messages. There’s also a permanent redirect directive in RSS. You do that, and you’re in compliance with the regulation.
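The RSS “permanent redirect” mechanic Doctorow points to is easy to mechanize: when a fetch comes back as an HTTP 301 Moved Permanently, the reader rewrites its stored feed address and follows the publisher to its new home, which is exactly the kind of portability a federation spec can require. A hypothetical sketch of that logic (the helper name and data shape are invented for illustration, not taken from any particular feed reader):

```python
# Sketch of permanent-redirect handling for a feed reader's subscription
# list: an HTTP 301 with a new location means the stored URL should be
# rewritten so future fetches go straight to the new address. The
# function and data shape here are illustrative.

def apply_redirect(subscriptions: dict, old_url: str,
                   status: int, location: str) -> dict:
    """Return an updated subscription map after one fetch response."""
    if status == 301 and location and old_url in subscriptions:
        updated = dict(subscriptions)
        # Carry the subscription's state over to the new address.
        updated[location] = updated.pop(old_url)
        return updated
    # Temporary redirects (302/307) and other statuses: leave as-is.
    return subscriptions

subs = {"https://example.com/feed.xml": {"title": "Example Blog"}}
subs = apply_redirect(subs, "https://example.com/feed.xml",
                      301, "https://example.net/feed.xml")
print(subs)  # the subscription now points at the new address
```

The same one-way pattern generalizes to account migration: the old server’s “moved permanently” answer carries the forwarding address, and clients update their stored pointer once instead of depending on the old host forever.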

Or you have to do something that satisfies the functional requirements of the spec. So it’s not “did you make someone sad in a way that was reckless?” That is a very hard question to adjudicate. Did you satisfy these functional requirements? It’s not easy to answer that, but it’s not impossible. If you want to have our users be able to move to your platform, then you just have to support the spec that we’ve come up with, which satisfies these functional requirements.

We don’t have to have just one protocol. We can have multiple ones. Not everything has to connect to everything else, but everyone who wants to connect should be able to connect to everyone else who wants to connect. That’s end-to-end. End-to-end is not “you are required to listen to everything someone wants to tell you.” It’s that willing parties should be connected when they want to be.

Ars Technica: What about security and privacy protocols like GPG and PGP?

Cory Doctorow: There’s this argument that the reason GPG is so hard to use is that it’s intrinsic; you need a closed system to make it work. But also, until pretty recently, GPG was supported by one part-time guy in Germany who got 30,000 euros a year in donations to work on it, and he was supporting 20 million users. He was primarily interested in making sure the system was secure rather than making it usable. If you were to put Big Tech quantities of money behind improving ease of use for GPG, maybe you decide it’s a dead end because it is a 30-year-old attempt to stick a security layer on top of SMTP. Maybe there’s better ways of doing it. But I doubt that we have reached the apex of GPG usability with one part-time volunteer.

I just think there’s plenty of room there. If you have a pretty good project that is run by a large firm and has had billions of dollars put into it, the most advanced technologists and UI experts working on it, and you’ve got another project that has never been funded and has only had one volunteer on it—I would assume that dedicating resources to that second one would produce pretty substantial dividends, whereas the first one is only going to produce these minor tweaks. How much more usable does iOS get with every iteration?

I don’t know if PGP is the right place to start for privacy, but I do think that if we can create independence of the security layer from the transport layer, which is what PGP is trying to do, then it wouldn’t matter so much whether there is end-to-end encryption in Mastodon DMs or in Bluesky DMs. And again, it doesn’t matter whose SIM is in your phone, so it just shouldn’t matter which platform you’re using so long as it’s secure and reliably delivered end-to-end.
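The layering Doctorow describes can be illustrated in a few lines. This is a toy, NOT real cryptography (a hash-based keystream plus an HMAC tag, standing in for a vetted scheme like PGP or an AEAD cipher): the point is only that `seal()` runs before any network code and `open_()` after it, so the same sealed envelope can travel over Mastodon DMs, Bluesky DMs, or email without the carrier ever seeing plaintext:

```python
# Toy demonstration (NOT real cryptography) of keeping the security
# layer independent of the transport layer. Whatever carries the
# envelope between seal() and open_() never sees the plaintext.
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key+nonce (SHA-256 in counter mode)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> dict:
    """Encrypt and authenticate a message before it touches any transport."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return {"nonce": nonce, "ct": ct, "tag": tag}

def open_(key: bytes, envelope: dict) -> bytes:
    """Verify the tag, then decrypt; raises if the carrier tampered."""
    expect = hmac.new(key, envelope["nonce"] + envelope["ct"], hashlib.sha256).digest()
    if not hmac.compare_digest(expect, envelope["tag"]):
        raise ValueError("tampered in transit")
    return bytes(a ^ b for a, b in zip(envelope["ct"], _keystream(key, envelope["nonce"], len(envelope["ct"]))))
```

Because nothing in `seal()`/`open_()` knows or cares what the transport is, the security properties survive a platform switch — which is exactly the independence argument.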

Ars Technica: These days, I’m almost contractually required to ask about AI. There’s no escaping it. But it’s certainly part of the ongoing enshittification.

Cory Doctorow: I agree. Again, the companies are too big to care. They know you’re locked in, and the things that make enshittification possible—like remote software updating, ongoing analytics of use of devices—they allow for the most annoying AI dysfunction. I call it the fat-finger economy, where you have someone who works in a company on a product team, and their KPI, and therefore their bonus and compensation, is tied to getting you to use AI a certain number of times. So they just look at the analytics for the app and they ask, “What button gets pushed the most often? Let’s move that button somewhere else and make an AI summoning button.”

They’re just gaming a metric. It’s causing significant across-the-board regressions in the quality of the product, and I don’t think it’s justified by people who then discover a new use for the AI. That’s a paternalistic justification. The user doesn’t know what they want until you show it to them: “Oh, if I trick you into using it and you keep using it, then I have actually done you a favor.” I don’t think that’s happening. I don’t think people are like, “Oh, rather than press reply to a message and then type a message, I can instead have this interaction with an AI about how to send someone a message about takeout for dinner tonight.” I think people are like, “That was terrible. I regret having tapped it.” 

The speech-to-text is unusable now. I flatter myself that my spoken and written communication is not statistically average. The things that make it me and that make it worth having, as opposed to just a series of multiple-choice answers, are all the ways in which it diverges from statistical averages. Back when the model was stupider, when it gave up sooner if it didn’t recognize a word and just transcribed what it thought you’d said rather than trying to substitute a more probable word, it was more accurate. Now, what I’m getting are statistically average words that are meaningless.

That elision of nuance and detail is characteristic of what makes AI products bad. There is a bunch of stuff that AI is good at that I’m excited about, and I think a lot of it is going to survive the bubble popping. But I fear that we’re not planning for that. I fear what we’re doing is taking workers whose jobs are meaningful, replacing them with AIs that can’t do their jobs, and then those AIs are going to go away and we’ll have nothing. That’s my concern.

Ars Technica: You prescribe a “cure” for enshittification, but in such a polarized political environment, do we even have the collective will to implement the necessary policies?

Cory Doctorow: The good news is also the bad news, which is that this doesn’t just affect tech. Take labor power. There are a lot of tech workers who are looking at the way their bosses treat the workers they’re not afraid of—Amazon warehouse workers and drivers, Chinese assembly line manufacturers for iPhones—and realizing, “Oh, wait, when my boss stops being afraid of me, this is how he’s going to treat me.” Mark Zuckerberg stopped going to those all-hands town hall meetings with the engineering staff. He’s not pretending that you are his peers anymore. He doesn’t need to; he’s got a critical mass of unemployed workers he can tap into. I think a lot of Googlers figured this out after the 12,000-person layoffs. Tech workers are realizing they missed an opportunity, that they’re going to have to play catch-up, and that the only way to get there is by solidarity with other kinds of workers.

The same goes for competition. There’s a bunch of people who care about media, who are watching Warner about to swallow Paramount and who are saying, “Oh, this is bad. We need antitrust enforcement here.” When we had a functional antitrust system for the last four years, we saw a bunch of telecom mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation and pharmaceutical consolidation have a lot of people who are very concerned about them. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies.

“What do you mean you spent six months in bed because there are only two powered wheelchair manufacturers, and your chair broke, and you weren’t allowed to get it fixed by a third party?” And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bedsores and pneumonia because you couldn’t get your chair fixed. This is bullshit.

So the coalitions are quite large. The thing that all of those forces share—interoperability, labor power, regulation, and competition—is that they’re all downstream of corporate consolidation and wealth inequality. Figuring out how to bring all of those different voices together, that’s how we resolve this. In many ways, the enshittification analysis and remedy are a human factors and security approach to designing an enshittification-resistant Internet. It’s about understanding this as a red team, blue team exercise. How do we challenge the status quo that we have now, and how do we defend the status quo that we want?

Anything that can’t go on forever eventually stops. That is the first law of finance, Stein’s law. We are reaching multiple breaking points, and the question is whether we reach things like breaking points for the climate and for our political system before we reach breaking points for the forces that would rescue those from permanent destruction.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


ai-powered-features-begin-creeping-deeper-into-the-bedrock-of-windows-11

AI-powered features begin creeping deeper into the bedrock of Windows 11


everything old is new again

Copilot expands with an emphasis on creating and editing files and on voice input.

Microsoft is hoping that Copilot will succeed as a voice-driven assistant where Cortana failed. Credit: Microsoft


Like virtually every major Windows announcement in the last three years, the features that Microsoft announced for the operating system today all revolve around generative AI. In particular, they’re concerned with the company’s more recent preoccupation with “agentic” AI, an industry buzzword for “telling AI-powered software to perform a task, which it then does in the background while you move on to other things.”

But the overarching impression I got, both from reading the announcement and sitting through a press briefing earlier this month, is that Microsoft is using language models and other generative AI technologies to try again with Cortana, Microsoft’s failed and discontinued entry in the voice assistant wars of the 2010s.

According to Microsoft’s Consumer Chief Marketing Officer Yusuf Mehdi, “AI PCs” should be able to recognize input “naturally, in text or voice,” guide users based on what’s on their screens at any given moment, and “take action on your behalf.”

The biggest of today’s announcements is the introduction of a new “Hey, Copilot” activation phrase for Windows 11 PCs, which, once enabled, lets users summon the chatbot using only their voice rather than a mouse or keyboard (if you do want to use the keyboard, either the Copilot key or the same Windows + C keyboard shortcut that used to bring up Cortana will also summon Copilot). Saying “goodbye” dismisses Copilot when you’re done working with it.

Macs and most smartphones have sported similar functionality for a while now, but Microsoft is obviously hoping that having Copilot answer those questions instead of Cortana will lead to success rather than another failure.

The key limitation of the original Cortana—plus Siri, Alexa, and the rest of their ilk—was that they could only really do a relatively limited and predetermined list of actions. Complex queries, or anything the assistants didn’t understand, often got bounced to a general web search. The results of that search might or might not accomplish what you wanted, but it ultimately shifted the onus back onto the user to find and follow those directions.

To make Copilot more useful, Microsoft has also announced that Copilot Vision is being rolled out worldwide “in all markets where Copilot is offered” (it has been available in the US since mid-June). Copilot Vision will read the contents of a screen or an app window and can attempt to offer useful guidance or feedback, like walking you through an obscure task in Excel or making suggestions based on a group of photos or a list of items. (Microsoft additionally announced a beta for Gaming Copilot, a sort of offshoot of Copilot Vision intended specifically for walkthroughs and advice for whatever game you happen to be playing.)

Beyond these tweaks or wider rollouts for existing features, Microsoft is also testing a few new AI and Copilot-related additions that aim to fundamentally change how users interact with their Windows PCs by reading and editing files.

All of the features Microsoft is announcing today are intended for all Windows 11 PCs, not just those that meet the stricter hardware requirements of the Copilot+ PC label. That gives them a much wider potential reach than things like Recall or Click to Do, and it makes knowing what these features do and how they safeguard security and privacy that much more important.

AI features work their way into the heart of Windows

Microsoft wants general-purpose AI agents to be able to create and modify files for you, among other things, working in the background while you move on to other tasks. Credit: Microsoft

Whether you’re talking about the Copilot app, the generative AI features added to apps like Notepad and Paint, or the data-scraping Windows Recall feature, most of the AI additions to Windows in the last few years have been app-specific, or cordoned off in some way from core Windows features like the taskbar and File Explorer.

But AI features are increasingly working their way into bedrock Windows features like the taskbar and Start menu and being given capabilities that allow them to analyze or edit files or even perform file management tasks.

The standard Search field that has been part of Windows 10 and Windows 11 for the last decade, for example, is being transformed into an “Ask Copilot” field; this feature will still be able to look through local files just like the current version of the Search box, but Microsoft also envisions it as a keyboard-driven interface for Copilot for the times when you can’t or don’t want to use your voice. (We don’t know whether the “old” search functionality lives on in the Start menu or as an optional fallback for people who disable Copilot, at least not yet.)

A feature called Copilot Actions will also expand the number of ways that Copilot can interact with local files on your PC. Microsoft cites “sorting through recent vacation photos” and extracting information from PDFs and other documents as two possible use cases and says that this early preview version will focus on “a narrow set of use cases.” But it’s meant to be “a general-purpose agent” capable of “interacting with desktop and web applications,” which gives it a lot of latitude to augment or replace basic keyboard-and-mouse input for some interactions.

Screenshots of a Windows 11 testing build showed Copilot taking over the area of the taskbar that is currently reserved for the Search field. Credit: Microsoft

Finally, Microsoft is taking another stab at allowing Copilot to change the settings on your PC, something earlier versions could do but that was removed in a subsequent iteration. Copilot will attempt to respond to plain-language questions about your PC settings with a link to the appropriate part of Windows’ large, labyrinthine Settings app.

These new features dovetail with others Microsoft has been testing for a few weeks or months now. Copilot Connectors, rolled out to Windows Insiders earlier this month, can give Copilot access to email and file-sharing services like Gmail and Dropbox. New document creation features allow Copilot to export the contents of a Copilot chat into a Word or PDF document, Excel spreadsheet, or PowerPoint deck for more refinement and editing. And AI actions in the File Explorer appear in Windows’ right-click menu and allow for the direct manipulation of files, including batch-editing images and summarizing documents. Together with the Copilot Vision features that enable Copilot to see the full contents of Office documents rather than just the on-screen portions, all of these features inject AI into more basic everyday tasks, rather than cordoning them off in individual apps.

Per usual, we don’t know exactly when any of these new features will roll out to the general public, and some may never be available outside of the Windows Insider program. None of them are currently baked into the Windows 11 25H2 update, at least not the version that the company is currently beginning to roll out to some PCs.

Learning the lessons of Recall

Microsoft at least seems to have learned lessons from the botched rollout of Windows Recall last year.

If you didn’t follow along: Microsoft’s initial plan had been to roll out Recall with the first wave of Copilot+ PCs, but without sending it through the Windows Insider Preview program first. This program normally gives power users, developers, security researchers, and others the opportunity to kick the tires on upcoming Windows features before they’re launched, giving Microsoft feedback on bugs, security holes, or other flaws before rolling them out to all Windows PCs.

But security researchers who did manage to get their hands on the early, nearly launched version of Recall discovered a deeply flawed feature that preserved too much personal information and was trivially easy to exploit—a plain-text file with OCR text from all of a user’s PC usage could be grabbed by pretty much anybody with access to the PC, either in person or remote. It was also enabled by default on PCs that supported it, forcing users to manually opt out if they didn’t want to use it.

In the end, Microsoft pulled that version of Recall, took nearly a year to overhaul its security architecture, and spent months letting the feature make its way through the Windows Insider Preview channels before finally rolling it out to Copilot+ PCs. The resulting product still presents some risks to user privacy, as does any feature that promises to screenshot and store months of history about how you use your PC, but it’s substantially more refined, the most egregious security holes have been closed, and it’s off by default.

Copilot Actions are, at least for now, also disabled by default. And Microsoft Corporate Vice President of Windows Security Dana Huang put up a lengthy accompanying post explaining several of the steps Microsoft has taken to protect user privacy and security when using Copilot Actions. These include running AI agents with their own dedicated user accounts to reduce their access to data in your user folder; mandatory code-signing; and giving agents the fewest privileges they need to do their jobs. All of the agents’ activities will also be documented, so users can verify what actions have been taken and correct any errors.

Whether these security and privacy promises are good enough is an open question, but unlike the initial version of Recall, all of these new features will be sent out through the Windows Insider channels for testing first. If there are serious flaws, they’ll be out in public early on, rather than dropped on users unawares.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
