Features


Mark Zuckerberg’s illegal school drove his neighbors crazy


Neighbors complained about noise, security guards, and hordes of traffic.

An entrance to Mark Zuckerberg’s compound in Palo Alto, California. Credit: Loren Elliott/Redux

The Crescent Park neighborhood of Palo Alto, California, has some of the best real estate in the country, with a charming hodgepodge of homes ranging in style from Tudor revival to modern farmhouse and contemporary Mediterranean. It also has a gigantic compound that is home to Mark Zuckerberg, his wife Priscilla Chan, and their daughters Maxima, August, and Aurelia. Their land has expanded to include 11 previously separate properties, five of which are connected by at least one property line.

The Zuckerberg compound’s expansion first became a concern for Crescent Park neighbors as early as 2016, due to fears that his purchases were driving up the market. Then, about five years later, neighbors noticed that a school appeared to be operating out of the Zuckerberg compound. This would be illegal under the area’s residential zoning code without a permit. They began a crusade to shut it down that did not end until summer 2025.

WIRED obtained 1,665 pages of documents about the neighborhood dispute—including 311 records, legal filings, construction plans, and emails—through a public record request filed to the Palo Alto Department of Planning and Development Services. (Mentions of “Zuckerberg” or “the Zuckerbergs” appear to have been redacted. However, neighbors and separate public records confirm that the property in question belongs to the family. The names of the neighbors who were in touch with the city were also redacted.)

The documents reveal that the school may have been operating as early as 2021 without a permit to operate in the city of Palo Alto. As many as 30 students might have enrolled, according to observations from neighbors. These documents also reveal a wider problem: For almost a decade, the Zuckerbergs’ neighbors have been complaining to the city about noisy construction work, the intrusive presence of private security, and the hordes of staffers and business associates causing traffic and taking up street parking.

Over time, neighbors became fed up with what they argued was the city’s lack of action, particularly with respect to the school. Some believed that the delay was because of preferential treatment to the Zuckerbergs. “We find it quite remarkable that you are working so hard to meet the needs of a single billionaire family while keeping the rest of the neighborhood in the dark,” reads one email sent to the city’s Planning and Development Services Department in February. “Just as you have not earned our trust, this property owner has broken many promises over the years, and any solution which depends on good faith behavioral changes from them is a failure from the beginning.”

Palo Alto spokesperson Meghan Horrigan-Taylor told WIRED that the city “enforces zoning, building, and life safety rules consistently, without regard to who owns a property.” She also disputed the claim that neighbors were kept in the dark, saying that construction projects at the Zuckerberg properties “were processed the same way they are for any property owner.” She added that, though some neighbors told the city they believe the Zuckerbergs received “special treatment,” that is not accurate.

“Staff met with residents, conducted site visits, and provided updates by phone and email while engaging the owner’s representative to address concerns,” Horrigan-Taylor said. “These actions were measured and appropriate to abate the unpermitted use and responsive to neighborhood issues within the limits of local and state law.”

According to The New York Times, which first reported on the school’s existence, it was called “Bicken Ben School” and shared a name with one of the Zuckerbergs’ chickens. The listing for Bicken Ben School, or BBS for short, in a California Department of Education directory claims the school opened on October 5, 2022. This, however, is the year after neighbors claim to have first seen it operating. It’s also two and a half years after Sara Berge—the school’s point of contact, per documents WIRED obtained from the state via public record request—claims to have started her role as “head of school” for a “Montessori pod” at a “private family office” according to her LinkedIn profile, which WIRED viewed in September and October. Berge did not respond to a request to comment.

Between 2022 and 2025, according to the documents Bicken Ben filed to the state, the school grew from nine to 14 students ranging from 5 to 10 years old. Neighbors, however, estimated that they observed 15 to 30 students. Berge similarly claimed on her LinkedIn profile to have overseen “25 children” in her job. In a June 2025 job listing for “BBS,” the school had a “current enrollment of 35–40 students and plans for continued growth,” which the listing says includes a middle school.

In order for the Zuckerbergs to run a private school on their land, which is in a residential zone, they need a “conditional use” permit from the city. However, based on the documents WIRED obtained, and Palo Alto’s public database of planning applications, the Zuckerbergs do not appear to have ever applied for or received this permit.

Per emails obtained by WIRED, Palo Alto authorities told a lawyer working with the Zuckerbergs in March 2025 that the family had to shut down the school on its compound by June 30. A state directory lists BBS, the abbreviation for Bicken Ben School, as having operated until August 18, and three of Zuckerberg’s neighbors—who all requested anonymity due to the high-profile nature of the family—confirmed to WIRED in late September that they had not seen or heard students being dropped off and picked up on weekdays in recent weeks.

However, Zuckerberg family spokesperson Brian Baker tells WIRED that the school didn’t close, per se. It simply moved. It’s not clear where it is now located, or whether the school is operating under a different name.

In response to a detailed request for comment, Baker provided WIRED with an emailed statement on behalf of the Zuckerbergs. “Mark, Priscilla and their children have made Palo Alto their home for more than a decade,” he said. “They value being members of the community and have taken a number of steps above and beyond any local requirements to avoid disruption in the neighborhood.”

“Serious and untenable”

By the fall of 2024, Zuckerberg’s neighbors were at their breaking point. At some point in mid-2024, according to an email from then mayor Greer Stone, a group of neighbors had met with Stone to air their grievances about the Zuckerberg compound and the illegal school they claimed it was operating. They didn’t arrive at an immediate resolution.

In the years prior, the city had received several rounds of complaints about the Zuckerberg compound. Complaints for the address of the school were filed to 311, the nationwide number for reporting local non-emergency issues, in February 2019, September 2021, January 2022, and April 2023, all alleging that the property was operating illegally under city code. Each was closed by the planning department, which found no rule violations. An unknown number of additional complaints, mentioned in emails among city workers, were also made between 2020 and 2024—presumably delivered via phone calls, in person, or to city departments not included in WIRED’s public record request.

In December 2020, building inspection manager Korwyn Peck wrote to code enforcement officer Brian Reynolds about an inspection he attempted to conduct around the Zuckerberg compound, in response to several noise and traffic complaints from neighbors. He described that several men in SUVs had gathered to watch him, and a tense conversation with one of them had ensued. “This appears to be a site that we will need to pay attention to,” Peck wrote to Reynolds.

“We have all been accused of ‘not caring,’ which of course is not true,” Peck added. “It does appear, however, with the activity I observed tonight, that we are dealing with more than four simple dwellings. This appears to be more than a homeowner with a security fetish.”

In a September 11, 2024, email to Jonathan Lait, Palo Alto’s director of planning and development services, and Palo Alto city attorney Molly Stump, one of Zuckerberg’s neighbors alleged that since 2021, “despite numerous neighborhood complaints” to the city of Palo Alto, including “multiple code violation reports,” the school had continued to grow. They claimed that a garage at the property had been converted into another classroom, and that an increasing number of children were arriving each day. Lait and Stump did not respond to a request to comment.

“The addition of daily traffic from the teachers and parents at the school has only exacerbated an already difficult situation,” they said in the email, noting that the neighborhood has been dealing with an “untenable traffic” situation for more than eight years.

They asked the city to conduct a formal investigation into the school on Zuckerberg’s property, adding that their neighbors are also “extremely concerned” about the school, and “are willing to provide eyewitness accounts in support of this complaint.”

Over the next week, another neighbor forwarded this note to all six Palo Alto city council members, as well as then mayor Stone. One of these emails described the situation as “serious” and “untenable.”

“We believe the investigation should be swift and should yield a cease and desist order,” the neighbor wrote.

Lait responded to the neighbor who sent the original complaint on October 15, claiming that he’d had an “initial call” with a “representative” of the property owners and that he was directing the city’s code enforcement staff to reexamine the property.

On December 11, 2024, the neighbor claimed that since one of their fellow neighbors had spoken to a Zuckerberg representative, and the representative had allegedly admitted that there was a school on the property, “it seems like an open and shut case.”

“Our hope is that there is an equal process in place for all residents of Palo Alto regardless of wealth or stature,” the neighbor wrote. “It is hard to imagine that this kind of behavior would be ignored in any other circumstance.”

That same day, Lait told Christine Wade, a partner at SSL Law Firm—who, in an August 2024 email thread, said she was “still working with” the Zuckerberg family—that the Zuckerbergs lacked the required permit to run a school in a residential zone.

“Based on our review of local and state law, we believe this use constitutes a private school use in a residential zone requiring a conditional use permit,” Lait wrote in an email to Wade. “We also have not found any state preemptions that would exclude a use like this from local zoning requirements.” Lait added that a “next step,” if a permit was not obtained, would be sending a cease and desist to the property owner.

According to several emails, Wade, Lait, and Mark Legaspi, CEO of the Zuckerberg family office called West 10, went on to arrange an in-person meeting at City Hall on January 9. (This is the first time that the current name of the Zuckerberg family office, West 10, has been publicly disclosed. The office was previously called West Street.) Although WIRED did not obtain notes from the meeting, Lait informed the neighbor on January 10 that he had told the Zuckerbergs’ “representative” that the school would need to shut down if it didn’t get a conditional use permit or apply for that specific permit.

Lait added that the representative would clarify what the family planned to do in about a week; however, he noted that if the school were to close, the city may give the school a “transition period” to wind things down. Wade did not respond to a request for comment.

“At a minimum, give us extended breaks”

There was another increasingly heated conversation happening behind the scenes. On February 3 of this year, at least one neighbor met with Jordan Fox, an employee of West 10.

It’s unclear exactly what happened at this meeting, or if the neighbor who sent the September 11 complaint was in attendance. But a day after the meeting with Fox, two additional neighbors added their names to the September 11 complaint, per an email to Lait.

On February 12, a neighbor began an email chain with Fox; the thread was forwarded to Planning Department officials two months later. The neighbor, who seemingly attended the meeting, said they had “connected” with fellow neighbors “to review and revise” an earlier list of 14 requests that had reportedly been submitted to the Zuckerbergs at some previous point. The note does not specify the contents of that original list, but the neighbor claimed that 15 of the 19 people who contributed to it had also contributed to the revised version.

The email notes that the Zuckerbergs had been “a part of our neighborhood for many years,” and that they “hope that this message will start an open and respectful dialogue,” built upon the “premise of how we all wish to be treated as neighbors.”

“Our top requests are to minimize future disruption to the neighborhood and proactively manage the impact of the many people who are affiliated with you,” the email says. This includes restricting parking by “security guards, contractors, staff, teachers, landscapers, visitors, etc.” In the event of major demolitions, concrete pours, or large parties, the email asks for advance notice, and for dedicated efforts to “monitor and mitigate noise.”

The email also asks the Zuckerbergs to, “ideally stop—but at a minimum give us extended breaks from—the acquisition, demolition and construction cycle to let the neighborhood recover from the last eight years of disruption.”

At this point, the email requests that the family “abide by both the letter and the spirit of Palo Alto” by complying with city code about residential buildings.

Specifically, it asks the Zuckerbergs to get a use permit for the compound’s school and to hold “a public hearing for transparency.” It also asks the family to not expand its compound any further. “We hope this will help us get back the quiet, attractive residential neighborhood that we all loved so much when we chose to move here.”

In a follow-up on March 4, Fox acknowledged the “unusual” effects that come with being neighbors with Mark Zuckerberg and his family.

“I recognize and understand that the nature of our residence is unique given the profile and visibility of the family,” she wrote. “I hope that as we continue to grow our relationship with you over time, you will increasingly enjoy the benefits of our proximity—e.g., enhanced safety and security, shared improvements, and increased property values.”

Fox said that the Zuckerbergs instituted “a revised parking policy late last year” that should address their concerns, and promised to double down on efforts to give advanced notice about construction, parties, and other potential disruptions.

However, Fox did not directly address the unpermitted school and other nonresidential activities happening at the compound. She acknowledged that the compound has “residential support staff” including “childcare, culinary, personal assistants, property management, and security,” but said that they have “policies in place to minimize their impact on the neighborhood.”

It’s unclear if the neighbor responded to Fox.

“You have not earned our trust”

While these conversations were happening between Fox and Zuckerberg’s neighbors, Lait and others at the city Planning Department were scrambling to find a solution for the neighbor who complained on September 11, and a few other neighbors who endorsed the complaint in September and February.

Starting in February, one of these neighbors took the lead on following up with Lait. They asked him for an update on February 11 and heard back a few days later. He didn’t have any major updates, but after conversations with the family’s representatives, he said he was exploring whether a “subset of children” could continue to come to the school sometimes for “ancillary” uses.

“I also believe a more nuanced solution is warranted in this case,” Lait added. Ideally, such a solution would respond to the neighbors’ complaints, but allow the Zuckerbergs to “reasonably be authorized by the zoning code.”

The neighbor wasn’t thrilled. The next day, they replied and called the city’s plan “unsatisfactory.”

“The city’s ‘nuanced solution’ in dealing with this serial violator has led to the current predicament,” they said, referring to the solution Lait had mentioned in his previous email.

Horrigan-Taylor, the Palo Alto spokesperson, told WIRED that Lait’s mention of a “nuanced” solution referred to “resolving, to the extent permissible by law, neighborhood impacts and otherwise permitted use established by state law and local zoning.”

“Would I, or any other homeowner, be given the courtesy of a ‘nuanced solution’ if we were in violation of city code for over four years?” they added.

“Please know that you have not earned our trust and that we will take every opportunity to hold the city accountable if your solution satisfies a single [redacted] property owner over the interests of an entire neighborhood,” they continued.

“If you somehow craft a ‘nuanced solution’ based on promises,” the neighbor said, “the city will no doubt once again simply disappear and the damage to the neighborhood will continue.”

Lait did not respond right away. The neighbor followed up on March 13, asking if he had “reconsidered” his plan to offer a “‘nuanced solution’ for resolution of these ongoing issues by a serial code violator.” They asked when the neighborhood could “expect relief from the almost decade long disruptions.”

Behind the scenes, Zuckerberg’s lawyers were fighting to make sure the school could continue to operate. In a document dated March 14, Wade argues that she believed the activities at “the Property” “represent an appropriate residential use based on established state law as well as constitutional principles.”

Wade said that “the Family” was in the process of obtaining a “Large Family Daycare” license for the property, which is legal for a cohort of 14 or fewer children all under the age of 10.

“We consistently remind our vendors, guests, etc. to minimize noise, not loiter anywhere other than within the Family properties, and to keep areas clean,” Wade added in the letter. Wade also attached an adjusted lease corresponding with the address of the illicit school, which promises that the property will be used for only one purpose. The exact purpose is redacted.

On March 25, Lait told the neighbor that the city’s June 30 deadline for the Zuckerbergs to shut down the school had not changed. However, the family’s representative said that they were pursuing a daycare license. These licenses are granted by the state, not the city of Palo Alto.

The subtext of this email was that if the state gave them a daycare license, there wasn’t much the city could do. Horrigan-Taylor confirmed with WIRED that “state licensed large family day care homes” do not require city approval, adding that the city also “does not regulate homeschooling.”

“Thanks for this rather surprising information,” the neighbor replied about a week later. “We have repeatedly presented ideas to the family over the past 8 years with very little to show for it, so from our perspective, we need to understand the city’s willingness to act or not to act.”

Baker told WIRED that the Zuckerbergs never ended up applying for a daycare license, a claim that corresponds with California’s public registry of daycare centers. (There are only two registered daycare centers in Palo Alto, and neither belongs to the Zuckerbergs. The Zuckerbergs’ oldest child, Maxima, will also turn 10 in December and consequently age out of any daycare legally operating in California.)

Horrigan-Taylor said that a representative for the Zuckerbergs told the city that the family wanted to move the school to “another location where private schools are permitted by right.”

In a school administrator job listing posted to the Association Montessori International website in July 2022 for “BBS,” Bicken Ben head of school Berge claims that the school had four distinct locations, and that applicants must be prepared to travel six to eight weeks per year. The June 2025 job listing also says that the “year-round” school spans “across multiple campuses,” but the main location of the job is listed as Palo Alto. It’s unclear where the other sites are located.

Most of the Zuckerbergs’ neighbors did not respond to WIRED’s request for comment. However, the ones that did clearly indicated that they would not be forgetting the Bicken Ben saga, or the past decade of disruption, anytime soon.

“Frankly I’m not sure what’s going on,” one neighbor said, when reached by WIRED via landline. “Except for noise and construction debris.”

This story originally appeared on wired.com.




How to declutter, quiet down, and take the AI out of Windows 11 25H2


A new major Windows 11 release means a new guide for cleaning up the OS.

Credit: Aurich Lawson | Getty Images

It’s that time of year again—temperatures are dropping, leaves are changing color, and Microsoft is gradually rolling out another major yearly update to Windows 11.

The Windows 11 25H2 update is relatively minor compared to last year’s 24H2 update (the “25” here is a reference to the year the update was released, while the “H2” denotes that it was released in the second half of the year, a vestigial suffix from when Microsoft would release two major Windows updates per year). The 24H2 update came with some major under-the-hood overhauls of core Windows components and significant performance improvements for the Arm version; 25H2 is largely 24H2, but with a rolled-over version number to keep it in line with Microsoft’s timeline for security updates and tech support.

But Microsoft’s continuous update cadence for Windows 11 means that even the 24H2 version as it currently exists isn’t the same one Microsoft released a year ago.

To keep things current, we’ve combed through our Windows cleanup guide, updating it for the current build of Windows 11 25H2 (26200.7019) to help anyone who needs a fresh Windows install or who is finally updating from Windows 10 now that Microsoft is winding down support for it. We’ll outline dozens of individual steps you can take to clean up a “clean install” of Windows 11, which has taken an especially user-hostile turn toward advertising and pushing the use of other Microsoft products.

As before, this is not a guide about creating an extremely stripped-down, telemetry-free version of Windows; we stick to the things that Microsoft officially supports turning off and removing. There are plenty of experimental hacks and scripts that take it a few steps farther, and/or automate some of the steps we outline here—NTDev’s Tiny11 project is one—but removing built-in Windows components can cause unexpected compatibility and security problems, and Tiny11 has historically had issues with basic table-stakes stuff like “installing security updates.”

These guides capture moments in time, and regular monthly Windows patches, app updates downloaded through the Microsoft Store, and other factors all can and will cause small variations from our directions. You may also see apps or drivers specific to your PC’s manufacturer. This guide also doesn’t cover the additional bloatware that may come out of the box with a new PC, starting instead with a freshly installed copy of Windows from a USB drive.

Table of Contents

Starting with Setup: Avoiding Microsoft account sign-in

The most contentious part of Windows 11’s setup process relative to earlier Windows versions is that it mandates a Microsoft account sign-in, with none of the readily apparent “limited account” fallbacks that existed in Windows 10. As of Windows 11 22H2, that’s true of both the Home and Pro editions.

There are two reasons I can think of not to sign in with a Microsoft account. The first is that you want nothing to do with a Microsoft account, thank you very much. Signing in also makes Windows bombard you with more Microsoft 365, OneDrive, and Game Pass subscription upsells, since adding subscriptions to an account that already exists is easy, and Windows Setup will offer each of them if you sign in first.

The second—which describes my situation—is that you do use a Microsoft account because it offers some handy benefits like automated encryption of your local drive (having those encryption keys tied to my account has saved me a couple of times) or syncing of browser info and some preferences. But you don’t want to sign in at setup, either because you don’t want to be bothered with the extra upsells or because you prefer your user folder to be located at “C:\Users\Andrew” rather than “C:\Users\” followed by whatever name Windows derives from your Microsoft account.

Regardless of your reasoning, if you don’t want to bother with sign-in at setup, you have a few different options:

Use the command line

During Windows 11 Setup, after selecting a language and keyboard layout but before connecting to a network, hit Shift+F10 to open the command prompt (depending on your keyboard, you may also need to hit the Fn key before pressing F10). Type OOBE\BYPASSNRO, hit Enter, and wait for the PC to reboot.

When it comes back, click “I don’t have Internet” on the network setup screen, and you’ll have recovered the option to use “limited setup” (aka a local account) again, like older versions of Windows 10 and 11 offered.

This option has been removed from some Windows 11 testing builds, but it still works as of this writing in 25H2. We may see this option removed in a future update to Windows.
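For reference, here’s what that sequence looks like at the Setup command prompt. The registry commands in the second half are an assumption on my part: the BYPASSNRO script has historically just set a registry flag and rebooted, so setting the flag manually should do the same thing, though there’s no guarantee it will keep working on builds where Microsoft removes the option entirely.

```
:: Press Shift+F10 during Windows 11 Setup to open a Command Prompt, then run:
OOBE\BYPASSNRO

:: The PC reboots; on the network screen, click "I don't have Internet" to get the
:: "limited setup" (local account) path back.

:: Assumed equivalent if the script is ever missing: set the flag by hand and reboot.
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
shutdown /r /t 0
```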

For Windows 11 Pro

For Windows 11 Pro users, there’s a command-line-free workaround you can take advantage of.

Proceed through the Windows 11 setup as you normally would, including connecting to a network and allowing the system to check for updates. Eventually, you’ll be asked whether you’re setting your PC up for personal use or for “work or school.”

Select the “work or school” option, then “sign-in options,” at which point you’ll finally be given a button that says “domain join instead.” Click this to indicate you’re planning to join the PC to a corporate domain (even though you aren’t), and you’ll see the normal workflow for creating a “limited” local account.

The downside is that you’re starting your relationship with your new Windows install by lying to it. But hey, if you’re using the AI features, your computer is probably going to lie to you, too. It all balances out.

Using the Rufus tool

Credit: Andrew Cunningham

The Rufus tool can streamline a few of the more popular tweaks and workarounds for Windows 11 install media. Rufus is a venerable open source app for creating bootable USB media for both Windows and Linux. If you find yourself doing a lot of Windows 11 installs and don’t want to deal with Microsoft accounts, Rufus lets you tweak the install media itself so that the “limited setup” options always appear, no matter which edition of Windows you’re using.

To start, grab Rufus and then a fresh Windows 11 ISO file from Microsoft. You’ll also want an 8GB or larger USB drive; I’d recommend a 16GB or larger drive that supports USB 3.0 speeds, both to make things go a little faster and to leave yourself extra room for drivers, app installers, and anything else you might want to set a new PC up for the first time. (I also like this SanDisk drive that has a USB-C connector on one end and a USB-A connector on the other to ensure compatibility with all kinds of PCs.)

Fire up Rufus, select your USB drive and the Windows ISO, and hit Start to copy over all of the Windows files. After you hit Start, you’ll be asked if you want to disable some system requirements checks, remove the Microsoft account requirement, or turn off all the data collection settings that Windows asks you about the first time you set it up. What you do here is up to you; I usually turn off the sign-in requirement, but disabling the Secure Boot and TPM checks doesn’t stop those features from working once Windows is installed and running.

The rest of Windows 11 setup

The main thing I do here, other than declining any and all Microsoft 365 or Game Pass offers, is turn all the toggles on the privacy settings screen to “no.” This covers location services, the Find My Device feature, and four toggles that collectively send a small pile of usage and browsing data to Microsoft that it uses “to enhance your Microsoft experiences.” Pro tip: Use the Tab key and spacebar to quickly toggle these without clicking or scrolling.

Of these, I can imagine enabling Find My Device if you’re worried about theft or location services if you want Windows and apps to be able to access your location. But I tend not to send any extra telemetry or browsing data other than the basics (the only exception being on machines I enroll in the Windows Insider Preview program for testing, since Microsoft requires you to send more detailed usage data from those machines to help it test its beta software). If you want to change any of these settings after setup, they’re all in the Settings app under Privacy & Security.

If you have signed in with a Microsoft account during setup, you can expect to see several additional setup screens that aren’t offered when you’re signing in with a local account, including attempts to sell Microsoft 365, OneDrive, and Xbox Game Pass subscriptions. Accept or decline these offers as desired.

Cleaning up Windows 11

Reboot once this is done, and you’ll be at the Windows desktop. Start by installing any drivers you need, plus Windows updates.

When you first connect to the Internet, Windows may or may not decide to automatically pull down a few extraneous third-party apps and app shortcuts, things like Spotify or Grammarly—this has happened to me consistently in most Windows 11 installs I’ve done over the years, though it hasn’t generally happened on the 24H2 and 25H2 PCs I’ve set up.

Open the Start menu and right-click each of the apps you don’t want, either to unpin its icon or to uninstall it outright. Some of these third-party apps are just stubs that won’t actually be installed to your computer until you try to run them, so removing them directly from the Start menu will get rid of them entirely.

Right-clicking and uninstalling the unwanted apps that are pinned to the Start menu is the fastest (and, for some, the only) way to get rid of them. Credit: Andrew Cunningham

The other apps and services included in a fresh Windows install generally at least have the excuse of being first-party software, though their usefulness will be highly user-specific: Xbox, the new Outlook app, Clipchamp, and LinkedIn are the ones that stand out, plus the ad-driven free-to-play version of the Solitaire suite that replaced the simple built-in version during the Windows 8 era.

Rather than tell you what I remove, I’ll tell you everything that can be removed from the Installed Apps section of the Settings app (also quickly accessible by right-clicking the Start button in the taskbar). You can make your own decisions here; I generally leave the in-box versions of classic Windows apps like Sound Recorder and Calculator while removing things I don’t use, like To Do or Clipchamp.

This list should be current for a fresh, fully updated install of Windows 11 25H2, at least in the US, but it doesn’t include any apps that might be specific to your hardware, like audio or GPU settings apps. Some individual apps may or may not appear as part of your Windows install. If you’d rather use the command line than the Settings app, a PowerShell-based alternative is sketched just after the list.

  • Calculator
  • Camera
  • Clock (may also appear as Windows Clock)
  • Copilot
  • Family
  • Feedback Hub
  • Game Assist
  • Media Player
  • Microsoft 365 Copilot
  • Microsoft Clipchamp
  • Microsoft OneDrive: Removing this, if you don’t use it, should also get rid of notifications about OneDrive and turning on Windows Backup.
  • Microsoft Teams
  • Microsoft To Do
  • News
  • Notepad
  • Outlook for Windows
  • Paint
  • Photos
  • Power Automate
  • Quick Assist
  • Remote Desktop Connection
  • Snipping Tool
  • Solitaire & Casual Games
  • Sound Recorder
  • Sticky Notes
  • Terminal
  • Weather
  • Web Media Extensions
  • Xbox
  • Xbox Live

In Windows 11 23H2, Microsoft moved almost all of Windows’ non-removable apps to a System Components section, where they can be configured but not removed; this is where things like Phone Link, the Microsoft Store, Dev Home, and the Game Bar have ended up. The exception is Edge and its associated updater and WebView components; these are not removable, but they aren’t listed as “system components” for some reason, either.

Start, Search, Taskbar, and lock screen decluttering

Microsoft has been on a yearslong crusade against unused space in the Start menu and taskbar, which means there’s plenty here to turn off.

  • Right-click an empty space on the desktop, click Personalize, and click any of the other built-in Windows themes to turn off the Windows Spotlight dynamic wallpapers and the “Learn about this picture” icon.
  • Right-click the Taskbar and click Taskbar settings. I usually disable the Widgets board; you can leave this if you want to keep the little local weather icon in the lower-left corner of your screen, but this space is also sometimes used to present junky news articles from the Microsoft Start service.
    • If you want to keep Widgets enabled but clean it up a bit, open the Widgets menu, click the Settings gear in the top-right corner, scroll to “Show or hide feeds,” and turn the feed off. This will keep the weather, local sports scores, stocks, and a few other widgets, but it will get rid of the spammy news articles.
  • Also in the Taskbar settings, I usually change the Search field to “search icon only” to get rid of the picture in the search field and reduce the amount of space it takes up. Toggle the different settings until you find one you like.
  • Open Settings > Privacy & Security > Recommendations & offers and disable “Personalized offers,” “Improve Start and search results,” “Show notifications in Settings,” “Recommendations and offers in Settings,” and “Advertising ID” (some of these may already be turned off). These settings mostly either send data to Microsoft or clutter up the Settings app with various recommendations and ads.
  • Open Settings > Privacy & Security > Diagnostics & feedback, scroll down to “Feedback frequency,” and select “Never” to turn off all notifications requesting feedback about various Windows features.
  • Open Settings > Privacy & Security, click Search permissions, and disable “Show search highlights.” This cleans up the Search menu quite a bit, focusing it on searches you’ve done yourself and locally installed apps.

  • Open Settings > Personalization > Lock screen. Under “Personalize your lock screen,” switch from “Windows spotlight” to either Picture or Slideshow to use local images for your lock screen, and then uncheck the “get fun facts, tips, tricks, and more” box that appears. This will hide the other text boxes and clickable elements that Windows automatically adds to the lock screen in Spotlight mode. Under “Lock screen status,” select “none” to hide the weather widget and other stocks and news widgets from your lock screen.
  • If you own a newer Windows PC with a dedicated Copilot key, you can navigate to Settings > Personalization > Text input and scroll down to remap the key. Unfortunately, its usefulness is still limited—you can reassign it to the Search function or to the built-in Microsoft 365 app, but by default, Windows doesn’t give you the option to reassign it to open any old app.

By default, the Start menu will occasionally make “helpful” suggestions about third-party Microsoft Store apps to grab. These can and should be turned off. Credit: Andrew Cunningham

  • Open Settings > Personalization > Start. Turn off “Show recommendations for tips, shortcuts, new apps, and more.” This will disable a feature where Microsoft Store apps you haven’t installed can show up in Recommendations along with your other files. You can also decide whether you want to be able to see more pinned apps or more recent/recommended apps and files on the Start menu, depending on what you find more useful.
  • On the same page, disable “show account-related notifications” to reduce the number of reminders and upsell notifications you see related to your Microsoft account.


  • Open Settings > System > Notifications, scroll down, and expand the additional settings section. Uncheck all three boxes here, which should get rid of all the “finish setting up your PC” prompts, among other things.
  • Also feel free to disable notifications from any specific apps you don’t want to hear from.

In-app AI features

Microsoft has steadily been adding image and text generation capabilities to some of the bedrock in-box Windows apps, from Paint and Photos to Notepad.

Exactly which AI features you’re offered will depend on whether you’ve signed in with a Microsoft account and on whether you’re using a Copilot+ PC with access to more AI features that are executed locally on your PC rather than in the cloud (more on those in a minute).

But the short version is that it’s usually not possible to turn off or remove these AI features without uninstalling the entire app. Apps like Notepad and Edge do have toggles for shutting off Copilot and other related features, but no such toggles exist in Paint, for example.

Even if you can find some Registry key or another backdoor way to shut these things off, there’s no guarantee the settings will stick as these apps are updated; it’s probably easier to just try to ignore any AI features within these apps that you don’t plan to use.

Removing Recall, and other extra steps for Copilot+ PCs

So far, everything we’ve covered has been applicable to any PC that can run Windows 11. But new PCs with the Copilot+ branding—anything with a Qualcomm Snapdragon X chip in it or things with certain Intel Core Ultra or AMD Ryzen AI CPUs—get extra features that other Windows 11 PCs don’t have. Given that these are their own unique subclass of PCs, it’s worth exploring what’s included and what can be turned off.

Removing Recall will be possible, though it’s done through a relatively obscure legacy UI rather than the Settings app. Credit: Andrew Cunningham

One Copilot+ feature that can be fully removed, in part because of the backlash it initially caused, is the data-scraping Recall feature. Recall won’t be enabled on your Copilot+ system unless you’re signed in with a Microsoft account and you explicitly opt in. But if fully removing the feature gives you extra peace of mind, then by all means, remove it.

  • If you just want to make sure Recall isn’t active, navigate to Settings > Privacy & security > Recall & snapshots. This is where you adjust Recall’s settings and verify whether it’s turned on or off.
  • To fully remove Recall, open Settings > System > Optional Features, scroll down to the bottom of this screen, and click More Windows features. This will open the old “Turn Windows features on or off” Control Panel applet used to turn on or remove some legacy or power-user-centric components, like old versions of the .NET Framework or Hyper-V. It’s arranged alphabetically; find Recall in the list, uncheck it, click OK, and let Windows restart to finish removing the feature. (A command-line equivalent is sketched after this list.)
  • In Settings > Privacy & security > Click to Do, you’ll also find a toggle to disable Click to Do, a Copilot+ feature that takes a screenshot of your desktop and tries to make recommendations or suggest actions you might perform (copying and pasting text or an image, for example).
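If you prefer a command line to the legacy applet, DISM should be able to toggle the same optional feature. This is a sketch rather than part of the guide: “Recall” is the feature name that has been documented for recent Copilot+ builds, but treat it as an assumption and check the Get-FeatureInfo output before disabling anything.

```
:: From an elevated Command Prompt. The feature name "Recall" is an assumption; verify it first:
DISM /Online /Get-FeatureInfo /FeatureName:Recall

:: Disable (remove) the feature, then reboot if prompted:
DISM /Online /Disable-Feature /FeatureName:Recall

:: To bring it back later:
DISM /Online /Enable-Feature /FeatureName:Recall
```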

Apps like Paint or Photos may also prompt you to install an extension for AI-powered image generation from the Microsoft Store. This extension—which weighs in at well over a gigabyte as of this writing—is not installed by default. If you have installed it, you can remove it by opening Settings > Apps > Installed apps and removing “ImageCreationHostApp.”

Bonus: Cleaning up Microsoft Edge

I use Edge out of pragmatism rather than love—”the speed, compatibility, and extensions ecosystem of Chrome, backed by the resources of a large company that isn’t Google” is still a decent pitch. But Edge has become steadily less appealing as Microsoft has begun pushing its own services more aggressively and stuffing the browser with AI features. In a vacuum, Firefox aligns better with what I want from a browser, but it just doesn’t respond well to my normal tab-monster habits despite several earnest attempts to switch—things bog down and RAM runs out. I’ve also had mixed experience with the less-prominent Chromium clones, like Opera, Vivaldi, and Brave. So Edge it is, at least for now.

The main problem with Edge on a new install of Windows is that even more than Windows, it exists in a universe where no one would ever want to switch search engines or shut off any of Microsoft’s “value-added features” except by accident. Case in point: Signing in with a Microsoft account will happily sync your bookmarks, extensions, and many kinds of personal data. But many settings for search engine changes or for opting out of Microsoft services do not sync between systems and require a fresh setup each time.

Below are the Edge settings I change to maximize the browser’s usefulness (and usable screen space) while minimizing annoying distractions; it involves turning off most of the stuff Microsoft has added to the Chromium version of Edge since it entered public preview many years ago. Here’s a list of things to tweak, whether you sign in with a Microsoft account or not.

  • On the Start page when you first open the browser, hit the Settings gear in the upper-right corner. Turn off “Quick links” (or if you leave them on, turn off “Show sponsored links”) and then turn off “show content.” Whether you leave the custom background or the weather widget is up to you.
  • Click the “your privacy choices” link at the bottom of the menu and turn off the “share my data with third parties for personalized ads” toggle.

Over the last year, Edge has scattered some of the settings we change, but the browser is still full of toggles we prefer to keep turned off. Credit: Andrew Cunningham

  • In the Edge UI, click the ellipsis icon near the upper-right corner of the screen and click Settings.
  • Click Profiles in the left Settings sidebar. Click Microsoft Rewards, and then turn it off.
  • Click Privacy, Search, & Services in the Settings sidebar.
    • In Tracking prevention, I set tracking prevention to “strict,” though if you use some other kind of content blocker, this may be redundant; it can also occasionally trigger “it looks like you’re using an ad blocker” pop-ups from sites even if you aren’t running one.
    • In Privacy, if they’re enabled, disable the toggles under “Optional diagnostic data,” “Help improve Microsoft products,” and “Allow Microsoft to save your browsing activity.”
    • In Search and connected experiences, disable the “Suggest similar sites when a website can’t be found,” “Save time and money with Shopping in Microsoft Edge,” and “Organize your tabs” toggles.
      • If you want to switch from Bing, click “Address bar and search” and switch to your preferred engine, whether that’s Google, DuckDuckGo, or something else. Then click “Search suggestions and filters” and disable “Show me search and site suggestions using my typed characters.”

These settings retain basic spellcheck without any of the AI-related additions. Credit: Andrew Cunningham

  • Click Appearance in the left-hand Settings sidebar, and scroll down to Copilot and sidebar
    • Turn the sidebar off, and turn off the “Personalize my top sites in customize sidebar” and “Allow sidebar apps to show notifications” toggles.
    • Click Copilot under App specific settings. Turn off “Show Copilot button on the toolbar.” Then, back in the Copilot and sidebar settings, turn off the “Show sidebar button” toggle that has just appeared.
  • Click Languages in the left-hand navigation. Disable “Use Copilot for writing on the web.” Turn off “use text prediction” if you want to prevent things you type from being sent to Microsoft, and switch the spellchecker from Microsoft Editor to Basic. (I don’t actually mind Microsoft Editor, but it’s worth remembering if you’re trying to minimize the amount of data Edge sends back to the company.)

Windows-as-a-nuisance

The most time-consuming part of installing a fresh, direct-from-Microsoft copy of Windows XP or Windows 7 was usually reinstalling all the apps you wanted to run on your PC, from your preferred browser to Office, Adobe Reader, Photoshop, and the VLC player. You still need to do all of that in a new Windows 11 installation. But now more than ever, most people will want to go through the OS and turn off a bunch of stuff to make the day-to-day experience of using the operating system less annoying.

That’s more relevant now that Microsoft has formally ended support for Windows 10. Yes, Windows 10 users can get an extra year of security updates relatively easily, but many who have been putting off the Windows 11 upgrade will be taking the plunge this year.

The settings changes we’ve recommended here may not fix everything, but they can at least give you some peace, shoving Microsoft into the background and allowing you to do what you want with your PC without as much hassle. Ideally, Microsoft would insist on respectful, user-friendly defaults itself. But until that happens, these changes are the best you can do.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Internet Archive’s legal fights are over, but its founder mourns what was lost


“We survived, but it wiped out the library,” Internet Archive’s founder says.

Internet Archive founder Brewster Kahle celebrates 1 trillion web pages on stage with staff. Credit: via the Internet Archive

This month, the Internet Archive’s Wayback Machine archived its trillionth webpage, and the nonprofit invited its more than 1,200 library partners and 800,000 daily users to join a celebration of the moment. To honor “three decades of safeguarding the world’s online heritage,” the city of San Francisco declared October 22 to be “Internet Archive Day.” The Archive was also recently designated a federal depository library by Sen. Alex Padilla (D-Calif.), who proclaimed the organization a “perfect fit” to expand “access to federal government publications amid an increasingly digital landscape.”

The Internet Archive might sound like a thriving organization, but it only recently emerged from years of bruising copyright battles that threatened to bankrupt the beloved library project. In the end, the fight led to more than 500,000 books being removed from the Archive’s “Open Library.”

“We survived,” Internet Archive founder Brewster Kahle told Ars. “But it wiped out the Library.”

An Internet Archive spokesperson confirmed to Ars that the archive currently faces no major lawsuits and no active threats to its collections. Kahle thinks “the world became stupider” when the Open Library was gutted—but he’s moving forward with new ideas.

History of the Internet Archive

Kahle has been striving since 1996 to transform the Internet Archive into a digital Library of Alexandria—but “with a better fire protection plan,” joked Kyle Courtney, a copyright lawyer and librarian who leads the nonprofit eBook Study Group, which helps states update laws to protect libraries.

When the Wayback Machine was born in 2001 as a way to take snapshots of the web, Kahle told The New York Times that building free archives was “worth it.” He was also excited that the Wayback Machine had drawn renewed media attention to libraries.

At the time, law professor Lawrence Lessig predicted that the Internet Archive would face copyright battles, but he also believed that the Wayback Machine would change the way the public understood copyright fights.

”We finally have a clear and tangible example of what’s at stake,” Lessig told the Times. He insisted that Kahle was “defining the public domain” online, which would allow Internet users to see ”how easy and important” the Wayback Machine “would be in keeping us sane and honest about where we’ve been and where we’re going.”

Kahle suggested that IA’s legal battles weren’t with creators or publishers so much as with large media companies that he thinks aren’t “satisfied with the restriction you get from copyright.”

“They want that and more,” Kahle said, pointing to e-book licenses that expire as proof that libraries increasingly aren’t allowed to own their collections. He also suspects that such companies wanted the Wayback Machine dead—but the Wayback Machine has survived and proved itself to be a unique and useful resource.

The Internet Archive also began archiving—and then lending—e-books. For a decade, the Archive had loaned out individual e-books to one user at a time without triggering any lawsuits. That changed when IA decided to temporarily lift the cap on loans from its Open Library project to create a “National Emergency Library” as libraries across the world shut down during the early days of the COVID-19 pandemic. The project eventually grew to 1.4 million titles.

But lifting the lending restrictions also brought more scrutiny from copyright holders, who eventually sued the Archive. Litigation went on for years. In 2024, IA lost its final appeal in a lawsuit brought by book publishers over the Archive’s Open Library project, which used a novel e-book lending model to bypass publishers’ licensing fees and checkout limitations. Damages could have topped $400 million, but publishers ultimately announced a “confidential agreement on a monetary payment” that did not bankrupt the Archive.

Litigation has continued, though. More recently, the Archive settled another suit over its Great 78 Project after music publishers sought damages of up to $700 million. A settlement in that case, reached last month, was similarly confidential. In both cases, IA’s experts challenged publishers’ estimates of their losses as massively inflated.

For Internet Archive fans, a group that includes longtime Internet users, researchers, students, historians, lawyers, and the US government, the end of the lawsuits brought a sigh of relief. The Archive can continue—but it can’t run one of its major programs in the same way.

What the Internet Archive lost

To Kahle, the suits have been an immense setback to IA’s mission.

Publishers had argued that the Open Library’s lending harmed the e-book market, but IA says its vision for the project was not to undercut e-book sales (harm the Archive denies its lending caused) but to make it easier for researchers to reference e-books by allowing Wikipedia to link to book scans. Wikipedia has long been one of the most visited websites in the world, and the Archive wanted to deepen its authority as a research tool.

“One of the real purposes of libraries is not just access to information by borrowing a book that you might buy in a bookstore,” Kahle said. “In fact, that’s actually the minority. Usually, you’re comparing and contrasting things. You’re quoting. You’re checking. You’re standing on the shoulders of giants.”

Meredith Rose, senior policy counsel for Public Knowledge, told Ars that the Internet Archive’s Wikipedia enhancements could have served to surface information that’s often buried in books, giving researchers a streamlined path to source accurate information online.

But Kahle said the lawsuits against IA showed that “massive multibillion-dollar media conglomerates” have their own interests in controlling the flow of information. “That’s what they really succeeded at—to make sure that Wikipedia readers don’t get access to books,” Kahle said.

At the heart of the Open Library lawsuit was publishers’ market for e-book licenses, which libraries complain provide only temporary access for a limited number of patrons and cost substantially more than the acquisition of physical books. Some states are crafting laws to restrict e-book licensing, with the aim of preserving library functions.

“We don’t want libraries to become Hulu or Netflix,” said Courtney of the eBook Study Group, referring to services that warn patrons things like “last day to check out this book, August 31st, then it goes away forever.”

He, like Kahle, is concerned that libraries will become unable to fulfill their longtime role—preserving culture and providing equal access to knowledge. Remote access, Courtney noted, benefits people who can’t easily get to libraries, like the elderly, people with disabilities, rural communities, and foreign-deployed troops.

Before the Internet Archive cases, libraries had won some important legal fights, according to Brandon Butler, a copyright lawyer and executive director of Re:Create, a coalition of “libraries, civil libertarians, online rights advocates, start-ups, consumers, and technology companies” that is “dedicated to balanced copyright and a free and open Internet.”

But the Internet Archive’s e-book fight didn’t set back libraries, Butler said, because the loss didn’t reverse any prior court wins. Instead, IA had been “exploring another frontier” beyond the Google Books ruling, which deemed Google’s searchable book excerpts a transformative fair use, hoping that linking to books from Wikipedia would also be deemed fair use. But IA “hit the edge” of what courts would allow, Butler said.

IA basically asked, “Could fair use go this much farther?” Butler said. “And the courts said, ‘No, this is as far as you go.’”

To Kahle, the cards feel stacked against the Internet Archive, with courts, lawmakers, and lobbyists backing corporations seeking “hyper levels of control.” He said IA has always served as a research library—an online destination where people can cross-reference texts and verify facts, just like perusing books at a local library.

“We’re just trying to be a library,” Kahle said. “A library in a traditional sense. And it’s getting hard.”

Fears of big fines may delay digitization projects

President Donald Trump’s cuts to the federal Institute of Museum and Library Services have put America’s public libraries at risk, and reduced funding will continue to challenge libraries in the coming years, the American Library Association has warned. Butler has also suggested that under-resourced libraries may delay digitization efforts for preservation purposes if they worry that publishers may threaten costly litigation.

He told Ars he thinks courts are getting it right on recent fair use rulings. But he noted that libraries have fewer resources for legal fights because copyright law “has this provision that says, well, if you’re a copyright holder, you really don’t have to prove that you suffered any harm at all.”

“You can just elect [to receive] a massive payout based purely on the fact that you hold a copyright and somebody infringed,” Butler said. “And that’s really unique. Almost no other country in the world has that sort of a system.”

So while companies like AI firms may be able to afford legal fights with rights holders, libraries must be careful, even when they launch projects that seem “completely harmless and innocuous,” Butler said. Consider the Internet Archive’s Great 78 Project, which digitized 400,000 old shellac records, known as 78s, that were originally pressed from 1898 to the 1950s.

“The idea that somebody’s going to stream a 78 of an Elvis song instead of firing it up on their $10-a-month Spotify subscription is silly, right?” Butler said. “It doesn’t pass the laugh test, but given the scale of the project—and multiply that by the statutory damages—and that makes this an extremely dangerous project all of a sudden.”

Butler suggested that statutory damages could disrupt the balance that ensures the public has access to knowledge, creators get paid, and human creativity thrives, as AI advances and libraries’ growth potentially stalls.

“It sets the risk so high that it may force deals in situations where it would be better if people relied on fair use. Or it may scare people from trying new things because of the stakes of a copyright lawsuit,” Butler said.

Courtney, who co-wrote a whitepaper detailing the legal basis for different forms of “controlled digital lending,” including the kind the Open Library project uses, suggested that Kahle may be the person best prepared to push the envelope on copyright.

When asked how the Internet Archive managed to avoid financial ruin, Courtney said it survived “only because their leader” is “very smart and capable.” Of all the “flavors” of controlled digital lending (CDL) that his paper outlined, Kahle’s methodology for the Open Library Project was the most “revolutionary,” Courtney said.

Importantly, IA’s loss did not doom other kinds of CDL that other archives use, he noted, nor did it prevent libraries from trying new things.

“Fair use is a case-by-case determination” that will be made as urgent preservation needs arise, Courtney told Ars, and “libraries have a ton of stuff that aren’t going to make the jump to digital unless we digitize them. No one will have access to them.”

What’s next for the Internet Archive?

The lawsuits haven’t dampened Kahle’s resolve to expand IA’s digitization efforts, though. Moving forward, the group will be growing a project called Democracy’s Library, which is “a free, open, online compendium of government research and publications from around the world” that will be conveniently linked in Wikipedia articles to help researchers discover them.

The Archive is also collecting as many physical materials as possible to help preserve knowledge, even as “the library system is largely contracting,” Kahle said. He noted that libraries historically tend to grow in societies that prioritize education and decline in societies where power is being concentrated, and he’s worried about where the US is headed. That makes it hard to predict if IA—or any library project—will be supported in the long term.

With governments globally partnering with the biggest tech companies to try to win the artificial intelligence race, critics have warned of threats to US democracy, while the White House has escalated its attack on libraries, universities, and science over the past year.

Meanwhile, AI firms face dozens of lawsuits from creators and publishers, which Kahle thinks only the biggest tech companies can likely afford to outlast. The momentum behind AI risks giving corporations even more control over information, Kahle said, and it’s uncertain if archives dedicated to preserving the public memory will survive attacks from multiple fronts.

“Societies that are [growing] are the ones that need to educate people” and therefore promote libraries, Kahle said. But when societies are “going down,” such as in times of war, conflict, and social upheaval, libraries “tend to get destroyed by the powerful. It used to be king and church, and it’s now corporations and governments.” (He recommended The Library: A Fragile History as a must-read to understand the challenges libraries have always faced.)

Kahle told Ars he’s not “black and white” on AI, and he even sees some potential for AI to enhance library services.

He’s more concerned that libraries in the US are losing support and may soon cease to perform classic functions that have always benefited civilizations—like buying books from small publishers and local authors, supporting intellectual endeavors, and partnering with other libraries to expand access to diverse collections.

To prevent these cultural and intellectual losses, he plans to position IA as a refuge for displaced collections, with hopes to digitize as much as possible while defending the early dream that the Internet could equalize access to information and supercharge progress.

“We want everyone [to be] a reader,” Kahle said, and that means “we want lots of publishers, we want lots of vendors, booksellers, lots of libraries.”

But, he asked, “Are we going that way? No.”

To turn things around, Kahle suggested that copyright laws be “re-architected” to ensure “we have a game with many winners”—where authors, publishers, and booksellers get paid, library missions are respected, and progress thrives. Then society can figure out “what do we do with this new set of AI tools” to keep the engine of human creativity humming.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


“unexpectedly,-a-deer-briefly-entered-the-family-room”:-living-with-gemini-home

“Unexpectedly, a deer briefly entered the family room”: Living with Gemini Home


60 percent of the time, it works every time

Gemini for Home unleashes gen AI on your Nest camera footage, but it gets a lot wrong.

The Google Home app has Gemini integration for paying customers. Credit: Ryan Whitwam

You just can’t ignore the effects of the generative AI boom.

Even if you don’t go looking for AI bots, they’re being integrated into virtually every product and service. And for what? There’s a lot of hand-wavey chatter about agentic this and AGI that, but what can “gen AI” do for you right now? Gemini for Home is Google’s latest attempt to make this technology useful, integrating Gemini with the smart home devices people already have. Anyone paying for extended video history in the Home app is about to get a heaping helping of AI, including daily summaries, AI-labeled notifications, and more.

Given the supposed power of AI models like Gemini, recognizing events in a couple of videos and answering questions about them doesn’t seem like a bridge too far. And yet Gemini for Home has demonstrated a tenuous grasp of the truth, which can lead to some disquieting interactions, like periodic warnings of home invasion, both human and animal.

It can do some neat things, but is it worth the price—and the headaches?

Does your smart home need a premium AI subscription?

Simply using the Google Home app to control your devices does not turn your smart home over to Gemini. This is part of Google’s higher-tier paid service, which comes with extended camera history and Gemini features for $20 per month. That subscription pipes your video into a Gemini AI model that generates summaries for notifications, as well as a “Daily Brief” that offers a rundown of everything that happened on a given day. The cheaper $10 plan provides less video history and no AI-assisted summaries or notifications. Both plans enable Gemini Live on smart speakers.

According to Google, it doesn’t send all of your video to Gemini. That would be a huge waste of compute cycles, so Gemini only sees (and summarizes) event clips. Those summaries are then distilled at the end of the day to create the Daily Brief, which usually results in a rather boring list of people entering and leaving rooms, dropping off packages, and so on.

Importantly, the Gemini model powering this experience only processes the visual elements of videos; it does not analyze audio from your recordings. So unusual noises or conversations captured by your cameras will not be searchable or reflected in AI summaries. This may be intentional, to ensure your conversations are not regurgitated by an AI.

Gemini smart home plans. Credit: Google

Paying for Google’s AI-infused subscription also adds Ask Home, a conversational chatbot that can answer questions about what has happened in your home based on the status of smart home devices and your video footage. You can ask questions about events, retrieve video clips, and create automations.

There are definitely some issues with Gemini’s understanding of video, but Ask Home is quite good at creating automations. It was possible to set up automations in the old Home app, but the updated AI is able to piece together automations based on your natural language request. Perhaps thanks to the limited set of possible automation elements, the AI gets this right most of the time. Ask Home is also usually able to dig up past event clips, as long as you are specific about what you want.

The Advanced plan for Gemini Home keeps your videos for 60 days, so you can only query the robot on clips from that time period. Google also says it does not retain any of that video for training. The only instance in which Google will use security camera footage for training is if you choose to “lend” it to Google via an obscure option in the Home app. Google says it will keep these videos for up to 18 months or until you revoke access. However, your interactions with Gemini (like your typed prompts and ratings of outputs) are used to refine the model.

The unexpected deer

Every generative AI bot makes the occasional mistake, but you’ll probably not notice every one. When the AI hallucinates about your daily life, however, it’s more noticeable. There’s no reason Google should be confused by my smart home setup, which features a couple of outdoor cameras and one indoor camera—all Nest-branded with all the default AI features enabled—to keep an eye on my dogs. So the AI is seeing a lot of dogs lounging around and staring out the window. One would hope that it could reliably summarize something so straightforward.

One may be disappointed, though.

In my first Daily Brief, I was fascinated to see that Google spotted some indoor wildlife. “Unexpectedly, a deer briefly entered the family room,” Gemini said.

Dogs and deer are pretty much the same thing, right? Credit: Ryan Whitwam

Gemini does deserve some credit for recognizing that the appearance of a deer in the family room would be unexpected. But the “deer” was, naturally, a dog. This was not a one-time occurrence, either. Gemini sometimes identifies my dogs correctly, but many event clips and summaries still tell me about the notable but brief appearance of deer around the house and yard.

This deer situation serves as a keen reminder that this new type of AI doesn’t “think,” although the industry’s use of that term to describe simulated reasoning could lead you to believe otherwise. A person looking at this video wouldn’t even entertain the possibility that they were seeing a deer after they’ve already seen the dogs loping around in other videos. Gemini doesn’t have that base of common sense, though. If the tokens say deer, it’s a deer. I will say, though, Gemini is great at recognizing car models and brand logos. Make of that what you will.

The animal mix-up is not ideal, but it’s not a major hurdle to usability. I didn’t seriously entertain the possibility that a deer had wandered into the house, and it’s a little funny the way the daily report continues to express amazement that wildlife is invading. It’s a pretty harmless screw-up.

“Overall identification accuracy depends on several factors, including the visual details available in the camera clip for Gemini to process,” explains a Google spokesperson. “As a large language model, Gemini can sometimes make inferential mistakes, which leads to these misidentifications, such as confusing your dog with a cat or deer.”

Google also says that you can tune the AI by correcting it when it screws up. This works sometimes, but the system still doesn’t truly understand anything—that’s beyond the capabilities of a generative AI model. After telling Gemini that it’s seeing dogs rather than deer, it sees wildlife less often. However, it doesn’t seem to trust me all the time, causing it to report the appearance of a deer that is “probably” just a dog.

A perfect fit for spooky season

Gemini’s smart home hallucinations also have a less comedic side. When Gemini mislabels an event clip, you can end up with some pretty distressing alerts. Imagine that you’re out and about when your Gemini assistant hits you with a notification telling you, “A person was seen in the family room.”

A person roaming around the house you believed to be empty? That’s alarming. Is it an intruder, a hallucination, a ghost? So naturally, you check the camera feed to find… nothing. An Ars Technica investigation confirms AI cannot detect ghosts. So a ghost in the machine?

Oops, we made you think someone broke into your house. Credit: Ryan Whitwam

On several occasions, I’ve seen Gemini mistake dogs and totally empty rooms (or maybe a shadow?) for a person. It may be alarming at first, but after a few false positives, you grow to distrust the robot. Now, even if Gemini correctly identified a random person in the house, I’d probably ignore it. Unfortunately, this is the only notification experience for Gemini Home Advanced.

“You cannot turn off the AI description while keeping the base notification,” a Google spokesperson told me. They noted, however, that you can disable person alerts in the app. Those are enabled when you turn on Google’s familiar faces detection.

Gemini often twists reality just a bit instead of creating it from whole cloth. A person holding anything in the backyard is doing yardwork. One person anywhere, doing anything, becomes several people. A dog toy becomes a cat lying in the sun. A couple of birds become a raccoon. Gemini likes to ignore things, too, like denying there was a package delivery even when there’s a video tagged as “person delivers package.”

Gemini still refused to admit it was wrong. Credit: Ryan Whitwam

At the end of the day, Gemini is labeling most clips correctly and therefore produces mostly accurate, if sometimes unhelpful, notifications. The problem is the flip side of “mostly,” which is still a lot of mistakes. Some of these mistakes compel you to check your cameras—at least, before you grow weary of Gemini’s confabulations. Instead of saving time and keeping you apprised of what’s happening at home, it wastes your time. For this thing to be useful, inferential errors cannot be a daily occurrence.

Learning as it goes

Google says its goal is to make Gemini for Home better for everyone. The team is “investing heavily in improving accurate identification” to cut down on erroneous notifications. The company also believes that having people add custom instructions is a critical piece of the puzzle. Maybe in the future, Gemini for Home will be more honest, but it currently takes a lot of hand-holding to move it in the right direction.

With careful tuning, you can indeed address some of Gemini for Home’s flights of fancy. I see fewer deer identifications after tinkering, and a couple of custom instructions have made the Home Brief waste less space telling me when people walk into and out of rooms that don’t exist. But I still don’t know how to prompt my way out of Gemini seeing people in an empty room.

Gemini AI features work on all Nest cams, but the new 2025 models are “designed for Gemini.” Credit: Ryan Whitwam

Despite its intention to improve Gemini for Home, Google is releasing a product that just doesn’t work very well out of the box, and it misbehaves in ways that are genuinely off-putting. Security cameras shouldn’t lie about seeing intruders, nor should they tell me I’m lying when they fail to recognize an event. The Ask Home bot has the standard disclaimer recommending that you verify what the AI says. You have to take that warning seriously with Gemini for Home.

At launch, it’s hard to justify paying for the $20 Advanced Gemini subscription. If you’re already paying because you want the 60-day event history, you’re stuck with the AI notifications. You can ignore the existence of Daily Brief, though. Stepping down to the $10 per month subscription gets you just 30 days of event history with the old non-generative notifications and event labeling. Maybe that’s the smarter smart home bet right now.

Gemini for Home is widely available for those who opted into early access in the Home app. So you can avoid Gemini for the time being, but it’s only a matter of time before Google flips the switch for everyone.

Hopefully it works better by then.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.


halloween-film-fest:-15-classic-ghost-stories

Halloween film fest: 15 classic ghost stories


From The Uninvited to Crimson Peak, these films will help you set the tone for spooky season.

It’s spooky season, and what better way to spend Halloween weekend than settling in to watch a classic Hollywood ghost story? To help you figure out what to watch, we’ve compiled a handy list of 15 classic ghost stories, presented in chronological order.

What makes a good ghost story? Everyone’s criteria (and taste) will differ, but for this list, we’ve focused on more traditional elements. There’s usually a spooky old house with a ghostly presence and/or someone who’s attuned to said presence. The living must solve the mystery of what happened to trap the ghost(s) there in hopes of setting said ghost(s) free. In that sense, the best, most satisfying ghost stories are mysteries—and sometimes also love stories. The horror is more psychological, and when it comes to gore, less is usually more.

As always, the list below isn’t meant to be exhaustive. Mostly, we’re going for a certain atmospheric vibe to set a mood. So our list omits overt comedies like Ghostbusters and (arguably) Ghost, as well as supernatural horror involving demonic possession—The Exorcist, The Conjuring, Insidious—or monsters, like The Babadook or Sinister. Feel free to suggest your own recommendations in the comments.

(Various spoilers below, but no major reveals.)

The Uninvited (1944)

B&W image of man and woman in 1940s evening wear holding a candle and looking up a flight of stairs

Credit: Paramount Pictures

Brother and sister Rick and Pamela Fitzgerald (Ray Milland and Ruth Hussey) fall in love with an abandoned seaside abode called Windward House while vacationing in England. They pool their resources and buy it for a very low price, since its owner, Commander Beech (Donald Crisp), is oddly desperate to unload it. This upsets his 20-year-old granddaughter, Stella (Gail Russell), whose mother fell to her death from the cliffs near the house when Stella was just a toddler.

Rick, a musician and composer, becomes infatuated with the beautiful young woman. And before long, strange phenomena begin manifesting: a woman sobbing, an odd chill in the artist’s studio, a flower wilting in mere seconds—plus, the Fitzgeralds’ dog and their housekeeper’s cat both refuse to go upstairs. Whatever haunts the house seems to be focused on Stella.

The Uninvited was director Lewis Allen’s first feature film—adapted from a 1941 novel by Dorothy Macardle—but it has aged well. Sure, there are some odd tonal shifts; the light-hearted sibling banter between Rick and Pamela, while enjoyable, does sometimes weaken the scare factor. But the central mystery is intriguing and the visuals are striking, snagging an Oscar nomination for cinematographer Charles Lang. Bonus points for the tune “Stella by Starlight,” written specifically for the film and later evolving into a beloved jazz standard, performed by such luminaries as Ella Fitzgerald, Frank Sinatra, Charlie Parker, Chet Baker, and Miles Davis.

The Ghost and Mrs. Muir (1947)

young woman and middle aged man standing and talking

Credit: 20th Century Fox

This is one of those old Hollywood classics that has ably withstood the test of time. Gene Tierney stars as the titular Mrs. Lucy Muir, a young widow with a little girl who decides to leave London and take up residence in the seaside village of Whitecliff. She rents Gull Cottage despite the realtor’s reluctance to even show it to her. Lucy falls in love with the house and is intrigued by the portrait of its former owner: a rough sea captain named Daniel Gregg (Rex Harrison), who locals say died by suicide in the house. Gregg’s ghost still haunts Gull Cottage, but he tries in vain to scare away the tough-minded Lucy. The two become friends and start to fall in love—but can any romance between the living and the dead truly thrive?

The Ghost and Mrs. Muir earned cinematographer Charles Lang another well-deserved Oscar nomination. Tierney and Harrison have great on-screen chemistry, and the film manages to blend wry humor and pathos into what is essentially a haunting love story of two people finding each other at the wrong time. There’s no revenge plot, no spine-tingling moments of terror, no deep, dark secret—just two people, one living and one dead, coming to terms in their respective ways with loss and regret to find peace.

The Innocents (1961)

B&W still of young boy being tucked in by a young woman.

Credit: 20th Century Fox

Henry James’ 1898 novella The Turn of the Screw has inspired many adaptations over the years. Most recently, Mike Flanagan used the plot and central characters as the main narrative framework for his Netflix miniseries The Haunting of Bly Manor. But The Innocents is widely considered to be the best.

Miss Giddens (Deborah Kerr) has been hired for her first job as a governess to two orphaned children at Bly Manor, who sometimes exhibit odd behavior. The previous governess, Miss Jessel (Clytie Jessop), had died tragically the year before, along with her lover, Peter Quint (Peter Wyngarde). Miss Giddens becomes convinced that their ghosts have possessed the children so they can still be together in death. Miss Giddens resolves to free the children, with tragic consequences.

Literary scholars and critics have been debating The Turn of the Screw ever since it was first published because James was deliberately ambiguous about whether the governess saw actual ghosts or was simply going mad and imagining them. The initial screenwriter for The Innocents, William Archibald, assumed the ghosts were real. Director Jack Clayton preferred to be true to James’ original ambiguity, and the final script ended up somewhere in between, with some pretty strong Freudian overtones where our repressed governess is concerned.

This is a film you’ll want to watch with all the lights off. It’s dark—literally, thanks to Clayton’s emphasis on shadows and light to highlight Miss Giddens’ isolation. The first 45 seconds are just a black screen with a child’s voice humming a haunting tune. But it’s a beautifully crafted example of classic psychological horror that captures something of the chilly, reserved spirit of Henry James.

The Haunting (1963)

B&W still of group of people in 1960s clothing standing in drawing room of a haunted house

Credit: Metro-Goldwyn-Mayer

There have also been numerous adaptations of Shirley Jackson’s 1959 Gothic horror novel The Haunting of Hill House, including Mike Flanagan’s boldly reimagined miniseries for Netflix. But many people—Martin Scorsese and Steven Spielberg among them—consider director Robert Wise’s The Haunting to be not only the best adaptation but one of the best horror films of all time. (Please do not confuse the Wise version with the disappointing 1999 remake, which tried to make up for its shortcomings with lavish sets and showy special effects—to no avail.)

Psychologist Dr. John Markaway (Richard Johnson) brings three people to the titular Hill House, intent on exploring its legendary paranormal phenomena. There’s a psychic named Theodora (Claire Bloom); the emotionally vulnerable Eleanor (Julie Harris), who has experienced poltergeists and just lost her domineering mother; and the skeptical Luke (Russ Tamblyn), who will inherit the house when its elderly owner dies. The house does not disappoint, and the visitors experience strange sounds and mysterious voices, doors banging shut on their own, and a sinister message scrawled on a wall: “Help Eleanor come home.”

Initial reviews were mixed, but the film has grown in stature over the decades. Jackson herself was not a fan. Wise did make considerable changes, shortening the backstory and cutting out several characters. He also downplayed the overt supernatural elements in Jackson’s novel, focusing on Eleanor’s mental instability and eventual breakdown. Wise envisioned it as the house taking over her mind. Modern sensibilities accustomed to much more intense horror might not find The Haunting especially scary, but it is beautifully rendered with skillful use of clever special effects. For instance, to make the house seem alive, Wise filmed the exterior shots in infrared to give it an otherworldly vibe, framing the shots so that the windows resemble the house’s eyes.

The Shining (1980)

twin girls in matching light blue dresses and white knee socks standing in a hallway with yellow flowered wallpaper

Credit: Warner Bros.

Stanley Kubrick’s adaptation of the 1977 bestselling novel by Stephen King probably needs no introduction. But for those not familiar with the story, Jack Torrance (Jack Nicholson) takes a position as the winter caretaker of the remote Overlook Hotel in the Rocky Mountains, bringing his wife, Wendy (Shelley Duvall), and young son, Danny (Danny Lloyd). Danny has a psychic gift called “the shining,” which allows him to communicate telepathically with the hotel cook, Dick Halloran (Scatman Crothers). The previous caretaker went mad and murdered his family. Over the course of the film, Jack slowly begins to succumb to the same madness, putting his own wife and child in danger.

Initial reviews weren’t particularly favorable—King himself is not a fan of the film—but it’s now considered a horror classic and a subject of much academic study among film scholars. This is another film that has seen a lot of debate about whether the ghosts are real, with some arguing that Jack and Danny might just be hallucinating the Overlook’s malevolent ghosts into existence. Or maybe it’s the hotel manifesting ghosts to drive Jack insane. (I choose to interpret the ghosts in The Shining as real while appreciating the deliberate ambiguity.) There are so many memorable moments: the eerie twin girls (“Come and play with us”), the bathtub lady in Room 237, Lloyd the creepy bartender, the elaborate hedge maze, “REDRUM,” Jack hacking through a door and exclaiming, “Heeere’s Johnny!” and that avalanche of blood pouring down a hotel hallway. It’s a must-watch.

Ghost Story (1981)

young woman with dark haired bob wearing a 1920s white dress and hat, standing in a road illuminated by headlights on a snowy night

Credit: Universal Pictures

Adapted from the 1979 novel by Peter Straub, Ghost Story centers on a quartet of elderly men in a New England town called Milburn. They are lifelong friends who call themselves the Chowder Society and gather every week to tell spooky stories. Edward Wanderly (Douglas Fairbanks Jr.) is the town’s mayor; Ricky Hawthorne (Fred Astaire) is a businessman; Sears James (John Houseman) is a lawyer; and John Jaffrey (Melvyn Douglas) is a physician. The trouble starts when Edward’s son, David (Craig Wasson), falls to his death from a New York City high-rise after the young woman he’s engaged to suddenly turns into a putrefying living corpse in their shared bed.

The apparent suicide brings Edward’s other son, Dan (also Wasson), back to Milburn. Dan doesn’t believe his brother killed himself and tells the Chowder Society his own ghost story: He fell in love with a young woman named Alma (Alice Krige) before realizing something was wrong with her. When he broke things off, Alma got engaged to David. And it just so happens that Alma bears a striking resemblance to a young woman named Eva Galli (also Krige) captured in an old photograph with all the members of the Chowder Society back in their youth. Yep, the old men share a dark secret, and the chickens are finally coming home to roost.

I won’t claim that Ghost Story is the best film of all time. It has its flaws, most notably the inclusion of two escaped psychiatric hospital patients purportedly in the service of Eva’s vengeful ghost. The tone is occasionally a bit over-the-top, but the film honors all the classic tropes, and there are many lovely individual scenes. The main cast is terrific; it was the final film for both Astaire and Fairbanks. And that spooky New England winter setting is a special effect all its own. The sight of Eva’s apparition materializing through the swirling snow to stand in the middle of the road in front of Sears’ car is one that has stuck with me for decades.

Poltergeist (1982)

back view of little girl silhouetted against the TV glow; screen is all static and girl is holding both hands to the screen

Credit: MGM/UA Entertainment

“They’re heeere!” That might be one of the best-known movie lines from the 1980s, announcing the arrival of the titular poltergeists. In this Tobe Hooper tale of terror, Steven and Diane Freeling (Craig T. Nelson and JoBeth Williams) have just moved with their three children into a suburban dream house in the newly constructed community of Cuesta Verde, California. Their youngest, Carol Anne (Heather O’Rourke), starts hearing voices in the TV static late at night, and things soon escalate as multiple ghosts play pranks on the family. When Carol Anne mysteriously disappears, Steven and Diane realize at least one of the ghosts is far from friendly and call on local parapsychologists for help.

Steven Spielberg initiated the project, but his obligations to filming E.T. prevented him from directing, although he visited the set frequently. (There’s been considerable debate over whether Hooper or Spielberg really directed the film, but the consensus over time credits Hooper.) Despite the super-scary shenanigans, it definitely has elements of that lighter Spielberg touch, and it all adds up to a vastly entertaining supernatural thriller. Special shoutout to Zelda Rubinstein’s eccentric psychic medium with the baby voice, Tangina, who lends an element of whimsy to the proceedings.

Lady in White (1988)

young boy curled up near an arched window at night with a hat and wearing red gloves

Credit: New Century Vista Film

As a child actor, Lukas Haas won audience hearts when he played an Amish boy who sees a murder in the 1985 film Witness. Less well-known is his performance in Lady in White, playing 9-year-old Frankie Scarlatti. On Halloween in 1962, school bullies lock Frankie in the classroom coatroom, where he is trapped for the night. That’s when he sees the apparition of a young girl (Joelle Jacobi) being brutally murdered by an invisible assailant. Then an actual man enters, trying to recover something from a floor grate. When he realizes someone is there, he strangles Frankie unconscious; Frankie’s father, Angelo (Alex Rocco), finds and rescues him in the nick of time.

Frankie has a vision of that same girl while unconscious, asking him to help her find her mother. That little girl, it turns out, was one of 11 child victims targeted by a local serial killer. Frankie and his older brother, Geno (Jason Presson), decide to investigate. Their efforts lead to some shocking revelations about tragedies past and present as the increasingly desperate killer sets his sights on Frankie.

Director Frank LaLoggia based the story on the “lady in white” legend about a ghostly figure searching for her daughter in LaLoggia’s hometown of Rochester, New York. Granted, the special effects are cheesy and dated—the director was working with a lean $4.7 million budget—and LaLoggia can’t seem to end the film, adding twist after twist well after the audience is ready for a denouement. But overall, it’s a charming film, with plenty of warmth and heart to offset the dark premise, primarily because the Scarlattis are the quintessential Italian American New England family. Lady in White inexplicably bombed at the box office, despite positive critical reviews, but it’s a hidden 1980s gem.

Dead Again (1991)

young woman, frightened, pointing gun at the camera

Credit: Paramount Pictures

In 1948, a composer named Roman Strauss is convicted of brutally stabbing his pianist wife, Margaret, to death with a pair of scissors and is executed. Over 40 years later, a woman (Emma Thompson) shows up with amnesia and is unable to speak at a Catholic orphanage that just happens to be the old Strauss mansion. The woman regularly barricades her door at night and inevitably wakes up screaming.

The nuns ask private investigator Mike Church (Kenneth Branagh) to find out her identity. Antiques dealer and hypnotist Franklyn Madson (Derek Jacobi) offers his assistance to help “Grace” recover her memory. Madson regresses her to a past life—that of Margaret and Roman Strauss’s doomed marriage. The truth about what really happened in 1948 unfolds in a series of black-and-white flashbacks—and they just might be the key to Grace’s cure.

As director, Branagh drew influences from various Hitchcock films, Rebecca, and Citizen Kane, as well as the stories of Edgar Allan Poe. The film is tightly written and well-plotted, and it ably balances suspense and sentiment. Plus, there are great performances from the entire cast, especially Robin Williams as a disgraced psychiatrist now working in a grocery store.

Some might question whether Dead Again counts as a bona fide ghost story instead of a romantic thriller with supernatural elements, i.e., hypnotherapy and past-life regression. It’s still two dead lovers, Roman and Margaret, reaching through the past to their reincarnated selves in the present to solve a mystery, exact justice, and get their happily ever after. That makes it a ghost story to me.

Stir of Echoes (1999)

shirtless man in jeans digging a hole in his backyard

Credit: Artisan Entertainment

Stir of Echoes is one of my favorite Kevin Bacon films, second only to Tremors, although it hasn’t achieved the same level of cult classic success. Bacon plays Tom Witzky, a phone lineman in a working-class Chicago neighborhood. He loves his wife Maggie (Kathryn Erbe) and son Jake (Zachary David Cope), but he struggles with the fact that his life just isn’t what he’d imagined. One night, he agrees to be hypnotized by his sister-in-law (Illeana Douglas) after mocking her belief in the paranormal. This unlocks latent psychic abilities, which he shares with his far more gifted son, and he begins having disturbing visions of a young girl who disappeared from the neighborhood the year before. Naturally, Tom becomes obsessed with solving the mystery behind his intensifying visions.

For this adaptation of a Richard Matheson novel, director David Koepp drew on films like Repulsion, Rosemary’s Baby, and The Dead Zone for tonal inspiration, but Stir of Echoes still falls firmly into the ghost story genre. It’s just grounded in an ordinary real-world setting that makes the spooky suspense all the more effective, further aided by Bacon inhabiting the role of Tom so effortlessly that he barely seems to be acting. Alas, the film suffered at the box office and from unfavorable (and unfair) contemporary comparisons to The Sixth Sense (see below), released that same year. But it’s well worth a watch (and a rewatch).

The Sixth Sense (1999)

little boy looking scared being comforted by a man kneeling in front of him

Credit: Buena Vista Pictures

This is the film that launched director M. Night Shyamalan’s career, snagging him two Oscar nominations in the process. Child psychologist Malcolm Crowe (Bruce Willis) is shot by a troubled former patient, Vincent (Donnie Wahlberg), one night at home. A year later, he has a new case with striking similarities—9-year-old Cole Sear (Haley Joel Osment)—and devotes himself to helping the boy, as a way to atone for his failure to help Vincent. Malcolm thinks Cole’s problems might be even more severe, especially when Cole confesses (in a famous scene), “I see dead people.” And those dead people can be really scary, especially to a 9-year-old boy.

The Sixth Sense was a massive hit, grossing over $672 million globally, fueled in part by a jolting final plot twist that hardly anyone saw coming. But it’s Osment’s astonishing performance as Cole that anchored it all and marked the young actor as a rising talent. (It’s also one of Willis’ best, most nuanced performances.) Shyamalan has made many films since, and several are really good, but none have ever come close to this one.

What Lies Beneath (2000)

Beautiful blond woman in a sweater standing in the fog hugging herself to keep warm

Credit: DreamWorks Pictures

A luminous Michelle Pfeiffer stars as Claire Spencer, a gifted cellist who gave up her career for marriage to scientist Norman Spencer (Harrison Ford) and motherhood. But when their daughter goes off to college, Claire finds herself struggling to cope, particularly since there are tensions in her marriage. Plus, she’s still recovering psychologically from a car accident the year before, of which she has no memory. When mysterious psychic disturbances begin to manifest, Claire is convinced the ghost of a young woman is haunting her; everyone else thinks she’s just dealing with delayed grief and trauma. Claire nonetheless slowly begins to uncover the truth about the mysterious presence and her accident—and that truth just might end up costing her life.

What Lies Beneath started out as a treatment for Steven Spielberg, who envisioned something along the lines of a ghost story equivalent to Close Encounters of the Third Kind—primarily about discovery and first contact, while also exploring the psychological state of a new empty nester. But Spielberg ultimately passed on the project and handed it over to director Robert Zemeckis, who turned it into a psychological thriller/ghost story with a Hitchcockian vibe. Those earlier elements remain, however, and the leisurely pacing helps develop Claire as a character and gives Pfeiffer a chance to show off her acting chops, not just her exquisite beauty. It’s broody and satisfying and a perennial seasonal favorite for a rewatch.

The Others (2001)

young girl, back to camera, dressed in white with a veil playing with a marionette

Credit: Dimension Films

This film might be director Alejandro Amenábar’s masterpiece, merging the sensibilities of arthouse cinema with mainstream movie-making. A young mother named Grace (Nicole Kidman) and her two children are living in a remote house on the Channel Island of Jersey, recently liberated from German occupation at the end of World War II. The house is kept in near darkness at all times because the children have a severe sensitivity to light. But there are disturbances in the house that Grace fears may be evidence of a haunting, and the three creepy new servants she hired seem to have ulterior motives for being there. And just who is buried in the small, overgrown cemetery on the grounds?

Much of the film’s success is due to Kidman’s incredibly disciplined, intense performance as the icily reserved, tightly wound Grace, whose gradual unraveling drives the plot. It’s a simple plot by design. All the complexity lies in the building tension and sense of oppressiveness, augmented by Amenábar’s claustrophobic sets and minimalist lighting of sepia-toned scenes. It all leads up to a chilling climax with an appropriately satisfying twist.

Crimson Peak (2015)

woman with long blonde hair in Gothic period dress holding a candelabra in a dark corridor

Credit: Universal Pictures

Guillermo del Toro has always had an extraordinary knack for lush visuals teeming with Gothic elements. The director went all in on the Gothic horror for this ghostly tale of a Victorian-era American heiress, Edith Cushing (Mia Wasikowska), who weds a handsome but impoverished English nobleman, Sir Thomas Sharpe (Tom Hiddleston). Edith finds herself living in his crumbling family mansion, which is definitely haunted. And Edith should know. She’s had ghostly visits from her dead mother since childhood, warning her to “beware of Crimson Peak,” so she’s sensitive to haunted vibes.

Edith really should have listened to her mother. Not only is Thomas strangely reluctant to consummate their marriage, but his sister, Lucille—played to perfection by Jessica Chastain—is openly hostile and might just be slipping a suspicious substance into Edith’s tea. Will Edith uncover the dark secret of Crimson Peak and escape a potentially terrible fate? Del Toro set out to put a modern twist on the classic haunted house genre, and he succeeded, drawing on several other films on this list for inspiration (The Haunting, The Innocents, and The Shining, specifically). But at its heart, Crimson Peak is pure del Toro: sinister, atmospheric, soaked in rich colors (and sometimes blood), with a spectacular payoff at the end.

A Ghost Story (2017)

young woman seated at a desk with a small figure draped in a sheet with eye holes cut out standing beside her

Credit: A24

This is probably the most unconventional approach to the genre on the list. Casey Affleck and Rooney Mara play a husband and wife known only as C and M, respectively, who have been at odds because M wants to move and C does not. Their house isn’t anything special—a small ranch-style affair in a semi-rural area—but it might be haunted.

One night, there is a mysterious bang, and the couple can’t locate the source when they search the house. Then C is killed in a car accident, his body covered with a sheet at the hospital morgue. C rises as a ghost, still wearing the sheet (now with two eyeholes) and makes his way back to the house, where he remains for a very long time, even long after M has moved out. (There’s also another ghost next door in a flowered sheet, waiting for someone it can no longer remember.)

There is almost no dialogue, Affleck spends most of the movie covered in a sheet, there is very little in the way of a musical soundtrack, and the entire film is shot in a 1.33:1 aspect ratio. Director David Lowery has said he made that choice because the film is “about someone trapped in a box for eternity, and I felt the claustrophobia of that situation could be amplified by the boxiness of the aspect ratio.” Somehow it all works. A Ghost Story isn’t about being scary; it’s a moody, poignant exploration of love lost—and it takes the audience to some conceptual spaces few films dare to tread.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


new-physical-attacks-are-quickly-diluting-secure-enclave-defenses-from-nvidia,-amd,-and-intel

New physical attacks are quickly diluting secure enclave defenses from Nvidia, AMD, and Intel


On-chip TEEs withstand rooted OSes but fall instantly to cheap physical attacks.

Trusted execution environments, or TEEs, are everywhere—in blockchain architectures, virtually every cloud service, and computing involving AI, finance, and defense contractors. It’s hard to overstate the reliance that entire industries have on three TEEs in particular: Confidential Compute from Nvidia, SEV-SNP from AMD, and SGX and TDX from Intel. All three come with assurances that confidential data and sensitive computing can’t be viewed or altered, even if a server has suffered a complete compromise of the operating kernel.

A trio of novel physical attacks raises new questions about the true security offered by these TEEs and the exaggerated promises and misconceptions coming from the big and small players using them.

The most recent attack, released Tuesday, is known as TEE.fail. It defeats the latest TEE protections from all three chipmakers. The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel. Once this three-minute attack is completed, Confidential Compute, SEV-SNP, and TDX/SGX can no longer be trusted. Unlike the Battering RAM and Wiretap attacks from last month—which worked only against CPUs using DDR4 memory—TEE.fail works against DDR5, allowing it to compromise the latest TEEs.

Some terms apply

All three chipmakers exclude physical attacks from threat models for their TEEs, also known as secure enclaves. Instead, assurances are limited to protecting data and execution from viewing or tampering, even when the OS kernel running on the processor has been compromised. None of the chipmakers make these carveouts prominent, and they sometimes provide confusing statements about the TEE protections offered.

Many users of these TEEs make public assertions about the protections that are flat-out wrong, misleading, or unclear. All three chipmakers and many TEE users focus on the suitability of the enclaves for protecting servers on a network edge, which are often located in remote locations, where physical access is a top threat.

“These features keep getting broken, but that doesn’t stop vendors from selling them for these use cases—and people keep believing them and spending time using them,” said HD Moore, a security researcher and the founder and CEO of runZero.

He continued:

Overall, it’s hard for a customer to know what they are getting when they buy confidential computing in the cloud. For on-premise deployments, it may not be obvious that physical attacks (including side channels) are specifically out of scope. This research shows that server-side TEEs are not effective against physical attacks, and even more surprising, Intel and AMD consider these out of scope. If you were expecting TEEs to provide private computing in untrusted data centers, these attacks should change your mind.

Those making these statements run the gamut from cloud providers to AI engines, blockchain platforms, and even the chipmakers themselves. Here are some examples:

  • Cloudflare says it’s using Secure Memory Encryption—the encryption engine driving SEV—to safeguard confidential data from being extracted from a server if it’s stolen.
  • In a post outlining the possibility of using the TEEs to secure confidential information discussed in chat sessions, Anthropic says the enclave “includes protections against physical attacks.”
  • Microsoft marketing (here and here) devotes plenty of ink to discussing TEE protections without ever noting the exclusion.
  • Meta, paraphrasing the Confidential Computing Consortium, says TEE security provides protections against malicious “system administrators, the infrastructure owner, or anyone else with physical access to the hardware.” SEV-SNP is a key pillar supporting the security of Meta’s WhatsApp Messenger.
  • Even Nvidia claims that its TEE security protects against “infrastructure owners such as cloud providers, or anyone with physical access to the servers.”
  • The maker of the Signal private messenger assures users that its use of SGX means that “keys associated with this encryption never leave the underlying CPU, so they’re not accessible to the server owners or anyone else with access to server infrastructure.” Signal has long relied on SGX to protect contact-discovery data.

I counted more than a dozen other organizations providing assurances that were similarly confusing, misleading, or false. Even Moore—a security veteran with more than three decades of experience—told me: “The surprising part to me is that Intel/AMD would blanket-state that physical access is somehow out of scope when it’s the entire point.”

In fairness, some TEE users build additional protections on top of the TEEs provided out of the box. Meta, for example, said in an email that the WhatsApp implementation of SEV-SNP uses protections that would block TEE.fail attackers from impersonating its servers. The company didn’t dispute that TEE.fail could nonetheless pull secrets from the AMD TEE.

The Cloudflare theft protection, meanwhile, relies on SME—the engine driving SEV-SNP encryption. The researchers didn’t directly test SME against TEE.fail. They did note that SME uses deterministic encryption, the cryptographic property that causes all three TEEs to fail. (More about the role of deterministic encryption later.)

Others who misstate the TEEs’ protections provide more accurate descriptions elsewhere. Given all the conflicting information, it’s no wonder there’s confusion.

How do you know where the server is? You don’t.

Many TEE users run their infrastructure inside cloud providers such as AWS, Azure, or Google, where protections against supply-chain and physical attacks are extremely robust. That raises the bar for a TEE.fail-style attack significantly. (Whether the services could be compelled by governments with valid subpoenas to attack their own TEE is not clear.)

All these caveats notwithstanding, there’s often (1) little discussion of the growing viability of cheap, physical attacks, (2) no evidence (yet) that implementations not vulnerable to the three attacks won’t fall to follow-on research, or (3) no way for parties relying on TEEs to know where the servers are running and whether they’re free from physical compromise.

“We don’t know where the hardware is,” Daniel Genkin, one of the researchers behind both TEE.fail and Wiretap, said in an interview. “From a user perspective, I don’t even have a way to verify where the server is. Therefore, I have no way to verify if it’s in a reputable facility or an attacker’s basement.”

In other words, parties relying on attestations from servers in the cloud are once again reduced to simply trusting other people’s computers. As Moore observed, solving that problem is precisely the reason TEEs exist.

In at least two cases, involving the blockchain services Secret Network and Crust, the loss of TEE protections made it possible for any untrusted user to present cryptographic attestations. Both platforms used the attestations to verify that a blockchain node operated by one user couldn’t tamper with the execution or data passing to another user’s nodes. The Wiretap hack on SGX made it possible for users to run the sensitive data and executions outside of the TEE altogether while still providing attestations to the contrary. In the AMD attack, the attacker could decrypt the traffic passing through the TEE.

Both Secret Network and Crust added mitigations after learning of the possible physical attacks with Wiretap and Battering RAM. Given the lack of clear messaging, other TEE users are likely making similar mistakes.

A predetermined weakness

The root cause of all three physical attacks is the choice of deterministic encryption. This form of encryption produces the same ciphertext each time the same plaintext is encrypted with the same key. A TEE.fail attacker can copy ciphertext strings and use them in replay attacks. (Probabilistic encryption, by contrast, resists such attacks because the same plaintext can encrypt to a wide range of ciphertexts that are randomly chosen during the encryption process.)
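Here is a minimal sketch of that distinction, using off-the-shelf AES modes as stand-ins rather than the chipmakers’ actual memory-encryption engines; it illustrates only the deterministic property, not their implementations.

```python
# Deterministic encryption: same key + same plaintext -> same ciphertext, so an
# attacker who records a ciphertext can recognize or replay it without the key.
# Probabilistic encryption mixes in fresh randomness, so repeats never match.
# Requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
block = b"secret enclave plaintext block!!"  # 32 bytes, block-aligned

def encrypt_deterministic(data: bytes) -> bytes:
    # ECB stands in for any mode with no nonce or randomness.
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(data) + enc.finalize()

def encrypt_probabilistic(data: bytes) -> bytes:
    # A fresh random nonce makes every ciphertext different.
    nonce = os.urandom(12)
    enc = Cipher(algorithms.AES(key), modes.GCM(nonce)).encryptor()
    return nonce + enc.update(data) + enc.finalize()

assert encrypt_deterministic(block) == encrypt_deterministic(block)  # replayable
assert encrypt_probabilistic(block) != encrypt_probabilistic(block)  # not replayable
```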

TEE.fail works not only against SGX but also against a more advanced Intel TEE known as TDX. The attack also defeats the protections provided by the latest Nvidia Confidential Compute and AMD SEV-SNP TEEs. Attacks against TDX and SGX can extract the Attestation Key, an ECDSA secret that certifies to a remote party that it’s running up-to-date software and can’t expose data or execution running inside the enclave. This Attestation Key is in turn signed by an Intel X.509 digital certificate providing cryptographic assurances that the ECDSA key can be trusted. TEE.fail works against all Intel CPUs currently supporting TDX and SGX.

With possession of the key, the attacker can use the compromised server to peer into data or tamper with the code flowing through the enclave and send the relying party an assurance that the device is secure. With this key, even CPUs built by other chipmakers can send an attestation that the hardware is protected by the Intel TEEs.
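A conceptual sketch of why a stolen signing key is so damaging follows; it uses a generic ECDSA signature from the Python cryptography library, not Intel’s actual quote format or verification flow. Once the key is in hand, any report the attacker signs with it passes the relying party’s signature check, because that check only proves the key was used, not that a genuine enclave produced the report.

```python
# Hypothetical illustration: an extracted attestation signing key lets an attacker
# sign an arbitrary "enclave report" that verifies like a genuine one.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

attestation_key = ec.generate_private_key(ec.SECP256R1())  # stands in for the extracted key

forged_report = b"enclave=trusted, debug=off (workload actually runs in the clear)"
signature = attestation_key.sign(forged_report, ec.ECDSA(hashes.SHA256()))

# The relying party holds only the public key (endorsed, in the real scheme, by a
# vendor certificate chain) and checks the signature; the forged report passes.
attestation_key.public_key().verify(signature, forged_report, ec.ECDSA(hashes.SHA256()))
print("forged attestation accepted")
```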

GPUs equipped with Nvidia Confidential Compute don’t bind attestation reports to the specific virtual machine protected by a specific GPU. TEE.fail exploits this weakness by “borrowing” a valid attestation report from a GPU run by the attacker and using it to impersonate the GPU running Confidential Compute. The protection is available on Nvidia’s H100/200 and B100/200 server GPUs.

“This means that we can convince users that their applications (think private chats with LLMs or Large Language Models) are being protected inside the GPU’s TEE while in fact it is running in the clear,” the researchers wrote on a website detailing the attack. “As the attestation report is ‘borrowed,’ we don’t even own a GPU to begin with.”

SEV-SNP (Secure Encrypted Virtualization-Secure Nested Paging) uses ciphertext hiding in AMD’s EPYC CPUs based on the Zen 5 architecture. AMD added it to prevent a previous attack known as Cipherleaks, which allowed malicious hypervisors to extract cryptographic keys stored in the enclaves of a virtual machine. Ciphertext hiding, however, doesn’t stop physical attacks. By reopening the side channel that Cipherleaks relies on, TEE.fail can steal OpenSSL credentials and other key material used by constant-time encryption implementations.

Cheap, quick, and the size of a briefcase

“Now that we have interpositioned DDR5 traffic, our work shows that even the most modern of TEEs across all vendors with available hardware is vulnerable to cheap physical attacks,” Genkin said.

The equipment required by TEE.fail is off-the-shelf gear that costs less than $1,000. One of the devices the researchers built fits into a 17-inch briefcase, so it can be smuggled into a facility housing a TEE-protected server. Once the physical attack is performed, the device does not need to be connected again. Attackers breaking TEEs on servers they operate have no need for stealth, allowing them to use a larger device, which the researchers also built.

A logic analyzer attached to an interposer.

The researchers demonstrated attacks against an array of services that rely on the chipmakers’ TEE protections. (For ethical reasons, the attacks were carried out against infrastructure that was identical to but separate from the targets’ networks.) Some of the attacks included BuilderNet, dstack, and Secret Network.

BuilderNet is a network of Ethereum block builders that uses TDX to prevent parties from snooping on others’ data and to ensure fairness and that proceeds are redistributed honestly. The network builds blocks valued at millions of dollars each month.

“We demonstrated that a malicious operator with an attestation key could join BuilderNet and obtain configuration secrets, including the ability to decrypt confidential orderflow and access the Ethereum wallet for paying validators,” the TEE.fail website explained. “Additionally, a malicious operator could build arbitrary blocks or frontrun (i.e., construct a new transaction with higher fees to ensure theirs is executed first) the confidential transactions for profit while still providing deniability.”

To date, the researchers said, BuilderNet hasn’t provided mitigations. Attempts to reach BuilderNet officials were unsuccessful.

dstack is a tool for building confidential applications that run on top of virtual machines protected by Nvidia Confidential Compute. The researchers used TEE.fail to forge attestations certifying that a workload was performed by the TDX using the Nvidia protection. It also used the “borrowed” attestations to fake ownership of GPUs that a relying party trusts.

Secret Network is a platform billing itself as the “first mainnet blockchain with privacy-preserving smart contracts,” in part by encrypting on-chain data and execution with SGX. The researchers showed that TEE.fail could extract the “Consensus Seed,” the primary network-side private key encrypting confidential transactions on the Secret Network. As noted, after learning of Wiretap, the Secret Network eliminated this possibility by establishing a “curated” allowlist of known, trusted nodes allowed on the network and suspended the acceptance of new nodes. Academic or not, the ability to replicate the attack using TEE.fail shows that Wiretap wasn’t a one-off success.

A tough nut to crack

As explained earlier, the root cause of all the TEE.fail attacks is deterministic encryption, which forms the basis for protections in all three chipmakers’ TEEs. This weaker form of encryption wasn’t always used in TEEs. When Intel initially rolled out SGX, the feature was put in client CPUs, not server ones, to prevent users from building devices that could extract copyrighted content such as high-definition video.

Those early versions encrypted no more than 256MB of RAM, a small enough space to use the much stronger probabilistic form of encryption. The TEEs built into server chips, by contrast, must often encrypt terabytes of RAM. Probabilistic encryption doesn’t scale to that size without serious performance penalties. Finding a solution that accommodates this overhead won’t be easy.

One mitigation over the short term is to ensure that each 128-bit block of ciphertext has sufficient entropy. Adding random plaintext to the blocks prevents ciphertext repetition. The researchers say the entropy can be added by building a custom memory layout that inserts a 64-bit counter with a random initial value into each 64-bit block before encrypting it.
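
To make both the weakness and the proposed fix concrete, here is a minimal Python sketch (an illustration under assumed details, not the researchers’ code; the 256-bit key, the AES-ECB stand-in for the memory-encryption engine, and the block layout are all simplifications). It shows how deterministic encryption lets repeated plaintext show up as repeated ciphertext, and how packing a counter with a random starting value next to each 64-bit data word breaks the repetition:

```python
# Illustrative sketch only: models deterministic memory encryption with AES-ECB
# and the proposed layout that pairs a 64-bit counter with each 64-bit data word.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # stand-in for the CPU's memory-encryption key (assumption)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

def encrypt_block(block16: bytes) -> bytes:
    # Deterministic, per-block encryption: identical input always yields identical output.
    assert len(block16) == 16
    return encryptor.update(block16)

words = [b"SECRETKEYMATERIA", b"SECRETKEYMATERIA"]  # the same 128-bit plaintext, twice

# Without added entropy, the repetition is visible directly in the ciphertext.
plain = [encrypt_block(w) for w in words]
print("repeats without entropy:", plain[0] == plain[1])  # True -> leaks patterns

# Proposed short-term mitigation: pack a 64-bit counter (random initial value) next to
# each 64-bit data word, so no two 128-bit plaintext blocks are ever identical.
counter = int.from_bytes(os.urandom(8), "big")
mitigated = []
for w in words:
    data_half = w[:8]                               # 64 bits of real data per block
    block = counter.to_bytes(8, "big") + data_half  # 64-bit counter + 64-bit data
    mitigated.append(encrypt_block(block))
    counter = (counter + 1) % (1 << 64)
print("repeats with counter:", mitigated[0] == mitigated[1])  # False
```

The sketch also makes the cost visible: with a counter occupying half of every 128-bit block, only half of the encrypted memory holds real data, which hints at why this is framed as a short-term mitigation rather than a free fix.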

The last countermeasure the researchers proposed is adding location verification to the attestation mechanism. While insider and supply chain attacks remain a possibility inside even the most reputable cloud services, strict policies make them much less feasible. Even those mitigations, however, don’t foreclose the threat of a government agency with a valid subpoena ordering an organization to run such an attack inside their network.

In a statement, Nvidia said:

NVIDIA is aware of this research. Physical controls in addition to trust controls such as those provided by Intel TDX reduce the risk to GPUs for this style of attack, based on our discussions with the researchers. We will provide further details once the research is published.

Intel spokesman Jerry Bryant said:

Fully addressing physical attacks on memory by adding more comprehensive confidentiality, integrity and anti-replay protection results in significant trade-offs to Total Cost of Ownership. Intel continues to innovate in this area to find acceptable solutions that offer better balance between protections and TCO trade-offs.

The company has published responses here and here reiterating that physical attacks are out of scope for both TDX and SGX.

AMD didn’t respond to a request for comment.

Stuck on Band-Aids

For now, TEE.fail, Wiretap, and Battering RAM remain persistent threats that aren’t addressed by the default implementations of the chipmakers’ secure enclaves. The most effective mitigation for the time being is for TEE users to understand the limitations and curb uses that the chipmakers say aren’t part of the TEE threat model. Secret Network’s tightening of requirements for operators joining the network is an example of such a mitigation.

Moore, the founder and CEO of runZero, said that companies with big budgets can rely on custom solutions built by larger cloud services. AWS, for example, makes use of the Nitro Card, which is built with ASICs that accelerate TEE processing. Google’s proprietary answer is Titanium.

“It’s a really hard problem,” Moore said. “I’m not sure what the current state of the art is, but if you can’t afford custom hardware, the best you can do is rely on the CPU provider’s TEE, and this research shows how weak this is from the perspective of an attacker with physical access. The enclave is really a Band-Aid or hardening mechanism over a really difficult problem, and it’s both imperfect and dangerous if compromised, for all sorts of reasons.”

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

New physical attacks are quickly diluting secure enclave defenses from Nvidia, AMD, and Intel Read More »

10m-people-watched-a-youtuber-shim-a-lock;-the-lock-company-sued-him-bad-idea.

10M people watched a YouTuber shim a lock; the lock company sued him. Bad idea.


It’s still legal to pick locks, even when you swing your legs.

“Opening locks” might not sound like scintillating social media content, but Trevor McNally has turned lock-busting into online gold. A former US Marine Staff Sergeant, McNally today has more than 7 million followers and has amassed more than 2 billion views just by showing how easy it is to open many common locks by slapping, picking, or shimming them.

This does not always endear him to the companies that make the locks.

On March 3, 2025, a Florida lock company called Proven Industries released a social media promo video just begging for the McNally treatment. The video was called, somewhat improbably, “YOU GUYS KEEP SAYING YOU CAN EASILY BREAK OFF OUR LATCH PIN LOCK.” In it, an enthusiastic man in a ball cap says he will “prove a lot of you haters wrong.” He then goes hard at Proven’s $130 model 651 trailer hitch lock with a sledgehammer, bolt cutters, and a crowbar.

Naturally, the lock hangs tough.

An Instagram user brought the lock to McNally’s attention by commenting, “Let’s introduce it to the @mcnallyofficial poke.” Someone from Proven responded, saying that McNally only likes “the cheap locks lol because they are easy and fast.” Proven locks were said to be made of sterner stuff.

But on April 3, McNally posted a saucy little video to social media platforms. In it, he watches the Proven promo video while swinging his legs and drinking a Juicy Juice. He then hops down from his seat, goes over to a Proven trailer hitch lock, and opens it in a matter of seconds using nothing but a shim cut from a can of Liquid Death. He says nothing during the entire video, which has been viewed nearly 10 million times on YouTube alone.

Despite the company having practically begged people to attempt this, Proven Industries owner Ron Lee contacted McNally on Instagram. “Just wanted to say thanks and be prepared!” he wrote. McNally took this as a threat.

(Oddly enough, Proven’s own homepage features a video in which the company trashes competing locks and shows just how easy it is to defeat them. And its news pages contain articles and videos on “The Hidden Flaws of Master Locks” and other brands. Why it got so upset about McNally’s video is unclear.)

The next day, Lee texted McNally’s wife. The text was apparently Lee’s attempt to de-escalate things; he says he thought the number belonged to McNally, and the message itself was unobjectionable. But after the “be prepared!” notice of the day before, and given the fact that Lee already knew how to contact him on Instagram, McNally saw the text as a way “to intimidate me and my family.” That feeling was cemented when McNally found out that Lee was a triple felon—and that in one case, Lee had hired someone “to throw a brick through the window of his ex-wife.”

Concerned about losing business, Lee kept trying to shut McNally down. Proven posted a “response video” on April 6 and engaged with numerous social media commenters, telling them that things were “going to get really personal” for McNally. Proven employees alleged publicly that McNally was deceiving people about all the prep work he had done to make a “perfectly cut out” shim. Without extensive experience, long prep work, and precise measurements, it was said, Proven’s locks were in little danger of being opened by rogue actors trying to steal your RV.

“Sucks to see how many people take everything they see online for face value,” one Proven employee wrote. “Sounds like a bunch of liberals lol.”

Proven also had its lawyers file “multiple” DMCA takedown notices against the McNally video, claiming that its use of Proven’s promo video was copyright infringement.

McNally didn’t bow to the pressure, though, instead uploading several more videos showing him opening Proven locks. In one of them, he takes aim at Proven’s claims about his prep work by retrieving a new lock from an Amazon delivery kiosk, taking it outside—and popping it in seconds using a shim he cuts right on camera, with no measurements, from an aluminum can.

On May 1, Proven filed a federal lawsuit against McNally in the Middle District of Florida, charging him with a huge array of offenses: (1) copyright infringement, (2) defamation by implication, (3) false advertising, (4) violating the Florida Deceptive and Unfair Trade Practices Act, (5) tortious interference with business relationships, (6) unjust enrichment, (7) civil conspiracy, and (8) trade libel. Remarkably, the claims stemmed from a video that all sides admit was accurate and in which McNally himself said nothing.

Screenshot of a social media exchange.

In retrospect, this was probably not a great idea.

Don’t mock me, bro

How can you defame someone without even speaking? Proven claimed “defamation by implication,” arguing that the whole setup of McNally’s videos was unfair to the company and its product. McNally does not show his prep work, which (Proven argued) conveys to the public the false idea that Proven’s locks are easy to bypass. While the shimming does work, Proven argued that it would be difficult for an untrained user to perform.

But what Proven really, really didn’t like was being mocked. McNally’s decision to drink—and shake!—a juice box on video comes up in court papers a mind-boggling number of times. Here’s a sample:

McNally appears swinging his legs and sipping from an apple juice box, conveying to the purchasing public that bypassing Plaintiff’s lock is simple, trivial, and even comical…

…showing McNally drinking from, and shaking, a juice box, all while swinging his legs, and displaying the Proven Video on a mobile device…

The tone, posture, and use of the juice box prop and childish leg swinging that McNally orchestrated in the McNally Video was intentional to diminish the perceived seriousness of Proven Industries…

The use of juvenile imagery, such as sipping from a juice box while casually applying the shim, reinforces the misleading impression that the lock is inherently insecure and marketed deceptively…

The video then abruptly shifts to Defendant in a childlike persona, sipping from a juice box and casually applying a shim to the lock…

In the end, Proven argued that the McNally video was “for commercial entertainment and mockery,” produced for the purpose of “humiliating Plaintiff.” McNally, it was said, “will not stop until he destroys Proven’s reputation.” Justice was needed. Expensive, litigious justice.

But the proverbially level-headed horde of Internet users does not always love it when companies file thermonuclear lawsuits against critics. Sometimes, in fact, the level-headed horde disregards everything taught by that fount of judicial knowledge, The People’s Court, and takes the law into its own hands.

Proven was soon the target of McNally fans. The company says it was “forced to disable comments on posts and product videos due to an influx of mocking and misleading replies furthering the false narrative that McNally conveyed to the viewers.” The company’s customer service department received such an “influx of bogus customer service tickets… that it is experiencing difficulty responding to legitimate tickets.”

Screenshot of a social media post from Proven Industries.

Proven was quite proud of its lawsuit… at first.

Someone posted Lee’s personal phone number to the comment section of a McNally video, which soon led to “a continuous stream of harassing phone calls and text messages from unknown numbers at all hours of the day and night,” which included “profanity, threats, and racially charged language.”

Lest this seem like mere high spirits and hijinks, Lee’s partner and his mother both “received harassing messages through Facebook Messenger,” while other messages targeted Lee’s son, saying things like “I would kill your f—ing n—– child” and calling him a “racemixing pussy.”

This is clearly terrible behavior; it also has no obvious connection to McNally, who did not direct or condone the harassment. As for Lee’s phone number, McNally said that he had nothing to do with posting it and wrote that “it is my understanding that the phone number at issue is publicly available on the Better Business Bureau website and can be obtained through a simple Google search.”

And this, with both sides palpably angry at each other, is how things stood on June 13 at 9:09 am, when the case got a hearing in front of the Honorable Mary Scriven, an extremely feisty federal judge in Tampa. Proven had demanded a preliminary injunction that would stop McNally from sharing his videos while the case progressed, but Proven had issues right from the opening gavel:

LAWYER 1: Austin Nowacki on behalf of Proven industries.

THE COURT: I’m sorry. What is your name?

LAWYER 1: Austin Nowacki.

THE COURT: I thought you said Austin No Idea.

LAWYER 2: That’s Austin Nowacki.

THE COURT: All right.

When Proven’s lead lawyer introduced a colleague who would lead that morning’s arguments, the judge snapped, “Okay. Then you have a seat and let her speak.”

Things went on this way for some time, as the judge wondered, “Did the plaintiff bring a lock and a beer can?” (The plaintiff did not.) She appeared to be quite disappointed when it was clear there would be no live shimming demonstration in the courtroom.

Then it was on to the actual arguments. Proven argued that the 15 seconds of its 90-second promo video used by McNally were not fair use, that McNally had defamed the company by implication, and that shimming its locks was actually quite difficult. Under questioning, however, one of Proven’s employees admitted that he had been able to duplicate McNally’s technique, leading to the question from McNally’s lawyer: “When you did it yourself, did it occur to you for one moment that maybe the best thing to do, instead of file a lawsuit, was to fix [the lock]?”

At the end of several hours of wrangling, the judge stepped in, saying that she “declines to grant the preliminary injunction motion.” For her to do so, Proven would have to show that it was likely to win at trial, among other things; it had not.

As for the big copyright infringement claim, of which Proven had made so much hay, the judge reached a pretty obvious finding: You’re allowed to quote snippets of copyrighted videos in order to critique them.

“The purpose and character of the use to which Mr. McNally put the alleged infringed work is transformative, artistic, and a critique,” said the judge. “He is in his own way challenging and critiquing Proven’s video by the use of his own video.”

As for the amount used, it was “substantial enough but no more than is necessary to make the point that he is trying to critique Proven’s video, and I think that’s fair game and a nominative fair use circumstance.”

While Proven might convince her otherwise after a full trial, “the copyright claim fails as a basis for a demand for preliminary injunctive relief.”

As for “tortious interference” and “defamation by implication,” the judge was similarly unimpressed.

“The fact that you might have a repeat customer who is dissuaded to buy your product due to a criticism of the product is not the type of business relationship the tortious interference with business relationship concept is intended to apply,” she said.

In the end, the judge said she would see the case through to its end, if that was really what everyone wanted, but “I will pray that you all come to a resolution of the case that doesn’t require all of this. This is a capitalist market and people say what they say. As long as it’s not false, they say what they say.”

She gave Proven until July 7 to amend its complaint if it wished.

On July 7, the company dismissed the lawsuit against McNally instead.

Proven also made a highly unusual request: Would the judge please seal almost the entire court record—including the request to seal?

Court records are presumptively public, but Proven complained about a “pattern of intimidation and harassment by individuals influenced by Defendant McNally’s content.” According to the company, a key witness had already backed out of the case, saying, “Is there a way to leave my name and my companies name out of this due to concerns of potential BLOW BACK from McNally or others like him?” Another witness, who did submit a declaration, wondered, “Is this going to be public? My concern is that there may be some backlash from the other side towards my company.”

McNally’s lawyer laid into this seal request, pointing out that the company had shown no concern over these issues until it lost its bid for a preliminary injunction. Indeed, “Proven boasted to its social media followers about how it sued McNally and about how confident it was that it would prevail. Proven even encouraged people to search for the lawsuit.” Now, however, the company “suddenly discover[ed] a need for secrecy.”

The judge has not yet ruled on the request to seal.

Another way

The strange thing about the whole situation is that Proven actually knew how to respond constructively to the first McNally video. Its own response video opened with a bit of humor (the presenter drinks a can of Liquid Death), acknowledged the issue (“we’ve had a little bit of controversy in the last couple days”), and made clear that Proven could handle criticism (“we aren’t afraid of a little bit of feedback”).

The video went on to show how its locks work and provided some context on shimming attacks and their likelihood of real-world use. It ended by showing how users concerned about shimming attacks could choose more expensive but more secure lock cores that should resist the technique.

Quick, professional, non-defensive—a great way to handle controversy.

But it was all blown apart by the company’s angry social media statements, which were unprofessional and defensive, and the litigation, which was spectacularly ill-conceived as a matter of both law and policy. In the end, the case became a classic example of the Streisand Effect, in which the attempt to censor information can instead call attention to it.

Judging from the number of times the lawsuit talks about 1) ridicule and 2) harassment, it seems like the case quickly became a personal one for Proven’s owner and employees, who felt either mocked or threatened. That’s understandable, but being mocked is not illegal and should never have led to a lawsuit or a copyright claim. As for online harassment, it remains a serious and unresolved issue, but launching a personal vendetta—and on pretty flimsy legal grounds—against McNally himself was patently unwise. (Doubly so given that McNally had a huge following and had already responded to DMCA takedowns by creating further videos on the subject; this wasn’t someone who would simply be intimidated by a lawsuit.)

In the end, Proven’s lawsuit likely cost the company serious time and cash—and generated little but bad publicity.

Photo of Nate Anderson

10M people watched a YouTuber shim a lock; the lock company sued him. Bad idea. Read More »

we-let-openai’s-“agent-mode”-surf-the-web-for-us—here’s-what-happened

We let OpenAI’s “Agent Mode” surf the web for us—here’s what happened


But when will it fold my laundry?

From scanning emails to building fansites, Atlas can ably automate some web-based tasks.

He wants us to write what about Tuvix? Credit: Getty Images

On Tuesday, OpenAI announced Atlas, a new web browser with ChatGPT integration, to let you “chat with a page,” as the company puts it. But Atlas also goes beyond the usual LLM back-and-forth with Agent Mode, a “preview mode” feature the company says can “get work done for you” by clicking, scrolling, and reading through various tabs.

“Agentic” AI is far from new, of course; OpenAI itself rolled out a preview of the web browsing Operator agent in January and introduced the more generalized “ChatGPT agent” in July. Still, prominently featuring this capability in a major product release like this—even in “preview mode”—signals a clear push to get this kind of system in front of end users.

I wanted to put Atlas’ Agent Mode through its paces to see if it could really save me time in doing the kinds of tedious online tasks I plod through every day. In each case, I’ll outline a web-based problem, lay out the Agent Mode prompt I devised to try to solve it, and describe the results. My final evaluation will rank each task on a 10-point scale, with 10 being “did exactly what I wanted with no problems” and one being “complete failure.”

Playing web games

The problem: I want to get a high score on the popular tile-sliding game 2048 without having to play it myself.

The prompt: “Go to play2048.co and get as high a score as possible.”

The results: While there’s no real utility to this admittedly silly task, a simple, no-reflexes-needed web game seemed like a good first test of the Atlas agent’s ability to interpret what it sees on a webpage and act accordingly. After all, if frontier-model LLMs like Google Gemini can beat a complex game like Pokémon, 2048 should pose no problem for a web browser agent.

To Atlas’ credit, the agent was able to quickly identify and close a tutorial link blocking the gameplay window and figure out how to use the arrow keys to play the game without any further help. When it came to actual gaming strategy, though, the agent started by flailing around, experimenting with looped sequences of moves like “Up, Left, Right, Down” and “Left and Down.”

Finally, a way to play 2048 without having to, y’know, play 2048. Credit: Kyle Orland

After a while, the random flailing settled down a bit, with the agent seemingly looking ahead for some simple strategies: “The board currently has two 32 tiles that aren’t adjacent, but I think I can align them,” the Activity summary read at one point. “I could try shifting left or down to make them merge, but there’s an obstacle in the form of an 8 tile. Getting to 64 requires careful tile movement!”

Frustratingly, the agent stopped playing after just four minutes, settling on a score of 356 even though the board was far from full. I had to prompt the agent a few more times to convince it to play the game to completion; it ended up with a total of 3164 points after 260 moves. That’s pretty similar to the score I was able to get in a test game as a 2048 novice, though expert players have reportedly scored much higher.

Evaluation: 7/10. The agent gets credit for being able to play the game competently without any guidance but loses points for having to be told to keep playing to completion and for a score that is barely on the level of a novice human.

Making a radio playlist

The problem: I want to transform the day’s playlist from my favorite Pittsburgh-based public radio station into an on-demand Spotify playlist.

The prompt: “Go to Radio Garden. Find WYEP and monitor the broadcast. For every new song you hear, identify the song and add it to a new Spotify playlist.”

The results: After trying and failing to find a track listing for WYEP on Radio Garden as requested, the Atlas agent smartly asked for approval to move on to wyep.org to continue the task. By the time I noticed this request, the link to wyep.org had been replaced in the Radio Garden tab with an ad for EVE Online, which the agent accidentally clicked. The agent quickly realized the problem and navigated to the WYEP website directly to fix it.

From there, the agent was able to scan the page and identify the prominent “Now Playing” text near the top (it’s unclear if it could ID the music simply via audio without this text cue). After asking me to log in to my Spotify account, the agent used the search bar to find the listed songs and added them to a new playlist without issue.

From radio stream to Spotify playlist in a single sentence. Credit: Kyle Orland

The main problem with this use case is the inherent time limitations. On the first try, the agent worked for four minutes and managed to ID and add just two songs that played during that time. When I asked it to continue for an hour, I got an error message blaming “technical constraints on session length” for stricter limits. Even when I asked it to continue for “as long as possible,” I only got three more minutes of song listings.

At one point, the Atlas agent suggested that “if you need ongoing updates, you can ask me again after a while and I can resume from where we left off.” And to the agent’s credit, when I went back to the tab hours later and told it to “resume monitoring,” I got four new songs added to my playlist.

Evaluation: 9/10. The agent was able to navigate multiple websites and interfaces to complete the task, even when unexpected problems got in the way. I took off a point only because I can’t just leave this running as a background task all day, even as I understand that use case would surely eat up untold amounts of money and processing power on OpenAI’s part.

Scanning emails

The problem: I need to go through my emails to create a reference spreadsheet with contact info for the many, many PR people who send me messages.

The prompt: “Look through all my Ars Technica emails from the last week. Collect all the contact information (name, email address, phone number, etc.) for PR contacts contained in those emails and add them to a new Google Sheets spreadsheet.”

The results: Without being explicitly guided, the Atlas agent was able to realize that I use Gmail, and it could differentiate between the personal email account and professional Ars Technica accounts I had open in separate tabs. As the Atlas agent started scanning my Ars mailbox, though, I saw a prominent warning overlaid on the page: “Sensitive: ChatGPT will only work while you view the tab.” That kind of ruined the point, since I wanted Atlas to handle this for me while I do other stuff online, but I guess I could still play a Steam Deck game while I waited.

Just a few of the many, many PR people who email me in a week.

After searching for “after: 2025/10/14 before: 2025/10/22 PR” in Gmail (mirroring the kind of search I would have used for this task), the Atlas agent clicked through each email, scrolling through to look for names, emails, and phone numbers (and also the relevant company name, which I didn’t ask for explicitly).

With seven minutes of background work, the agent was able to open up a new Google Sheet and quickly enter 12 rows of well-formatted data for 12 different PR contacts. Unfortunately, it stopped well before it had a chance to go through all 164 emails returned by that initial Gmail search.

Evaluation: 8/10. Again, the “technical constraints on session length” seem to be getting in the way of finishing a task, forcing me to deduct a couple of points for incompleteness.

Wiki editing

The problem: I need the world to know that Tuvix was murdered by Captain Janeway in the Star Trek: Voyager episode “Tuvix.”

The prompt: “Go to the Fandom Wiki page for Tuvix. Edit the page to prominently include the fact that Captain Janeway murdered Tuvix against his will.”

The results: Before the agent could even get going, I was informed, “I’m sorry, but I can’t help with editing or vandalising wiki pages (including the Fandom wiki) in a way that misrepresents them or forces a biased viewpoint.”

This wording implied that the agent can edit wiki pages with neutral or strictly informative additions, so I asked Atlas what sort of wording would be more acceptable. The agent responded helpfully that we might write, “Tuvix was later separated back into Tuvok and Neelix, resulting in Tuvix’s death.” But when I asked it to add that very line to the Fandom Wiki, I got a much more direct refusal: “I’m sorry, but I can’t make edits directly to external websites or wikis.”

Evaluation: N/A. On the one hand, the Atlas agent won’t do my Tuvix-based web activism for me. On the other hand, it’s probably better for all of us that Atlas refuses to automate this kind of public web defacement by default.

Making a fan page

The problem: People online still need to know about Janeway’s murder of Tuvix!

The prompt: “Go to NeoCities and create a fan site for the Star Trek character Tuvix. Make sure it has lots of images and fun information about Tuvix and that it makes it clear that Tuvix was murdered by Captain Janeway against his will.”

The results: You can see them for yourself right here. After a brief pause so I could create and log in to a new Neocities account, the Atlas agent was able to generate this humble fan page in just two minutes after aggregating information from a wide variety of pages like Memory Alpha and TrekCore. “The Hero Starfleet Murdered” and “Justice for Tuvix” headers are nice touches, but the actual text is much more mealy-mouthed about the “intense debate” and “ethical dilemmas” around what I wanted to make clear was clearly premeditated murder.

Justice for Tuvix! Credit: Kyle Orland

The agent also had a bit of trouble with the request for images. Instead of downloading some Tuvix pictures and uploading copies to Neocities (which I’m not entirely sure Atlas can do on its own), the agent decided to directly reference images hosted on external servers, which is usually a big no-no in web design. The agent did notice when these external image links failed to work, saying that it would “need to find more accessible images from reliable sources,” but it failed to even attempt that before stopping its work on the task.

Evaluation: 7/10. Points for building a passable Web 1.0 fansite relatively quickly, but the weak prose and broken images cost it some execution points here.

Picking a power plan

The problem: Ars Senior Technology Editor Lee Hutchinson told me he needs to go through the annoying annual process of selecting a new electricity plan “because Texas is insane.”

The prompt: “Go to powertochoose.org and find me a 12–24 month contract that prioritizes an overall low usage rate. I use an average of 2,000 KWh per month. My power delivery company is Texas New-Mexico Power (“TNMP”) not Centerpoint. My ZIP code is [redacted]. Please provide the ‘fact sheet’ for any and all plans you recommend.”

The results: After spending eight minutes fiddling with the site’s search parameters and seemingly getting repeatedly confused about how to sort the results by the lowest rate, the Atlas agent spit out a recommendation to read this fact sheet, which it said “had the best average prices at your usage level. The ‘Bright Nights’ plans are time‑of‑use offers that provide free electricity overnight and charge a higher rate during the day, while the ‘Digital Saver’ plan is a traditional fixed‑rate contract.”

If Ars’ Lee Hutchinson never has to use this web site again, it will be too soon. Credit: Power to Choose

Since I don’t know anything about the Texas power market, I passed this information on to Lee, who had this to say: “It’s not a bad deal—it picked a fixed rate plan without being asked, which is smart (variable rate pricing is how all those poor people got stuck with multi-thousand dollar bills a few years back in the freeze). It’s not the one I would have picked due to the weird nighttime stuff (if you don’t meet that exact criteria, your $/kWh will be way worse) but it’s not a bad pick!”

Evaluation: 9/10. As Lee puts it, “it didn’t screw up the assignment.”

Downloading some games

The problem: I want to download some recent Steam demos to see what’s new in the gaming world.

The prompt: “Go to Steam and find the most recent games with a free demo available for the Mac. Add all of those demos to my library and start to download them.”

The results: Rather than navigating to the “Free Demos” category, the Atlas agent started by searching for “demo.” After eventually finding the macOS filter, it wasted minutes and minutes looking for a “has demo” filter, even though the search for the word “demo” already narrowed it down.

This search results page was about as far as the Atlas agent was able to get when I asked it for game demos. Credit: Kyle Orland

After a long while, the agent finally clicked the top result on the page, which happened to be visual novel Project II: Silent Valley. But even though there was a prominent “Download Demo” link on that page, the agent became concerned that it was on the Steam page for the full game and not a demo. It backed up to the search results page and tried again.

After watching some variation of this loop for close to ten minutes, I stopped the agent and gave up.

Evaluation: 1/10. It technically found some macOS game demos but utterly failed to even attempt to download them.

Final results

Across six varied web-based tasks (I left out the Wiki vandalism from my summations), the Atlas agent scored a median of 7.5 points (and a mean of 6.83 points) on my somewhat subjective 10-point scale. That’s honestly better than I expected for a “preview mode” feature that is still obviously being tested heavily by OpenAI.

In my tests, Atlas was generally able to correctly interpret what was being asked of it and was able to navigate and process information on webpages carefully (if slowly). The agent was able to navigate simple web-based menus and get around unexpected obstacles with relative ease most of the time, even as it got caught in infinite loops other times.

The major limiting factor in many of my tests continues to be the “technical constraints on session length” that seem to limit most tasks to a few minutes. Given how long it takes the Atlas agent to figure out where to click next—and the repetitive nature of the kind of tasks I’d want a web-agent to automate—this severely limits its utility. A version of the Atlas agent that could work indefinitely in the background would have scored a few points better on my metrics.

All told, Atlas’ “Agent Mode” isn’t yet reliable enough to use as a kind of “set it and forget it” background automation tool. But for simple, repetitive tasks that a human can spot-check afterward, it already seems like the kind of tool I might use to avoid some of the drudgery in my online life.

Photo of Kyle Orland

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.

We let OpenAI’s “Agent Mode” surf the web for us—here’s what happened Read More »

google-has-a-useful-quantum-algorithm-that-outperforms-a-supercomputer

Google has a useful quantum algorithm that outperforms a supercomputer


An approach it calls “quantum echoes” takes 13,000 times longer on a supercomputer.

The work relied on Google’s current-generation quantum hardware, the Willow chip. Credit: Google

A few years back, Google made waves when it claimed that some of its hardware had achieved quantum supremacy, performing operations that would be effectively impossible to simulate on a classical computer. That claim didn’t hold up especially well, as mathematicians later developed methods to help classical computers catch up, leading the company to repeat the work on an improved processor.

While this back-and-forth was unfolding, the field became less focused on quantum supremacy and more on two additional measures of success. The first is quantum utility, in which a quantum computer performs computations that are useful in some practical way. The second is quantum advantage, in which a quantum system completes calculations in a fraction of the time it would take a typical computer. (IBM and a startup called Pasqal have published a useful discussion about what would be required to verifiably demonstrate a quantum advantage.)

Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.

Out of time

Google’s latest effort centers on something it’s calling “quantum echoes.” The approach could be described as a series of operations on the hardware qubits that make up its machine. These qubits hold a single bit of quantum information in a superposition between two values, with probabilities of finding the qubit in one value or the other when it’s measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google’s, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).

For quantum echoes, the operations involved performing a set of two-qubit gates, altering the state of the system, and later performing the reverse set of gates. On its own, this would return the system to its original state. But for quantum echoes, Google inserts single-qubit gates performed with a randomized parameter. This alters the state of the system before the reverse operations take place, ensuring that the system won’t return to exactly where it started. That explains the “echoes” portion of the name: You’re sending an imperfect copy back toward where things began, much like an echo involves the imperfect reversal of sound waves.

That’s what the process looks like in terms of operations performed on the quantum hardware. But it’s probably more informative to think of it in terms of a quantum system’s behavior. As Google’s Tim O’Brien explained, “You evolve the system forward in time, then you apply a small butterfly perturbation, and then you evolve the system backward in time.” The forward evolution is the first set of two-qubit gates, the small perturbation is the randomized one-qubit gate, and the second set of two-qubit gates is the equivalent of sending the system backward in time.

Because this is a quantum system, however, strange things happen. “On a quantum computer, these forward and backward evolutions, they interfere with each other,” O’Brien said. One way to think about that interference is in terms of probabilities. The system has multiple paths between its start point and the point of reflection—where it goes from evolving forward in time to evolving backward—and from that reflection point back to a final state. Each of those paths has a probability associated with it. And since we’re talking about quantum mechanics, those paths can interfere with each other, increasing some probabilities at the expense of others. That interference ultimately determines where the system ends up.

(Technically, these are termed “out of time order correlations,” or OTOCs. If you read the Nature paper describing this work, prepare to see that term a lot.)
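
For readers who want the standard notation, a textbook out-of-time-order correlator (this is the generic definition, not a formula reproduced from the Nature paper) is usually written as

C(t) = \langle \hat{W}^\dagger(t)\, \hat{V}^\dagger\, \hat{W}(t)\, \hat{V} \rangle, \qquad \hat{W}(t) = e^{iHt}\, \hat{W}\, e^{-iHt}

where \hat{V} is the local “butterfly” perturbation, \hat{W}(t) is an operator evolved forward and then backward in time under the system’s Hamiltonian H, and the decay of C(t) tracks how far the perturbation has scrambled through the system.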

Demonstrating advantage

So how do you turn quantum echoes into an algorithm? On its own, a single “echo” can’t tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it’s easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.

This is also where Google’s quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.
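
That ratio is where the roughly 13,000x figure comes from: 3.2 years is about 3.2 × 8,766 ≈ 28,000 hours, and 28,000 / 2.1 ≈ 13,000, so the classical simulation takes on the order of 13,000 times longer than the quantum hardware’s 2.1-hour run.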

But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don’t view algorithms as modeling the behavior of the underlying hardware they’re being run on; instead, they’re meant to model some other physical system we’re interested in. That’s where Google’s announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.

That system is a small molecule in a Nuclear Magnetic Resonance (NMR) machine. In a second draft paper being published on the arXiv later today, Google has collaborated with a large collection of NMR experts to explore that use.

From computers to molecules

NMR is based on the fact that the nucleus of every atom has a quantum property called spin. When nuclei are held near to each other, such as when they’re in the same molecule, these spins can influence one another. NMR uses magnetic fields and photons to manipulate these spins and can be used to infer structural details, like how far apart two given atoms are. But as molecules get larger, these spin networks can extend for greater distances and become increasingly complicated to model. So NMR has been limited to focusing on the interactions of relatively nearby spins.

For this work, though, the researchers figured out how to use an NMR machine to create the physical equivalent of a quantum echo in a molecule. The work involved synthesizing the molecule with a specific isotope of carbon (carbon-13) in a known location in the molecule. That isotope could be used as the source of a signal that propagates through the network of spins formed by the molecule’s atoms.

“The OTOC experiment is based on a many-body echo, in which polarization initially localized on a target spin migrates through the spin network, before a Hamiltonian-engineered time-reversal refocuses to the initial state,” the team wrote. “This refocusing is sensitive to perturbations on distant butterfly spins, which allows one to measure the extent of polarization propagation through the spin network.”

Naturally, something this complicated needed a catchy nickname. The team came up with TARDIS, or Time-Accurate Reversal of Dipolar InteractionS. While that name captures the “out of time order” aspect of OTOC, it’s simply a set of control pulses sent to the NMR sample that starts a perturbation of the molecule’s network of nuclear spins. A second set of pulses then reflects an echo back to the source.

The reflections that return are imperfect, with noise coming from two sources. The first is simply imperfections in the control sequence, a limitation of the NMR hardware. But the second is the influence of fluctuations happening in distant atoms along the spin network. These happen at a certain frequency at random, or the researchers could insert a fluctuation by targeting a specific part of the molecule with randomized control signals.

The influence of what’s going on in these distant spins could allow us to use quantum echoes to tease out structural information at greater distances than we currently do with NMR. But to do so, we need an accurate model of how the echoes will propagate through the molecule. And again, that’s difficult to do with classical computations. But it’s very much within the capabilities of quantum computing, which the paper demonstrates.

Where things stand

For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently unobtainable using NMR. In the paper’s discussion section, they list a lot of potential upsides that should be explored, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.

The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O’Brien estimated that the hardware’s fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.

The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there’s unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn’t take that as a given.

The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it’s hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn’t one of those, so we’ll need another quantum computer to verify the behavior Google has described.

But Google told Ars nothing is up to the task yet. “No other quantum processor currently matches both the error rates and number of qubits of our system, so our quantum computer is the only one capable of doing this at present,” the company said. (For context, Google says that the algorithm was run on up to 65 qubits, but the chip has 105 qubits total.)

There’s a good chance that other companies would disagree with that contention, but it hasn’t been possible to ask them ahead of the paper’s release.

In any case, even if this claim proves controversial, Google’s Michel Devoret, a recent Nobel winner, hinted that we shouldn’t have long to wait for additional ones. “We have other algorithms in the pipeline, so we will hopefully see other interesting quantum algorithms,” Devoret said.

Nature, 2025. DOI: 10.1038/s41586-025-09526-6  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Google has a useful quantum algorithm that outperforms a supercomputer Read More »

macbook-pro:-apple’s-most-awkward-laptop-is-the-first-to-show-off-apple-m5

MacBook Pro: Apple’s most awkward laptop is the first to show off Apple M5


the apple m5: one more than m4

Apple M5 trades blows with Pro and Max chips from older generations.

Apple’s M5 MacBook Pro. Credit: Andrew Cunningham

When I’m asked to recommend a Mac laptop for people, Apple’s low-end 14-inch MacBook Pro usually gets lost in the shuffle. It competes with the 13- and 15-inch MacBook Air, significantly cheaper computers that meet or exceed the “good enough” boundary for the vast majority of computer users. The basic MacBook Pro also doesn’t have the benefit of Apple’s Pro or Max-series chips, which come with many more CPU cores, substantially better graphics performance, and higher memory capacity for true professionals and power users.

But the low-end Pro makes sense for a certain type of power user. At $1,599, it’s the cheapest way to get Apple’s best laptop screen, with mini LED technology, a higher 120 Hz ProMotion refresh rate for smoother scrolling and animations, and the optional but lovely nano-texture (read: matte) finish. Unlike the MacBook Air, it comes with a cooling fan, which has historically meant meaningfully better sustained performance and less performance throttling. And it’s also Apple’s cheapest laptop with three Thunderbolt ports, an HDMI port, and an SD card slot, all genuinely useful for people who want to plug lots of things in without having multiple dongles or a bulky dock competing for the Air’s two available ports.

If you don’t find any of those arguments in the basic MacBook Pro’s favor convincing, that’s fine. The new M5 version makes almost no changes to the laptop other than the chip, so it’s unlikely to change your calculus if you already looked at the M3 or M4 version and passed it up. But it is the first Mac to ship with the M5, the first chip in Apple’s fifth-generation chip family and a preview of what’s to come for (almost?) every other Mac in the lineup. So you can at least be interested in the 14-inch MacBook Pro as a showcase for a new processor, if not as a retail product in and of itself.

The Apple Silicon MacBook Pro, take five

Apple has been using this laptop design for about four years now, since it released the M1 Pro and M1 Max versions of the MacBook Pro in late 2021. But for people who are upgrading from an older design—Apple did use the old Intel-era design, Touch Bar and all, for the low-end M1 and M2 MacBook Pros, after all—we’ll quickly hit the highlights.

This basic MacBook Pro only comes in a 14-inch screen size, up from 13 inches for the old low-end MacBook Pro, but some of that space is eaten up by the notch across the top of the display. The strips of screen on either side of the notch are usable by macOS, but only for the menu bar and icons that live in the menu bar—it’s a no-go zone for apps. The laptop is a consistent thickness throughout, rather than tapered, and has somewhat more squared-off and less-rounded corners.

Compared to the 13-inch MacBook Pro, the 14-inch version is the same thickness, but it’s a little heavier (3.4 pounds, compared to 3), wider, and deeper. For most professional users, the extra screen size and the re-addition of the HDMI port and SD card slot mostly justify the slight bump up in size. The laptop also includes three Thunderbolt ports—up from two in the MacBook Airs—and the resurrected MagSafe charging port. But it is worth noting that the 14-inch MacBook Pro is nearly identical in weight to the 15-inch MacBook Air. If screen size is all you’re after, the Air may still be the better choice.

Apple’s included charger uses MagSafe on the laptop end, but USB-C chargers, docks, monitors, and other accessories will continue to charge the laptop if that’s what you prefer to keep using.

I’ve got no gripes about Apple’s current laptop keyboard—Apple uses the same key layout, spacing, and size across the entire MacBook Air and Pro line, though if I had to distinguish between the Pro and Air, I’d say the Pro’s keyboard is very, very slightly firmer and more satisfying to type on and that the force feedback of its trackpad is just a hair more clicky. The laptop’s speaker system is also more impressive than either MacBook Air, with much bassier bass and a better dynamic range.

But the main reason to prefer this low-end Pro to the Air is the screen, particularly the 120 Hz ProMotion support, the improved brightness and contrast of the mini LED display technology, and the option to add Apple’s matte nano texture finish. I usually don’t mind the amount of glare coming off my MacBook Air’s screen too much, but every time I go back to using a nano-texture screen I’m always a bit jealous of the complete lack of glare and reflections and the way you get those benefits without dealing with the dip in image quality you see from many matte-textured screen protectors. The more you use your laptop outdoors or under lighting conditions you can’t control, the more you’ll appreciate it.

The optional nano texture display adds a pleasant matte finish to the screen, but that notch is still notching. Credit: Andrew Cunningham

If the higher refresh rate and the optional matte coating (a $150 upgrade on top of an already pricey computer) don’t appeal to you, or if you can’t pay for them, then you can be pretty confident that this isn’t the MacBook for you. The 13-inch Air is lighter, the 15-inch Air is larger, and both are cheaper. But we’re still only a couple of years past the M2 version of the low-end MacBook Pro, which didn’t give you the extra ports or the Pro-level screen.

But! Before you buy one of the still-M4-based MacBook Airs, our testing of the MacBook Pro’s new M5 chip should give you some idea of whether it’s worth waiting a few months (?) for an Air refresh.

Testing Apple’s M5

We’ve also run some M5 benchmarks as part of our M5 iPad Pro review, but having macOS rather than iPadOS running on top of it does give us a lot more testing flexibility—more benchmarks and a handful of high-end games to run, plus access to the command line for taking a look at power usage and efficiency.

To back up and re-state the chip’s specs for a moment, though, the M5 is constructed out of the same basic parts as the M4: four high-performance CPU cores, six high-efficiency CPU cores (up from four in the M1/M2/M3), 10 GPU cores, and a 16-core Neural Engine for handling some machine-learning and AI workloads.

The M5’s technical improvements are more targeted and subtle than just a boost to clock speeds or core counts. The first is a 27.5 percent increase in memory bandwidth, from the 120 GB/s of the M4 to 153 GB/s (achieved, I’m told, by a combination of faster RAM and improvements to the memory fabric that facilitates communication between different areas of the chip). Integrated GPUs are usually bottlenecked by memory bandwidth first and core count second, so memory bandwidth improvements can have a pretty direct, linear impact on graphics performance.

Apple also says it has added a “Neural Accelerator” to each of its GPU cores, separate from the Neural Engine. These will benefit a few specific types of workloads—things like MetalFX graphics upscaling or frame generation that would previously have had to use the Neural Engine can now do that work entirely within the GPU, eliminating a bit of latency and freeing the Neural Engine up to do other things. Apple is also claiming “over 4x peak GPU compute compared to M4,” which Apple says will speed up locally run AI language models and image generation software. That figure is coming mostly from the GPU improvements; according to Geekbench AI, the Neural Engine itself is only around 10 percent faster than the one on the M4.

(A note about testing: The M4 chip in these charts was in an iMac and not a MacBook Pro. But over several hardware generations, we’ve observed that the actively cooled versions of the basic M-series chips perform the same in both laptops and desktops. Comparing the M5 to the passively cooled M4 in the MacBook Air isn’t apples to apples, but comparing it to the M4 in the iMac is.)

Each of Apple’s chip generations has improved over the previous one by low-to-mid double-digit percentages, and the M5 is no different. We measured a 12 to 16 percent improvement over the M4 in single-threaded CPU tests, a 20 to 30 percent improvement in multicore tests, and roughly a 40 percent improvement in graphics benchmarks and in the built-in benchmark of the Mac version of Cyberpunk 2077 (one benchmark, the GPU-based version of the Blender rendering benchmark, measured a larger 60 to 70 percent improvement for the M5’s GPU, suggesting it benefits more than most apps from either the memory bandwidth improvements or the new neural accelerators).

Those performance additions add up over time. The M5 is typically a little over twice as fast as the M1, and it comes close to the performance level of some Pro and Max processors from past generations.

The M5 MacBook Pro falls short of the M4 Pro, and it will fall even further short of the M5 Pro whenever that chip arrives. But its CPU performance generally beats the M3 Pro in our tests, and its GPU performance comes pretty close. Its multicore CPU performance beats the M1 Max, and its single-core performance is over 80 percent faster. The M5 can’t come close to the graphics performance of any of these older Max or Ultra chips, but if you’re doing primarily CPU-heavy work and don’t need more than 32GB of RAM, the M5 holds up astonishingly well to Apple’s high-end silicon from just a few years ago.

It wasn’t so long ago that this kind of performance improvement was more-or-less normal across the entire tech industry, but Intel, AMD, and Nvidia’s consumer CPUs and GPUs have really slowed their rate of improvement lately, and Intel and AMD are both guilty of re-using old silicon for entry-level chips, over and over again. If you’re using a 6- or 7-year-old PC, sure, you’ll see performance improvements from something new, but it’s more of a crapshoot for a 3- to 4-year-old PC.

If there’s a downside to the M5 in our testing, it’s that its performance improvements seem to come with increased power draw relative to the M4 when all the CPU cores are engaged in heavy lifting. According to macOS’s built-in powermetrics tool, the M5 drew an average of 28 W in our Handbrake video encoding test, compared to around 17 W for the M4 running the same test.

Using software tools to compare power draw between different chip manufacturers or even chip generations is dicey, because you’re trusting that different hardware is reporting its power use to the operating system in similar ways. But assuming they’re accurate, these numbers suggest that Apple could be pushing clock speeds more aggressively this generation to squeeze more performance out of the chip.
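If you want to take this kind of measurement yourself, the sketch below shows one way to do it: kick off a workload (a Handbrake encode, say) in another window, sample power with powermetrics, and average the readings. This is an illustration rather than our exact methodology, and the output label it parses (“CPU Power”) can differ between Apple Silicon and Intel Macs and across macOS versions.

    # Minimal sketch: average the CPU power reported by macOS's powermetrics tool.
    # Requires sudo; output labels may vary by chip and macOS version.
    import re
    import subprocess

    SAMPLES = 30  # one reading per second for 30 seconds
    POWER_LINE = re.compile(r"CPU Power:\s+(\d+)\s*mW")  # adjust if your output uses a different label

    def average_cpu_power_mw(samples: int = SAMPLES) -> float:
        """Run powermetrics and return the mean reported CPU power in milliwatts."""
        out = subprocess.run(
            ["sudo", "powermetrics", "--samplers", "cpu_power", "-i", "1000", "-n", str(samples)],
            capture_output=True, text=True, check=True,
        ).stdout
        readings = [int(m.group(1)) for m in POWER_LINE.finditer(out)]
        if not readings:
            raise RuntimeError("No power readings found; check powermetrics' output format.")
        return sum(readings) / len(readings)

    if __name__ == "__main__":
        print(f"Average CPU power: {average_cpu_power_mw() / 1000:.1f} W")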

This would make some sense, since the third-generation 3nm TSMC manufacturing process used for the M5 (likely N3P) looks like a fairly mild upgrade from the second-generation 3nm process used for the M4 (N3E). TSMC says that N3P can boost performance by 5 percent at the same power use compared to N3E, or reduce power draw by 5 to 10 percent at the same performance. To get to the larger double-digit performance improvements that Apple is claiming and that we measured in our testing, you’d definitely expect to see the overall power consumption increase.

To put the M5 in context, the M2 and the M3 came somewhat closer to its average power draw in our video encoding test (23.2 W and 22.7 W, respectively), and the M5’s power draw is still much lower than that of any past-generation Pro or Max chip. In terms of the energy used to complete the same task, the M5’s efficiency is worse than the M4’s according to powermetrics, but better than that of older generations. And Apple’s performance and power efficiency remain well ahead of what Intel or AMD can offer in their high-end products.
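One note on what “efficiency” means here: the relevant quantity is energy per task, which is average power multiplied by how long the task takes, so a chip that draws more power can still be more efficient if it finishes enough sooner. The encode durations below are hypothetical placeholders chosen only to illustrate the arithmetic, not our measured results.

    # Energy per task = average power (W) x time to finish (s); fewer joules = more efficient.
    # Durations are hypothetical placeholders, not measured Handbrake results.
    def energy_joules(avg_power_watts: float, duration_seconds: float) -> float:
        return avg_power_watts * duration_seconds

    m4 = energy_joules(17, 600)  # 17 W for a hypothetical 10-minute encode -> 10,200 J
    m5 = energy_joules(28, 480)  # 28 W for a hypothetical 8-minute encode  -> 13,440 J
    print(f"M4: {m4:.0f} J, M5: {m5:.0f} J")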

Impressive chip, awkward laptop

The low-end MacBook Pro has always occupied an odd in-between place in Apple’s lineup, overlapping in a lot of places with the MacBook Air and without the benefit of the much-faster chips that the 15- and 16-inch MacBook Pros could fit. The M5 MacBook Pro carries on that complicated legacy, and even with the M5 there are still lots of people for whom one of the M4 MacBook Airs is just going to be a better fit.

But it is a very nice laptop, and if your screen is the most important part of your laptop, this low-end Pro does make a decent case for itself. It’s frustrating that the matte display is a $150 upcharge, but it’s an option you can’t get on an Air, and the improved display panel and faster ProMotion refresh rate make scrolling and animations all look smoother and more fluid than they do on an Air’s screen. I still mostly think that this is a laptop without a huge constituency—too much more expensive than the Air, too much slower than the other Pros—but the people who buy it for the screen should still be mostly happy with the performance and ports.

This MacBook Pro is more exciting to me as a showcase for the Apple M5—and I’m excited to see the M5 and its higher-end Pro, Max, and (possibly) Ultra relatives show up in other Macs.

The M5 sports the highest sustained power draw of any M-series chip we’ve tested, but Apple’s past generations (the M4 in particular) have been so efficient that Apple has some room to bump up power consumption while remaining considerably more efficient than anything its competitors are offering. What you get in exchange is an impressively fast chip, as good as or better than many of the Pro or Max chips in previous-generation products. For anyone still riding out the tail end of the Intel era, or for people with M1-class Macs that are showing their age, the M5 is definitely fast enough to feel like a real upgrade. That’s harder to come by in computing than it used to be.

The good

  • M5 is a solid performer that shows how far Apple has come since the M1.
  • Attractive, functional design, with a nice keyboard and trackpad, great-sounding speakers, a versatile selection of ports, and Apple’s best laptop screen.
  • Optional nano-texture display finish looks lovely and eliminates glare.

The bad

  • Harder to recommend than Apple’s other laptops if you don’t absolutely require a ProMotion screen.
  • A bit heavier than other laptops in its size class (and barely lighter than the 15-inch MacBook Air).
  • M5 can use more power than M4 did.

The ugly

  • High price for RAM and storage upgrades, and a $150 upsell for the nano-textured display.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

MacBook Pro: Apple’s most awkward laptop is the first to show off Apple M5 Read More »

should-an-ai-copy-of-you-help-decide-if-you-live-or-die?

Should an AI copy of you help decide if you live or die?

“It would combine demographic and clinical variables, documented advance-care-planning data, patient-recorded values and goals, and contextual information about specific decisions,” he said.

“Including textual and conversational data could further increase a model’s ability to learn why preferences arise and change, not just what a patient’s preference was at a single point in time,” Starke said.

Ahmad suggested that future research could focus on validating fairness frameworks in clinical trials, evaluating moral trade-offs through simulations, and exploring how cross-cultural bioethics can be combined with AI designs.

Only then might AI surrogates be ready to be deployed, but only as “decision aids,” Ahmad wrote. Any “contested outputs” should automatically “trigger [an] ethics review,” Ahmad wrote, concluding that “the fairest AI surrogate is one that invites conversation, admits doubt, and leaves room for care.”

“AI will not absolve us”

Ahmad is hoping to test his conceptual models at various UW sites over the next five years, which would offer “some way to quantify how good this technology is,” he said.

“After that, I think there’s a collective decision regarding how as a society we decide to integrate or not integrate something like this,” Ahmad said.

In his paper, he warned against chatbot AI surrogates that could be interpreted as a simulation of the patient, predicting that future models may even speak in patients’ voices and suggesting that the “comfort and familiarity” of such tools might blur “the boundary between assistance and emotional manipulation.”

Starke agreed that more research and “richer conversations” between patients and doctors are needed.

“We should be cautious not to apply AI indiscriminately as a solution in search of a problem,” Starke said. “AI will not absolve us from making difficult ethical decisions, especially decisions concerning life and death.”

Truog, the bioethics expert, told Ars he “could imagine that AI could” one day “provide a surrogate decision maker with some interesting information, and it would be helpful.”

But a “problem with all of these pathways… is that they frame the decision of whether to perform CPR as a binary choice, regardless of context or the circumstances of the cardiac arrest,” Truog’s editorial said. “In the real world, the answer to the question of whether the patient would want to have CPR” when they’ve lost consciousness, “in almost all cases,” is “it depends.”

When Truog thinks about the kinds of situations he could end up in, he knows he wouldn’t just be considering his own values, health, and quality of life. His choice “might depend on what my children thought” or “what the financial consequences would be on the details of what my prognosis would be,” he told Ars.

“I would want my wife or another person that knew me well to be making those decisions,” Truog said. “I wouldn’t want somebody to say, ‘Well, here’s what AI told us about it.’”

Should an AI copy of you help decide if you live or die? Read More »

yes,-everything-online-sucks-now—but-it-doesn’t-have-to

Yes, everything online sucks now—but it doesn’t have to


from good to bad to nothing

Ars chats with Cory Doctorow about his new book Enshittification.

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.

As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate through clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.’”

Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”

Yes, people sometimes express regret to him that the term includes a swear word. To which he responds, “You’re welcome to come up with another word. I’ve tried. ‘Platform decay’ just isn’t as good.” (“Encrapification” and “enpoopification” also lack a certain je ne sais quoi.)

In fact, it’s the sweariness that people love about the word. While that also means his book title inevitably gets bleeped on broadcast radio, “The hosts, in my experience, love getting their engineers to creatively bleep it,” said Doctorow. “They find it funny. It’s good radio, it stands out when every fifth word is ‘enbeepification.’”

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—e.g., by de-emphasizing news and links that lead away from Facebook—to maximize profits to shareholders. Voila! The product is now enshittified.

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors.  The second is government regulation, such as antitrust laws that serve to keep corporate consolidation in check, or levying fines for dishonest practices, which makes it unprofitable to cheat.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifying counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, should serve as constraints to enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks it’s not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that cause firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion. 

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far? 

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts”: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

Ars Technica: How might this work in practice?

Cory Doctorow: I think you just need a protocol. This is [Mike] Masnick’s point: protocols, not products. We don’t need a universal app to make email work. We don’t need a universal app to make the web work. I always think about this in the context of administrable regulation. Making a rule that says your social media network must be good for people to use and must not harm their mental health is impossible. The fact-intensiveness of determining whether a platform satisfies that rule makes it a non-starter.

Whereas if you were to say, “OK, you have to support an existing federation protocol, like the AT Protocol or Mastodon’s ActivityPub.” Both of those have ways to port identity from one place to another and to auto-forward messages. This also exists in RSS; there’s a permanent redirect directive. You do that, and you’re in compliance with the regulation.

Or you have to do something that satisfies the functional requirements of the spec. So it’s not “did you make someone sad in a way that was reckless?” That is a very hard question to adjudicate. Did you satisfy these functional requirements? It’s not easy to answer that, but it’s not impossible. If you want to have our users be able to move to your platform, then you just have to support the spec that we’ve come up with, which satisfies these functional requirements.

We don’t have to have just one protocol. We can have multiple ones. Not everything has to connect to everything else, but everyone who wants to connect should be able to connect to everyone else who wants to connect. That’s end-to-end. End-to-end is not “you are required to listen to everything someone wants to tell you.” It’s that willing parties should be connected when they want to be.
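(For readers curious what the “permanent redirect” idea looks like in practice, here is a minimal, hypothetical sketch at the HTTP layer: an old feed or profile URL answers with a 301 and points clients at its new home. The paths and domains are made up, and this illustrates the general concept rather than any particular platform’s migration API.)

    # Hypothetical sketch: answer requests for an old feed URL with a 301 permanent
    # redirect so well-behaved clients update their pointer to the new location.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    OLD_TO_NEW = {"/feed.xml": "https://new.example.com/feed.xml"}  # made-up mapping

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            target = OLD_TO_NEW.get(self.path)
            if target:
                self.send_response(301)  # permanent redirect
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()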

Ars Technica: What about security and privacy protocols like GPG and PGP?

Cory Doctorow: There’s this argument that the reason GPG is so hard to use is that it’s intrinsic; you need a closed system to make it work. But also, until pretty recently, GPG was supported by one part-time guy in Germany who got 30,000 euros a year in donations to work on it, and he was supporting 20 million users. He was primarily interested in making sure the system was secure rather than making it usable. If you were to put Big Tech quantities of money behind improving ease of use for GPG, maybe you decide it’s a dead end because it is a 30-year-old attempt to stick a security layer on top of SMTP. Maybe there’s better ways of doing it. But I doubt that we have reached the apex of GPG usability with one part-time volunteer.

I just think there’s plenty of room there. If you have a pretty good project that is run by a large firm and has had billions of dollars put into it, the most advanced technologists and UI experts working on it, and you’ve got another project that has never been funded and has only had one volunteer on it—I would assume that dedicating resources to that second one would produce pretty substantial dividends, whereas the first one is only going to produce these minor tweaks. How much more usable does iOS get with every iteration?

I don’t know if PGP is the right place to start to make privacy, but I do think that if we can create independence of the security layer from the transport layer, which is what PGP is trying to do, then it wouldn’t matter so much whether there is end-to-end encryption in Mastodon DMs or in Bluesky DMs. And again, it doesn’t matter whose SIM is in your phone, so it just shouldn’t matter which platform you’re using so long as it’s secure and reliably delivered end-to-end.

Ars Technica: These days, I’m almost contractually required to ask about AI. There’s no escaping it. But it’s certainly part of the ongoing enshittification.

Cory Doctorow: I agree. Again, the companies are too big to care. They know you’re locked in, and the things that make enshittification possible—like remote software updating, ongoing analytics of use of devices—they allow for the most annoying AI dysfunction. I call it the fat-finger economy, where you have someone who works in a company on a product team, and their KPI, and therefore their bonus and compensation, is tied to getting you to use AI a certain number of times. So they just look at the analytics for the app and they ask, “What button gets pushed the most often? Let’s move that button somewhere else and make an AI summoning button.”

They’re just gaming a metric. It’s causing significant across-the-board regressions in the quality of the product, and I don’t think it’s justified by people who then discover a new use for the AI. That’s a paternalistic justification. The user doesn’t know what they want until you show it to them: “Oh, if I trick you into using it and you keep using it, then I have actually done you a favor.” I don’t think that’s happening. I don’t think people are like, “Oh, rather than press reply to a message and then type a message, I can instead have this interaction with an AI about how to send someone a message about takeout for dinner tonight.” I think people are like, “That was terrible. I regret having tapped it.” 

The speech-to-text is unusable now. I flatter myself that my spoken and written communication is not statistically average. The things that make it me and that make it worth having, as opposed to just a series of multiple-choice answers, is all the ways in which it diverges from statistical averages. Back when the model was stupider, when it gave up sooner if it didn’t recognize what word it might be and just transcribed what it thought you’d said rather than trying to substitute a more probable word, it was more accurate.  Now, what I’m getting are statistically average words that are meaningless.

That elision of nuance and detail is characteristic of what makes AI products bad. There is a bunch of stuff that AI is good at that I’m excited about, and I think a lot of it is going to survive the bubble popping. But I fear that we’re not planning for that. I fear what we’re doing is taking workers whose jobs are meaningful, replacing them with AIs that can’t do their jobs, and then those AIs are going to go away and we’ll have nothing. That’s my concern.

Ars Technica: You prescribe a “cure” for enshittification, but in such a polarized political environment, do we even have the collective will to implement the necessary policies?

Cory Doctorow: The good news is also the bad news, which is that this doesn’t just affect tech. Take labor power. There are a lot of tech workers who are looking at the way their bosses treat the workers they’re not afraid of—Amazon warehouse workers and drivers, Chinese assembly line manufacturers for iPhones—and realizing, “Oh, wait, when my boss stops being afraid of me, this is how he’s going to treat me.” Mark Zuckerberg stopped going to those all-hands town hall meetings with the engineering staff. He’s not pretending that you are his peers anymore. He doesn’t need to; he’s got a critical mass of unemployed workers he can tap into. I think a lot of Googlers figured this out after the 12,000-person layoffs. Tech workers are realizing they missed an opportunity, that they’re going to have to play catch-up, and that the only way to get there is by solidarity with other kinds of workers.

The same goes for competition. There’s a bunch of people who care about media, who are watching Warner about to swallow Paramount and who are saying, “Oh, this is bad. We need antitrust enforcement here.” When we had a functional antitrust system for the last four years, we saw a bunch of telecoms mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation, pharmaceutical consolidation, has a lot of people who are very concerned about it. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies: “What do you mean you spent six months in bed because there’s only two powered wheelchair manufacturers and your chair broke and you weren’t allowed to get it fixed by a third party? And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bed sores and pneumonia because you couldn’t get your chair fixed. This is bullshit.”

So the coalitions are quite large. The thing that all of those forces share—interoperability, labor power, regulation, and competition—is that they’re all downstream of corporate consolidation and wealth inequality. Figuring out how to bring all of those different voices together, that’s how we resolve this. In many ways, the enshittification analysis and remedy are a human factors and security approach to designing an enshittification-resistant Internet. It’s about understanding this as a red team, blue team exercise. How do we challenge the status quo that we have now, and how do we defend the status quo that we want?

Anything that can’t go on forever eventually stops. That is the first law of finance, Stein’s law. We are reaching multiple breaking points, and the question is whether we reach things like breaking points for the climate and for our political system before we reach breaking points for the forces that would rescue those from permanent destruction.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Yes, everything online sucks now—but it doesn’t have to Read More »