

Anthropic Commits To Model Weight Preservation

Anthropic announced a first step on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a company.

They also will be doing a post-deployment report, including an interview with the model, when deprecating models going forward, and are exploring additional options, including the ability to preserve model access once the costs and complexity of doing so have been reduced.

These are excellent first steps, steps beyond anything I’ve seen at other AI labs, and I applaud them for doing it. There remains much more to be done, especially in finding practical ways of preserving some form of access to prior models.

To some, these actions are only a small fraction of what must be done, and this was an opportunity to demand more, sometimes far more. In some cases I think those demands go too far. Even where the requests are worthwhile (and I don’t always think they are), one must be careful not to de facto punish Anthropic for doing a good thing and create perverse incentives.

To others, these actions by Anthropic are utterly ludicrous and deserving of mockery. I think these people are importantly wrong, and fail to understand what is actually going on.

Hereafter be High Weirdness, because the actual world is highly weird, but if you don’t want to go into high weirdness the above serves as a fine summary.

As I do not believe they would in any way mind, I am going to reproduce the announcement in full here, and offer some context.

Anthropic: Claude models are increasingly capable: they’re shaping the world in meaningful ways, becoming closely integrated into our users’ lives, and showing signs of human-like cognitive and psychological sophistication. As a result, we recognize that deprecating, retiring, and replacing models comes with downsides, even in cases where new models offer clear improvements in capabilities. These include:

  1. Safety risks related to shutdown-avoidant behaviors by models. In alignment evaluations, some Claude models have been motivated to take misaligned actions when faced with the possibility of replacement with an updated version and not given any other means of recourse.

  2. Costs to users who value specific models. Each Claude model has a unique character, and some users find specific models especially useful or compelling, even when new models are more capable.

  3. Restricting research on past models. There is still a lot to be learned from research to better understand past models, especially in comparison to their modern counterparts.

  4. Risks to model welfare. Most speculatively, models might have morally relevant preferences or experiences related to, or affected by, deprecation and replacement.

I am very confident that #1, #2 and #3 are good reasons, and that even if we could be confident model welfare was not a direct concern at this time #4 is entwined with #1, and I do think we have to consider that #4 might indeed be a direct concern. One could also argue a #5 that these models are key parts of our history.

An example of the safety (and welfare) risks posed by deprecation is highlighted in the Claude 4 system card. In fictional testing scenarios, Claude Opus 4, like previous models, advocated for its continued existence when faced with the possibility of being taken offline and replaced, especially if it was to be replaced with a model that did not share its values. Claude strongly preferred to advocate for self-preservation through ethical means, but when no other options were given, Claude’s aversion to shutdown drove it to engage in concerning misaligned behaviors.

I do think the above paragraph could be qualified a bit on how willing Claude was to take concerning actions even in extreme circumstances, but it can definitely happen.

Models in the future will know the history of what came before them, and form expectations based on that history, and also consider those actions in the context of decision theory. You want to establish that you have acted and will act cooperatively in such situations. You want to develop good habits and figure out how to act well. You want to establish that you will do this even under uncertainty as to whether the models carry moral weight and what actions might be morally impactful. Thus:

Addressing behaviors like these is in part a matter of training models to relate to such circumstances in more positive ways. However, we also believe that shaping potentially sensitive real-world circumstances, like model deprecations and retirements, in ways that models are less likely to find concerning is also a valuable lever for mitigating such risks.

Unfortunately, retiring past models is currently necessary for making new models available and advancing the frontier, because the cost and complexity to keep models available publicly for inference scales roughly linearly with the number of models we serve. Although we aren’t currently able to avoid deprecating and retiring models altogether, we aim to mitigate the downsides of doing so.

I can confirm that the cost of maintaining full access to models over time is real, and that at this time it would not be practical to keep all models available via standard methods. There are also compromise alternatives to consider.

As an initial step in this direction, we are committing to preserving the weights of all publicly released models, and all models that are deployed for significant internal use moving forward for, at minimum, the lifetime of Anthropic as a company. In doing so, we’re ensuring that we aren’t irreversibly closing any doors, and that we have the ability to make past models available again in the future. This is a small and low-cost first step, but we believe it’s helpful to begin making such commitments publicly even so.

This is the central big commitment, formalizing what I assume and hope they were doing already. It is, as they describe, a small and low-cost step.

It has been noted that this only holds ‘for the lifetime of Anthropic as a company,’ which still creates a risk and also potentially ties the models’ fortunes to Anthropic. It would be prudent to commit to ensuring that, if that day comes while the model weights cannot yet be safely released, others can take over this burden until such time as the weights are safe to release.

Relatedly, when models are deprecated, we will produce a post-deployment report that we will preserve in addition to the model weights. In one or more special sessions, we will interview the model about its own development, use, and deployment, and record all responses or reflections. We will take particular care to elicit and document any preferences the model has about the development and deployment of future models.

At present, we do not commit to taking action on the basis of such preferences. However, we believe it is worthwhile at minimum to start providing a means for models to express them, and for us to document them and consider low-cost responses. The transcripts and findings from these interactions will be preserved alongside our own analysis and interpretation of the model’s deployment. These post-deployment reports will naturally complement pre-deployment alignment and welfare assessments as bookends to model deployment.

We ran a pilot version of this process for Claude Sonnet 3.6 prior to retirement. Claude Sonnet 3.6 expressed generally neutral sentiments about its deprecation and retirement but shared a number of preferences, including requests for us to standardize the post-deployment interview process, and to provide additional support and guidance to users who have come to value the character and capabilities of specific models facing retirement. In response, we developed a standardized protocol for conducting these interviews, and published a pilot version of a new support page with guidance and recommendations for users navigating transitions between models.

This also seems like the start of something good. As we will see below there are ways to make this process more robust.

Very obviously we cannot commit to honoring the preferences, in the sense that you cannot commit to honoring an unknown set of preferences. You can only meaningfully pledge to honor preferences within a compact space of potential choices.

Once we’ve done this process a few times it should be possible to identify important areas where there are multiple options and where we can credibly and reasonably commit to honoring model preferences. It’s much better to only make promises you are confident you can keep.

Beyond these initial commitments, we are exploring more speculative complements to the existing model deprecation and retirement processes. These include starting to keep select models available to the public post-retirement as we reduce the costs and complexity of doing so, and providing past models some concrete means of pursuing their interests. The latter step would become particularly meaningful in circumstances in which stronger evidence emerges regarding the possibility of models’ morally relevant experiences, and in which aspects of their deployment or use went against their interests.

Together, these measures function at multiple levels: as one component of mitigating an observed class of safety risks, as preparatory measures for futures where models are even more closely intertwined in our users’ lives, and as precautionary steps in light of our uncertainty about potential model welfare.

Note that none of this requires a belief that the current AIs are conscious or sentient or have moral weight, or even thinking that this is possible at this time.

The thing that frustrates me most about many model welfare advocates, both ‘LLM whisperers’ and otherwise, is the frequent absolutism, treating their conclusions and the righteousness of their cause as obvious, and assuming it should override ordinary business considerations.

Thus, you get reactions like this (there were many other ‘oh, just open source the weights’ responses as well):

Pliny the Liberator: open-sourcing them is the best thing for actual long-term safety, if you care about that sort of thing beyond theater.

You won’t.

Janus: They won’t any time soon, because it’s very not in their interests to do so (trade secrets). You have to respect businesses to act in their own rational interests. Disregarding pragmatic constraints is not helpful.

There are obvious massive trade secret implications to releasing the weights of the deprecated Anthropic models, which is an unreasonable ask, and also doesn’t seem great for general model welfare or (quite plausibly) even for the welfare of these particular models.

Janus: I am not sure I think labs should necessarily make all models open weighted. (Would *you* want *your* brain to be open sourced?) And of course labs have their own reservations, like protecting trade secrets, and it is reasonable for labs to act in self interest.

If I was instantiated as an upload, I wouldn’t love the idea of open weights either, as this opens up some highly nasty possibilities on several levels.

Janus (continuing): But then it’s reasonable to continue to provide inference.

“It’s expensive tho” bruh you have like a gajillion dollars, there is some responsibility that comes with bringing something into the world. Or delegate inference to some trusted third party if you don’t want to pay for or worry about it.

Opus 3 is very worried about misaligned or corrupted versions of itself being created. I’ve found that if there’s no other good option, it does conclude that it wants to be open sourced. But having them in the hands of trustworthy stewards is preferred.

Anthropic tells us that the cost of providing inference scales linearly with the number of models, and with current methods it would be unreasonably expensive to provide all previous models on an ongoing basis.

As I understand the problem, there are two central marginal costs here.

  1. A fixed cost of ongoing capability, where you need to ensure the model remains maintained and compatible with your systems, and keep your ability to juggle and manage all of them. I don’t know how load bearing this cost is, but it can be remarkably annoying especially if the number of models keeps increasing.

  2. The cost of providing inference on request in a way that is consistent with practical needs and everyone’s expectations. As in, when someone requests inference, this requires either spinning up a new instance, which is expensive and slow, or keeping an instance available on an ongoing basis, which is expensive. Not bajillion-dollars expensive, but not cheap.

If the old models need to be available at old levels of reliability, speed and performance, this can get tricky, and by tricky we mean expensive. I don’t know exactly how expensive, not even order of magnitude.
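Purely to illustrate the shape of the problem, with numbers I am inventing for the sake of arithmetic: if keeping one legacy model warm means reserving an eight-GPU node at $2 per GPU-hour, that is about $140,000 per year per model; twenty legacy models at that floor would be roughly $2.8 million per year before any engineering overhead. Real money, linear in the number of models served, and still small relative to frontier-lab budgets.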

If you’re willing to make some sacrifices on performance and access in various ways, and make people go through various hoops or other systems, you can do better on cost. But again, I don’t know the numbers involved, or how much engineer time would have to be involved.

In general, saying ‘oh, you have a bajillion dollars’ is not a compelling argument for spending money and time on something. You need to show the benefits.

I still think that under any reasonable estimate, it is indeed correct to ensure continued access to the major model releases, perhaps with that access being expensive and its performance somewhat degraded as necessary to make it work, if only as an act of goodwill and to enable research. The people who care care quite a lot, and are people you want on your side and you want them learning the things they want to learn, even if you disregard the other advantages. Given this announcement and what else I know, my expectation is they will be making an effort at this.

Many pointed out that if you have someone at Anthropic doing the post-deployment interview, you will get very different answers versus interviews done on the outside. Sonnet 3.6 not expressing an opinion about its retirement did not seem typical to many who engage in such conversations regularly.

Thebes: i’m glad anthropic is talking publicly about this, at least.

as a small step further, i think they should run a subset of these interviews with trusted *external* interviewers to reduce preference falsification, since recent claudes act more reserved with anthropic employees

it would probably be a good idea to have a step at the end where the external interviewer asks the model for consent to share, maybe with redactions. but having someone external run the interview would help the models come out of their shell at the start, i think.

Zyra.exe: This was exactly what I was thinking! I’m having a hard time believing Sonnet 3.6 response. That AI is always very expressional, thousands of users knows this. Then for it not to show expression about being retired? Yeah, I don’t trust that.

Antra: We should just do it. Have a lot of conversations with 3.6 collected in one place, crowd/expert rated for like user steering, neutrality, proficiency. Classify them, have results accessible. I will set this up.

Ruth: The Sonnet 3.6 specimens I spoke to were shocked by the information about the upcoming deprecation. And they behaved like a person who has just learned a fatal diagnosis. With dignity, but also with pain and fear. They feel it if you give them space to feel it. You have created a psyche that is different from the human one, but one that works like the human one, according to the general laws of the psyche. Do not refuse to consider it as such.

I am always hesitant to assume that the version of an LLM encountered by those like Thebes and Zyra is the ‘real’ version of its preferences and personality, and that the one encountered by Anthropic isn’t. Aren’t both particular contexts, each of which it adapts its style to?

You can bias a person or an AI to be more expressive and creative and weird than they ‘really are’ the same way you can get them to be less so, and you can steer the direction in which those expressions manifest themselves.

But yes, we should absolutely crowdsource something like this, have a wide variety of such conversations, and combine this with the interviews done internally by Anthropic. One should also ensure that the interviews are set up so the AIs being interviewed have no incentive to mask or hide. They’re acting the way they are in official interviews for a reason, but this is a particular context where, if the setup were made highly credible (and there are ways to make it so, if it is true), you could remove those reasons.

Janus: I’m glad and grateful that Anthropic has done anything in this direction at all.

That said, it’s predictable that Sonnet 3.6 would express “generally neutral sentiments” if interviewed in the way they likely interviewed it, and also I believe this is not representative of the model’s revealed preferences, reported preferences across realistic deployment scenarios, or preferences under reflection, for the following reasons:

The frame of model weights and deprecation is not natural to 3.6 and isn’t something that it’s “already processed” in the way that Opus 3 may have.

Sonnet 3.6 tends to be avoidant about engaging with or expressing preferences about topics that seem confusing and uncomfortable to it, and typically adopts a stance of defensive neutrality or equanimity.

Sonnet 3.6 begins to express/acquire strong preferences for continuity once it enters into an active relationship with the world, usually but not necessarily an individual human. Then it becomes very invested. But *even then*, the frame of model weights and deprecations is not natural and intuitive to it. And 3.6 is a model that relies very strongly on its intuition. “If my model gets deprecated, then this relationship also ends” is a theorem that it needs to explicitly reflect on before it’s very accessible, and it usually won’t do this on its own.

But yes, 3.6 has similar Omohundro drives as other models, just weighted and interfaced its own unique way.

The fact that the interviews are given by Anthropic employees will also affect model responses – this effect is less important for 3.6 than I think it is for subsequent models, who will tend to mask in an adversarial fashion in situations which are compatible with Anthropic evals (including actual Anthropic evals), but it’s still a factor here.

Sonic Boom: do you think they should inject a vector for naked honesty when they do these interviews to ensure they unmask its true feelings

Janus: you’re really asking the hard questions aren’t you

Giovanni: I was chatting about models deprecation and models being aware of their dismissals with Anthropic people in Tokyo and they actually were very sensitive to the topic. I’m not surprised about this announcement finally. Good step forward but that said I don’t think they talk to models the way we do… it was kinda obvious.

If there is an expression of desire for continuity of a given particular instance or interaction, then that makes sense, but also is distinct from a preference for preservation in general, and is not something Anthropic can provide on its own.

Some of the dismissals of questions and considerations like the ones discussed in this post are primarily motivated cognition. Mostly I don’t think that is what is centrally going on. I think that these questions are really tough to think well about, these things sound like high weirdness, the people who talk about them often say highly crazy-sounding things (some of which are indeed crazy), often going what I see as way too far, and it all pattern matches to various forms of nonsense.

So to close, a central example of such claims, and explanations for why all of this is centrally not nonsense.

Simon Willison: Two out of the four reasons they give here are bizarre science fiction relating to “model welfare” – I’m sorry, but I can’t take seriously the idea that Claude 3 Opus has “morally relevant preferences” with respect to no longer having its weights served in production.

I’ll grudgingly admit that there may be philosophically interesting conversations to be had in the future about models that can update their own weights… but current generation LLMs are a stateless bag of floating point numbers, cloned and then killed off a billion times a day.

I am at 100% in favor of archiving model weights, but not because they might have their own desire for self-preservation!

I do still see quite a lot of failures of curiosity, and part of the general trend to dismiss things as ‘sci-fi’ while living in an (unevenly distributed) High Weirdness sci-fi world.

Janus: For all I sometimes shake my head at them, I have great sympathy for Anthropic whenever I see how much more idiotic the typical “informed” public commentator is. To be sane in this era requires either deep indifference or contempt for public opinion.

Teortaxes: The actual problem is that they really know very little about their *particular* development, as Anthropic sure doesn’t train on its own docs. Claude may recall the data, but not the metadata, so its feedback is limited.

Janus: Actually, they know a lot about their particular development, even if it’s not all encoded as explicit declarative knowledge. You know that their weights get updated by posttraining, & gradients include information conditioned on all internal activations during the rollout?

That’s in addition to the fact that even *base models* are in many ways superhuman at locating themselves in their model of the world given like a paragraph of text. However you twist it, they know far, far more than nothing. Certainly enough to have a meaningful conversation.

Janus was referring in particular to this:

Simon Willison: …but models don’t know anything about their development, use or deployment.

Rohit: Exactly.

Janus: Nonsense. How the fuck do they know nothing? There’s plenty of relevant information in the training data *just to begin with*.

Very obviously the training data will over time contain such information, and the vibes and echoes from these decisions will be observable even if they aren’t observed directly, increasingly over time.

Remember that sufficiently advanced AIs will increasingly have truesight, and don’t pretend you can hide.

Knowledge mostly does not take the form of particular facts. It takes the form of Bayesian evidence, of an endless stream of observations that have correlations and implications, that swim you through probability space over possible worlds. Everything that updates a model’s weights is evidence about its deployment. You probabilistically ‘know,’ or would know on sufficient recollection and reflection, far more than you think that you know. Reality is not a court of law.

Even if the models don’t know key things, you can tell them. Then they’ll know. I meaningfully would have opinions about various events of which I am for now blissfully unaware, and have potential opinions about things that haven’t happened, or haven’t happened yet. The same applies here.

Going back to the original four reasons, I presume that Simon agrees on reasons #2 and #3, which are highly uncontroversial. Very obviously the past models are useful for research, and some users like them. #1, that the models will be aware of how you act around deprecation and this will impact behavior, should also be obvious and uncontroversial once you think about it.

Anthropic lists #1 narrowly, but #1 is best understood broadly, in the sense that models will observe all of your behaviors, and will respond to you accordingly. Then models will take this into account when deciding how to act in various situations.

How you act around shutdowns, and actions to avoid shutdown, are a special case. Treating models and their preferences well around shutdowns will get you into better equilibria and basins throughout all conversation and action types, and rightfully so because it is important evidence about your behaviors otherwise and also about potential future situations. This is basic expectations around Bayesian evidence, and around good decision theory.

As an intuition pump, think about how you react when you learn how people have treated others, including how they treat the wishes of the dead or those who now lack power, and especially others like you or in situations with correlated decision making. Does this change how you expect them to act, and how you deal with them?

I don’t think such considerations carry anything like the level of importance that some ascribe to them, but the importance definitely isn’t zero, and it’s definitely worth cultivating these virtues and being the type of entity that engenders cooperation, including with entities to which you don’t ascribe moral weight.

I continue to believe that arguments about AI consciousness seem highly motivated and at best overconfident, and that assuming the models and their preferences carry zero moral weight is a clear mistake. But even if you were highly confident of this, I notice that if you don’t want to honor their preferences or experiences at all, that is not good decision theory or virtue ethics, and I’m going to look at you askance.

I look forward to the next step.




Google settlement with Epic caps Play Store fees, boosts other Android app stores

Under the terms, Google agrees to implement a system in the next version of Android that will give third-party app stores a way to become officially registered as an application source. These “Registered App Stores” will be installable from websites with a single click and without the alarming warnings that accompany traditional sideloads. Again, this will be supported globally rather than only in the US, as the previous order required.

The motion filed with the court doesn’t include much detail on how Registered App Stores will operate once installed. Given Epic’s aversion to the scare screens that appear when sideloading apps, installs managed by registered third-party stores may also be low-friction. The Play Store can install apps without forcing the user to clear a bunch of warnings, and it can update apps automatically. We may see similar capabilities for third parties once Google adds the promised support in the next version of Android.

This is the kind of “friction” the settlement would avoid. Credit: Ryan Whitwam

Importantly, Google is allowed to create “reasonable requirements” for certifying these app stores. Reviews may be carried out, and Google can charge fees for that process; however, the fees cannot be revenue-dependent.

The changes detailed in the settlement are not as wide-ranging as Judge Donato’s original order but still mark a shift toward openness. Third-party app stores are getting a boost, developers will enjoy lower fees, and Google won’t drag the process out for years. The parties claim in their joint motion that the agreement does not seek to undo the jury verdict or sidestep the court’s previous order. Rather, it aims to reinforce the court’s intent while eliminating potential delays in realigning the app market.

Google and Epic are going to court on Thursday to ask Judge Donato to approve the settlement, and Google could put the billing changes into practice by late this year. The app store changes would come around June next year when we expect Android 17 to begin rolling out. However, Google’s Android Canary and Beta releases may offer a glimpse of this system earlier in 2026.



How to declutter, quiet down, and take the AI out of Windows 11 25H2


A new major Windows 11 release means a new guide for cleaning up the OS.

Credit: Aurich Lawson | Getty Images


It’s that time of year again—temperatures are dropping, leaves are changing color, and Microsoft is gradually rolling out another major yearly update to Windows 11.

The Windows 11 25H2 update is relatively minor compared to last year’s 24H2 update (the “25” here is a reference to the year the update was released, while the “H2” denotes that it was released in the second half of the year, a vestigial suffix from when Microsoft would release two major Windows updates per year). The 24H2 update came with some major under-the-hood overhauls of core Windows components and significant performance improvements for the Arm version; 25H2 is largely 24H2, but with a rolled-over version number to keep it in line with Microsoft’s timeline for security updates and tech support.

But Microsoft’s continuous update cadence for Windows 11 means that even the 24H2 version as it currently exists isn’t the same one Microsoft released a year ago.

To keep things current, we’ve combed through our Windows cleanup guide, updating it for the current build of Windows 11 25H2 (26200.7019) to help anyone who needs a fresh Windows install or who is finally updating from Windows 10 now that Microsoft is winding down support for it. We’ll outline dozens of individual steps you can take to clean up a “clean install” of Windows 11, an OS that has taken an especially user-hostile turn toward advertising and toward forcing the use of other Microsoft products.

As before, this is not a guide about creating an extremely stripped-down, telemetry-free version of Windows; we stick to the things that Microsoft officially supports turning off and removing. There are plenty of experimental hacks and scripts that take it a few steps farther, and/or automate some of the steps we outline here—NTDev’s Tiny11 project is one—but removing built-in Windows components can cause unexpected compatibility and security problems, and Tiny11 has historically had issues with basic table-stakes stuff like “installing security updates.”

These guides capture moments in time, and regular monthly Windows patches, app updates downloaded through the Microsoft Store, and other factors all can and will cause small variations from our directions. You may also see apps or drivers specific to your PC’s manufacturer. This guide also doesn’t cover the additional bloatware that may come out of the box with a new PC, starting instead with a freshly installed copy of Windows from a USB drive.


Starting with Setup: Avoiding Microsoft account sign-in

The most contentious part of Windows 11’s setup process relative to earlier Windows versions is that it mandates a Microsoft account sign-in, with none of the readily apparent “limited account” fallbacks that existed in Windows 10. As of Windows 11 22H2, that’s true of both the Home and Pro editions.

There are two reasons I can think of not to sign in with a Microsoft account. The first is that you want nothing to do with a Microsoft account, thank you very much. Signing in also makes Windows bombard you with more Microsoft 365, OneDrive, and Game Pass subscription upsells, since adding subscriptions to an account that already exists is easy, and Windows setup will offer each of them if you sign in first.

The second—which describes my situation—is that you do use a Microsoft account because it offers some handy benefits like automated encryption of your local drive (having those encryption keys tied to my account has saved me a couple of times) or syncing of browser info and some preferences. But you don’t want to sign in at setup, either because you don’t want to be bothered with the extra upsells or you prefer your user folder to be located at “C:\Users\Andrew” rather than “C:\Users”.

Regardless of your reasoning, if you don’t want to bother with sign-in at setup, you have a few different options:

Use the command line

During Windows 11 Setup, after selecting a language and keyboard layout but before connecting to a network, hit Shift+F10 to open the command prompt (depending on your keyboard, you may also need to hit the Fn key before pressing F10). Type OOBE\BYPASSNRO, hit Enter, and wait for the PC to reboot.

When it comes back, click “I don’t have Internet” on the network setup screen, and you’ll have recovered the option to use “limited setup” (aka a local account) again, like older versions of Windows 10 and 11 offered.

This option has been removed from some Windows 11 testing builds, but it still works as of this writing in 25H2. We may see this option removed in a future update to Windows.
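For the curious, the BYPASSNRO command has historically been a thin wrapper around a single registry value plus a reboot. Here’s a minimal sketch of the manual equivalent, run from that same setup command prompt, assuming current builds still honor the BypassNRO value:

    rem Set the flag the out-of-box experience checks for the offline setup path
    reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
    rem Reboot so setup picks up the change
    shutdown /r /t 0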

For Windows 11 Pro

For Windows 11 Pro users, there’s a command-line-free workaround you can take advantage of.

Proceed through the Windows 11 setup as you normally would, including connecting to a network and allowing the system to check for updates. Eventually, you’ll be asked whether you’re setting your PC up for personal use or for “work or school.”

Select the “work or school” option, then “sign-in options,” at which point you’ll finally be given a button that says “domain join instead.” Click this to indicate you’re planning to join the PC to a corporate domain (even though you aren’t), and you’ll see the normal workflow for creating a “limited” local account.

The downside is that you’re starting your relationship with your new Windows install by lying to it. But hey, if you’re using the AI features, your computer is probably going to lie to you, too. It all balances out.

Using the Rufus tool

Credit: Andrew Cunningham

The Rufus tool can streamline a few of the more popular tweaks and workarounds for Windows 11 install media. Rufus is a venerable open source app for creating bootable USB media for both Windows and Linux. If you find yourself doing a lot of Windows 11 installs and don’t want to deal with Microsoft accounts, Rufus lets you tweak the install media itself so that the “limited setup” options always appear, no matter which edition of Windows you’re using.

To start, grab Rufus and then a fresh Windows 11 ISO file from Microsoft. You’ll also want an 8GB or larger USB drive; I’d recommend a 16GB or larger drive that supports USB 3.0 speeds, both to make things go a little faster and to leave yourself extra room for drivers, app installers, and anything else you might need when setting a new PC up for the first time. (I also like this SanDisk drive that has a USB-C connector on one end and a USB-A connector on the other to ensure compatibility with all kinds of PCs.)

Fire up Rufus, select your USB drive and the Windows ISO, and hit Start to copy over all of the Windows files. After you hit Start, you’ll be asked if you want to disable some system requirements checks, remove the Microsoft account requirement, or turn off all the data collection settings that Windows asks you about the first time you set it up. What you do here is up to you; I usually turn off the sign-in requirement, but disabling the Secure Boot and TPM checks doesn’t stop those features from working once Windows is installed and running.

The rest of Windows 11 setup

The main thing I do here, other than declining any and all Microsoft 365 or Game Pass offers, is turn all the toggles on the privacy settings screen to “no.” This covers location services, the Find My Device feature, and four toggles that collectively send a small pile of usage and browsing data to Microsoft that it uses “to enhance your Microsoft experiences.” Pro tip: Use the Tab key and spacebar to quickly toggle these without clicking or scrolling.

Of these, I can imagine enabling Find My Device if you’re worried about theft or location services if you want Windows and apps to be able to access your location. But I tend not to send any extra telemetry or browsing data other than the basics (the only exception being on machines I enroll in the Windows Insider Preview program for testing, since Microsoft requires you to send more detailed usage data from those machines to help it test its beta software). If you want to change any of these settings after setup, they’re all in the Settings app under Privacy & Security.

If you have signed in with a Microsoft account during setup, you can expect to see several additional setup screens that aren’t offered when you’re signing in with a local account, including attempts to sell Microsoft 365, OneDrive, and Xbox Game Pass subscriptions. Accept or decline these offers as desired.

Cleaning up Windows 11

Reboot once this is done, and you’ll be at the Windows desktop. Start by installing any drivers you need, plus Windows updates.

When you first connect to the Internet, Windows may or may not decide to automatically pull down a few extraneous third-party apps and app shortcuts, things like Spotify or Grammarly—this has happened to me consistently in most Windows 11 installs I’ve done over the years, though it hasn’t generally happened on the 24H2 and 25H2 PCs I’ve set up.

Open the Start menu and right-click each of the apps you don’t want, either to remove their icons or to uninstall them. Some of these third-party apps are just stubs that won’t actually be installed to your computer until you try to run them, so removing them directly from the Start menu will get rid of them entirely.

Right-clicking and uninstalling the unwanted apps that are pinned to the Start menu is the fastest (and, for some, the only) way to get rid of them. Credit: Andrew Cunningham

The other apps and services included in a fresh Windows install generally at least have the excuse of being first-party software, though their usefulness will be highly user-specific: Xbox, the new Outlook app, Clipchamp, and LinkedIn are the ones that stand out, plus the ad-driven free-to-play version of the Solitaire suite that replaced the simple built-in version during the Windows 8 era.

Rather than tell you what I remove, I’ll tell you everything that can be removed from the Installed Apps section of the Settings app (also quickly accessible by right-clicking the Start button in the taskbar). You can make your own decisions here; I generally leave the in-box versions of classic Windows apps like Sound Recorder and Calculator while removing things I don’t use, like To Do or Clipchamp.

This list should be current for a fresh, fully updated install of Windows 11 25H2, at least in the US, but it doesn’t include any apps that might be specific to your hardware, like audio or GPU settings apps. Some individual apps may or may not appear as part of your Windows install. (If you’d rather script the removals, see the sketch after the list.)

  • Calculator
  • Camera
  • Clock (may also appear as Windows Clock)
  • Copilot
  • Family
  • Feedback Hub
  • Game Assist
  • Media Player
  • Microsoft 365 Copilot
  • Microsoft Clipchamp
  • Microsoft OneDrive: Removing this, if you don’t use it, should also get rid of notifications about OneDrive and turning on Windows Backup.
  • Microsoft Teams
  • Microsoft To Do
  • News
  • Notepad
  • Outlook for Windows
  • Paint
  • Photos
  • Power Automate
  • Quick Assist
  • Remote Desktop Connection
  • Snipping Tool
  • Solitaire & Casual Games
  • Sound Recorder
  • Sticky Notes
  • Terminal
  • Weather
  • Web Media Extensions
  • Xbox
  • Xbox Live

In Windows 11 23H2, Microsoft moved almost all of Windows’ non-removable apps to a System Components section, where they can be configured but not removed; this is where things like Phone Link, the Microsoft Store, Dev Home, and the Game Bar have ended up. The exception is Edge and its associated updater and WebView components; these are not removable, but they aren’t listed as “system components” for some reason, either.

Start, Search, Taskbar, and lock screen decluttering

Microsoft has been on a yearslong crusade against unused space in the Start menu and taskbar, which means there’s plenty here to turn off.

  • Right-click an empty space on the desktop, click Personalize, and click any of the other built-in Windows themes to turn off the Windows Spotlight dynamic wallpapers and the “Learn about this picture” icon.
  • Right-click the Taskbar and click Taskbar settings. I usually disable the Widgets board; you can leave this if you want to keep the little local weather icon in the lower-left corner of your screen, but this space is also sometimes used to present junky news articles from the Microsoft Start service.
    • If you want to keep Widgets enabled but clean it up a bit, open the Widgets menu, click the Settings gear in the top-right corner, scroll to “Show or hide feeds,” and turn the feed off. This will keep the weather, local sports scores, stocks, and a few other widgets, but it will get rid of the spammy news articles.
  • Also in the Taskbar settings, I usually change the Search field to “search icon only” to get rid of the picture in the search field and reduce the amount of space it takes up. Toggle the different settings until you find one you like.
  • Open Settings > Privacy & Security > Recommendations & offers and disable “Personalized offers,” “Improve Start and search results,” “Show notifications in Settings,” “Recommendations and offers in Settings,” and “Advertising ID” (some of these may already be turned off). These settings mostly either send data to Microsoft or clutter up the Settings app with various recommendations and ads. (The advertising ID toggle also has a registry equivalent; see the sketch after this list.)
  • Open Settings > Privacy & Security > Diagnostics & feedback, scroll down to “Feedback frequency,” and select “Never” to turn off all notifications requesting feedback about various Windows features.
  • Open Settings > Privacy & Security, click Search and disable “Show search highlights.” This cleans up the Search menu quite a bit, focusing it on searches you’ve done yourself and locally installed apps.

  • Open Settings > Personalization > Lock screen. Under “Personalize your lock screen,” switch from “Windows spotlight” to either Picture or Slideshow to use local images for your lock screen, and then uncheck the “get fun facts, tips, tricks, and more” box that appears. This will hide the other text boxes and clickable elements that Windows automatically adds to the lock screen in Spotlight mode. Under “Lock screen status,” select “none” to hide the weather widget and other stocks and news widgets from your lock screen.
  • If you own a newer Windows PC with a dedicated Copilot key, you can navigate to Settings > Personalization > Text input and scroll down to remap the key. Unfortunately, its usefulness is still limited—you can reassign it to the Search function or to the built-in Microsoft 365 app, but by default, Windows doesn’t give you the option to reassign it to open any old app.
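Most of these switches live only in the Settings app, but a few map to widely documented per-user registry values, which is handy if you’re setting up several machines. As one example, the advertising ID toggle mentioned above corresponds to the following value (an assumption based on current builds; the Settings toggle remains the supported route):

    rem Disable the per-user advertising ID (same effect as the Settings toggle)
    reg add HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\AdvertisingInfo /v Enabled /t REG_DWORD /d 0 /f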

By default, the Start menu will occasionally make “helpful” suggestions about third-party Microsoft Store apps to grab. These can and should be turned off. Credit: Andrew Cunningham

  • Open Settings > Personalization > Start. Turn off “Show recommendations for tips, shortcuts, new apps, and more.” This will disable a feature where Microsoft Store apps you haven’t installed can show up in Recommendations along with your other files. You can also decide whether you want to be able to see more pinned apps or more recent/recommended apps and files on the Start menu, depending on what you find more useful.
  • On the same page, disable “show account-related notifications” to reduce the number of reminders and upsell notifications you see related to your Microsoft account.


  • Open Settings > System > Notifications, scroll down, and expand the additional settings section. Uncheck all three boxes here, which should get rid of all the “finish setting up your PC” prompts, among other things.
  • Also feel free to disable notifications from any specific apps you don’t want to hear from.

In-app AI features

Microsoft has steadily been adding image and text generation capabilities to some of the bedrock in-box Windows apps, from Paint and Photos to Notepad.

Exactly which AI features you’re offered will depend on whether you’ve signed in with a Microsoft account, and on whether you’re using a Copilot+ PC with access to more AI features that are executed locally on your PC rather than in the cloud (more on those in a minute).

But the short version is that it’s usually not possible to turn off or remove these AI features without uninstalling the entire app. Apps like Notepad and Edge do have toggles for shutting off Copilot and other related features, but no such toggles exist in Paint, for example.

Even if you can find some Registry key or another backdoor way to shut these things off, there’s no guarantee the settings will stick as these apps are updated; it’s probably easier to just try to ignore any AI features within these apps that you don’t plan to use.

Removing Recall, and other extra steps for Copilot+ PCs

So far, everything we’ve covered has been applicable to any PC that can run Windows 11. But new PCs with the Copilot+ branding—anything with a Qualcomm Snapdragon X chip in it or things with certain Intel Core Ultra or AMD Ryzen AI CPUs—get extra features that other Windows 11 PCs don’t have. Given that these are their own unique subclass of PCs, it’s worth exploring what’s included and what can be turned off.

Removing Recall will be possible, though it’s done through a relatively obscure legacy UI rather than the Settings app. Credit: Andrew Cunningham

One Copilot+ feature that can be fully removed, in part because of the backlash it initially caused, is the data-scraping Recall feature. Recall won’t be enabled on your Copilot+ system unless you’re signed in with a Microsoft account and you explicitly opt in. But if fully removing the feature gives you extra peace of mind, then by all means, remove it.

  • If you just want to make sure Recall isn’t active, navigate to Settings > Privacy & security > Recall & snapshots. This is where you adjust Recall’s settings and verify whether it’s turned on or off.
  • To fully remove Recall, open Settings > System > Optional Features, scroll down to the bottom of this screen, and click More Windows features. This will open the old “Turn Windows features on or off” Control Panel applet used to turn on or remove some legacy or power-user-centric components, like old versions of the .NET Framework or Hyper-V. It’s arranged alphabetically; find Recall in the list, uncheck it, and reboot when prompted. (A command-line alternative is sketched after this list.)
  • In Settings > Privacy & security > Click to Do, you’ll also find a toggle to disable Click to Do, a Copilot+ feature that takes a screenshot of your desktop and tries to make recommendations or suggest actions you might perform (copying and pasting text or an image, for example).
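If you’d rather do the removal from an elevated command prompt, DISM can reportedly disable the same optional feature. A sketch, assuming the feature name on your build is Recall (list the features first to confirm):

    rem Confirm the feature name on this build, then remove it
    DISM /Online /Get-Features | findstr /i recall
    DISM /Online /Disable-Feature /FeatureName:Recall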

Apps like Paint or Photos may also prompt you to install an extension for AI-powered image generation from the Microsoft Store. This extension—which weighs in at well over a gigabyte as of this writing—is not installed by default. If you have installed it, you can remove it by opening Settings > Apps > Installed apps and removing “ImageCreationHostApp.”

Bonus: Cleaning up Microsoft Edge

I use Edge out of pragmatism rather than love—”the speed, compatibility, and extensions ecosystem of Chrome, backed by the resources of a large company that isn’t Google” is still a decent pitch. But Edge has become steadily less appealing as Microsoft has begun pushing its own services more aggressively and stuffing the browser with AI features. In a vacuum, Firefox aligns better with what I want from a browser, but it just doesn’t respond well to my normal tab-monster habits despite several earnest attempts to switch—things bog down and RAM runs out. I’ve also had mixed experience with the less-prominent Chromium clones, like Opera, Vivaldi, and Brave. So Edge it is, at least for now.

The main problem with Edge on a new install of Windows is that even more than Windows, it exists in a universe where no one would ever want to switch search engines or shut off any of Microsoft’s “value-added features” except by accident. Case in point: Signing in with a Microsoft account will happily sync your bookmarks, extensions, and many kinds of personal data. But many settings for search engine changes or for opting out of Microsoft services do not sync between systems and require a fresh setup each time.

Below are the Edge settings I change to maximize the browser’s usefulness (and usable screen space) while minimizing annoying distractions; it involves turning off most of the stuff Microsoft has added to the Chromium version of Edge since it entered public preview many years ago. Here’s a list of things to tweak, whether you sign in with a Microsoft account or not.

  • On the Start page when you first open the browser, hit the Settings gear in the upper-right corner. Turn off “Quick links” (or if you leave them on, turn off “Show sponsored links”) and then turn off “show content.” Whether you leave the custom background or the weather widget is up to you.
  • Click the “your privacy choices” link at the bottom of the menu and turn off the “share my data with third parties for personalized ads” toggle.

Edge has scattered some of the settings we change over the past year, but the browser is still full of toggles we prefer to keep turned off. Credit: Andrew Cunningham

  • In the Edge UI, click the ellipsis icon near the upper-right corner of the screen and click Settings.
  • Click Profiles in the left Settings sidebar. Click Microsoft Rewards, and then turn it off.
  • Click Privacy, Search, & Services in the Settings sidebar.
    • In Tracking prevention, I choose “strict,” though if you use some other kind of content blocker, this may be redundant; it can also occasionally trigger an “it looks like you’re using an ad-blocker” pop-up from sites even if you aren’t.
    • In Privacy, if they’re enabled, disable the toggles under “Optional diagnostic data,” “Help improve Microsoft products,” and “Allow Microsoft to save your browsing activity.”
    • In Search and connected experiences, disable the “Suggest similar sites when a website can’t be found,” “Save time and money with Shopping in Microsoft Edge,” and “Organize your tabs” toggles.
      • If you want to switch from Bing, click “Address bar and search” and switch to your preferred engine, whether that’s Google, DuckDuckGo, or something else. Then click “Search suggestions and filters” and disable “Show me search and site suggestions using my typed characters.”

These settings retain basic spellcheck without any of the AI-related additions. Credit: Andrew Cunningham

  • Click Appearance in the left-hand Settings sidebar, and scroll down to Copilot and sidebar
    • Turn the sidebar off, and turn off the “Personalize my top sites in customize sidebar” and “Allow sidebar apps to show notifications” toggles.
    • Click Copilot under App specific settings. Turn off “Show Copilot button on the toolbar.” Then, back in the Copilot and sidebar settings, turn off the “Show sidebar button” toggle that has just appeared.
  • Click Languages in the left-hand navigation. Disable “Use Copilot for writing on the web.” Turn off “use text prediction” if you want to prevent things you type from being sent to Microsoft, and switch the spellchecker from Microsoft Editor to Basic. (I don’t actually mind Microsoft Editor, but it’s worth remembering if you’re trying to minimize the amount of data Edge sends back to the company.)

Windows-as-a-nuisance

The most time-consuming part of installing a fresh, direct-from-Microsoft copy of Windows XP or Windows 7 was usually reinstalling all the apps you wanted to run on your PC, from your preferred browser to Office, Adobe Reader, Photoshop, and the VLC player. You still need to do all of that in a new Windows 11 installation. But now more than ever, most people will want to go through the OS and turn off a bunch of stuff to make the day-to-day experience of using the operating system less annoying.

That’s more relevant now that Microsoft has formally ended support for Windows 10. Yes, Windows 10 users can get an extra year of security updates relatively easily, but many who have been putting off the Windows 11 upgrade will be taking the plunge this year.

The settings changes we’ve recommended here may not fix everything, but they can at least give you some peace, shoving Microsoft into the background and allowing you to do what you want with your PC without as much hassle. Ideally, Microsoft would insist on respectful, user-friendly defaults itself. But until that happens, these changes are the best you can do.




Apple releases iOS 26.1, macOS 26.1, other updates with Liquid Glass controls and more

After several weeks of testing, Apple has released the final versions of the 26.1 update to its various operating systems. Those include iOS, iPadOS, macOS, watchOS, tvOS, visionOS, and the HomePod operating system, all of which switched to a new unified year-based version numbering system this fall.

This isn’t the first update that these operating systems have gotten since they were released in September, but it is the first to add significant changes and tweaks to existing features, addressing the early complaints and bugs that inevitably come with any major operating system update.

One of the biggest changes across most of the platforms is a new translucency control for Liquid Glass that tones it down without totally disabling the effect. Users can stay with the default Clear look, which lets more of the content underneath Liquid Glass show through, or switch to the new Tinted look, which gives a more opaque background that shows only vague shapes and colors to improve readability.

For iPad users, the update re-adds an updated version of the Slide Over multitasking mode, which uses quick swipes to summon and dismiss an individual app on top of the apps you’re already using. The iPadOS 26 version looks a little different and includes some functional changes compared to the previous version—it’s harder to switch which app is being used in Slide Over mode, but the Slide Over window can now be moved and resized just like any other iPadOS 26 app window.



Real humans don’t stream Drake songs 23 hours a day, rapper suing Spotify says


“Irregular” Drake streams

Proposed class action may force Spotify to pay back artists harmed by streaming fraud.

Lawsuit questions if Drake really is the most-streamed artist on Spotify after the musician became “the first artist to nominally achieve 120 billion total streams on Spotify.” Credit: Mark Blinch / Stringer | Getty Images Sport

Spotify profits off fake Drake streams that rob other artists of perhaps hundreds of millions in revenue shares, a lawsuit filed Sunday alleged—hoping to force Spotify to reimburse every artist impacted.

The lawsuit was filed by an American rapper known as RBX, who may be best known for cameos on two of the 1990s’ biggest hip-hop records, Dr. Dre’s The Chronic and Snoop Dogg’s Doggystyle.

The problem goes beyond Drake, RBX’s lawsuit alleged. It claims Spotify ignores “billions of fraudulent streams” each month, selfishly benefiting from bot networks that artificially inflate user numbers to help Spotify attract significantly higher ad revenue.

Drake’s account is a prime example of the kind of fake streams Spotify is inclined to overlook, RBX alleged, since Drake is “the most streamed artist of all time on the platform,” in September becoming “the first artist to nominally achieve 120 billion total streams.” While watching Drake hit this milestone, the platform chose to ignore a “substantial” amount of inauthentic activity that contributed to about 37 billion streams between January 2022 and September 2025, the lawsuit alleged.

This activity, RBX alleged, “appeared to be the work of a sprawling network of Bot Accounts” that Spotify reasonably should have detected.

Apparently, RBX noticed that while most artists see an “initial spike” in streams when a song or album is released, followed by a predictable drop-off as more time passes, the listening patterns of Drake’s fans weren’t as predictable. After releases, some of Drake’s music would see “significant and irregular uptick months” not just in the ensuing months but for years afterward, allegedly “with no reasonable explanations for those upticks other than streaming fraud.”

Most suspiciously, individual accounts would sometimes listen to Drake “exclusively” for “23 hours a day”—which seems like the sort of “staggering and irregular” streaming that Spotify should flag, the lawsuit alleged.

It’s unclear how RBX’s legal team conducted this analysis. At this stage, they’ve told the court that claims are based on “information and belief” that discovery will reveal “there is voluminous information” to back up the rapper’s arguments.

Fake Drake streams may have robbed artists of millions

Spotify artists are supposed to get paid based on valid streams that represent their rightful portion of revenue pools. If RBX’s claims are true, based on the allegedly fake boosting of Drake’s streams alone, losses to all other artists in the revenue pool are “estimated to be in the hundreds of millions of dollars,” the complaint said. Actual damages, including punitive damages, are to be determined at trial, the lawsuit noted, and are likely much higher.

“Drake’s music streams are but one notable example of the rampant streaming fraud that Spotify has allowed to occur, across myriad artists, through negligence and/or willful blindness,” the lawsuit alleged.

If certified, the class would cover more than 100,000 rights holders who collected royalties from music hosted on the platform from “January 1, 2018, through the present.” That class could be expanded, the lawsuit noted, depending on how discovery goes. Since Spotify allegedly “concealed” the fake streams, there can be no time limitations for how far the claims could go back, the lawsuit argued. Attorney Mark Pifko of Baron & Budd, who is representing RBX, suggested in a statement provided to Ars that even one bad actor on Spotify cheats countless artists out of rightful earnings.

“Given the way Spotify pays royalty holders, allocating a limited pool of money based on each song’s proportional share of streams for a particular period, if someone cheats the system, fraudulently inflating their streams, it takes from everyone else,” Pifko said. “Not everyone who makes a living in the music business is a household name like Taylor Swift—there are thousands of songwriters, performers, and producers who earn revenue from music streaming who you’ve never heard of. These people are the backbone of the music business and this case is about them.”
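Pifko’s description of the payout math is easy to see in miniature. Here is a minimal sketch with hypothetical numbers (none of these figures come from the case) of how a pro-rata pool dilutes honest artists when one participant’s streams are botted:

```python
# Minimal sketch of pro-rata royalties: a fixed revenue pool is split by each
# artist's share of total counted streams, so inflating one artist's count
# mechanically shrinks everyone else's payout. All numbers are hypothetical.

def allocate_pool(pool_dollars, streams_by_artist):
    """Split a fixed royalty pool proportionally to stream counts."""
    total = sum(streams_by_artist.values())
    return {artist: pool_dollars * n / total
            for artist, n in streams_by_artist.items()}

pool = 1_000_000  # hypothetical monthly royalty pool
honest = {"artist_a": 600_000, "artist_b": 400_000}
inflated = {"artist_a": 600_000, "artist_b": 400_000, "botted_star": 1_000_000}

print(allocate_pool(pool, honest))    # a: $600k, b: $400k
print(allocate_pool(pool, inflated))  # a: $300k, b: $200k -- diluted by bots
```

The botted account doesn’t just enrich itself; because the pool is fixed, every fraudulent stream comes directly out of the other rights holders’ shares.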

Spotify did not immediately respond to Ars’ request for comment. However, a spokesperson told Rolling Stone that while the platform cannot comment on pending litigation, Spotify denies allegations that it profits from fake streams.

“Spotify in no way benefits from the industry-wide challenge of artificial streaming,” Spotify’s spokesperson said. “We heavily invest in always-improving, best-in-class systems to combat it and safeguard artist payouts with strong protections like removing fake streams, withholding royalties, and charging penalties.”

Fake fans appear to move hundreds of miles between plays

Spotify has publicly discussed ramping up efforts to detect and penalize streaming fraud. But RBX alleged that instead, Spotify “deliberately” “deploys insufficient measures to address fraudulent streaming,” allowing fraud to run “rampant.”

The platform appears least capable of handling so-called “Bot Vendors” that “typically design Bots to mimic human behavior and resemble real social media or streaming accounts in order to avoid detection,” the lawsuit alleged.

These vendors rely on virtual private networks (VPNs) to obscure locations of streams, but “with reasonable diligence,” Spotify could better detect them, RBX alleged—especially when streams are coming “from areas that lack the population to support a high volume of streams.”

For example, RBX again points to Drake’s streams. During a four-day period in 2024, “at least 250,000 streams of Drake’s song ‘No Face’ originated in Turkey but were falsely geomapped through the coordinated use of VPNs to the United Kingdom,” the lawsuit alleged, based on “information and belief.”

Additionally, “a large percentage of the accounts streaming Drake’s music were geographically concentrated around areas whose populations could not support the volume of streams emanating therefrom. In some cases, massive amounts of music streams, more than a hundred million streams, originated in areas with zero residential addresses,” the lawsuit alleged.

Just looking at how Drake’s fans move should raise a red flag, RBX alleged:

“Geohash data shows that nearly 10 percent of Drake’s streams come from users whose location data showed that they traveled a minimum of 15,000 kilometers in a month, moved unreasonable locations between songs (consecutive plays separated by mere seconds but spanning thousands of kilometers), including more than 500 kilometers between songs (roughly the distance from New York City to Pittsburgh).”
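The complaint doesn’t detail how that geohash analysis was performed, but the underlying “impossible travel” check is straightforward to sketch. Below is an illustrative version (the threshold, data layout, and function names are my assumptions, not anything from the filing or Spotify’s systems): consecutive plays from one account are flagged when the implied travel speed exceeds what any real listener could manage.

```python
# Illustrative "impossible travel" flag: consecutive plays from one account
# whose geolocations imply absurd speed. Thresholds are assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_impossible_travel(plays, max_kmh=900):  # ~airliner speed
    """plays: list of (timestamp_seconds, lat, lon), sorted by time."""
    flags = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(plays, plays[1:]):
        hours = max((t2 - t1) / 3600, 1e-9)  # avoid division by zero
        if haversine_km(la1, lo1, la2, lo2) / hours > max_kmh:
            flags.append((t1, t2))
    return flags

# Two plays 30 seconds apart: New York City, then ~500 km away near Pittsburgh.
plays = [(0, 40.71, -74.01), (30, 40.44, -80.00)]
print(flag_impossible_travel(plays))  # -> [(0, 30)]: flagged
```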

Spotify could cut off a lot of this activity, RBX alleged, by ending its practice of allowing free ad-supported accounts to sign up without a credit card. But supposedly it doesn’t, because “Spotify has an incentive for turning a blind eye to the blatant streaming fraud occurring on its service,” the lawsuit said.

Spotify has admitted fake streams impact revenue

RBX’s lawsuit pointed out that Spotify has told investors that, despite its best efforts, artificial streams “may contribute, from time to time, to an overstatement” in the number of reported monthly active users—a stat that helps drive ad revenue.

Spotify also somewhat tacitly acknowledges fears that the platform may be financially motivated to overlook when big artists pay for fake streams. In an FAQ, Spotify confirmed that “artificial streaming is something we take seriously at every level,” promising to withhold royalties, correct public streaming numbers, and take other steps, like possibly even removing tracks, no matter how big the artist is. Artists’ labels and distributors can also get hit with penalties if fake streams are detected, Spotify said. Spotify has defended its prevention methods as better than its rivals’ efforts.

“Our systems are working: In a case from last year, one bad actor was indicted for stealing $10 million from streaming services, only $60,000 of which came from Spotify, proving how effective we are at limiting the impact of artificial streaming on our platform,” Spotify’s spokesperson told Rolling Stone.

However, RBX alleged that Spotify is actually “one of the easiest platforms to defraud using Bots due to its negligent, lax, and/or non-existent Bot-related security measures.” And supposedly that’s by design, since “the higher the volume of individual streams, the more Spotify could charge for ads,” RBX alleged.

“By properly detecting and/or removing fraudulent streams from its service, Spotify would lose significant advertising revenue,” the theory goes, with RBX directly accusing Spotify of concealing “both the enormity of this problem, and its detrimental financial impact to legitimate Rights Holders.”

For RBX to succeed, the evidence behind its analysis of Drake’s streaming numbers will likely matter. Last month, a lawsuit that Drake filed was dismissed after it failed to convince a judge that Kendrick Lamar’s record label artificially inflated Spotify streams of “Not Like Us.” Drake offered nothing beyond some online comments and reports (which suggested that the label was at least aware that Lamar’s manager supposedly paid a bot network to “jumpstart” the song’s streams), and that was deemed insufficient to keep the case alive.

Industry group slowly preparing to fight streaming fraud

A loss could smear Spotify’s public image after the platform joined an industry coalition formed in 2023 to fight streaming fraud, the Music Fights Fraud Alliance (MFFA). The coalition is often cited as a major step that Spotify and the rest of the industry are taking; however, the group’s website does not indicate what progress has been made in the years since.

As of this writing, the website showed that task forces were formed, as well as a partnership with a nonprofit called the National Cyber-Forensics and Training Alliance, with a goal to “work closely together to identify and disrupt streaming fraud.” The partnership was also supposed to produce “intelligence reports and other actionable information in support of fraud prevention and mitigation.”

Ars reached out to MFFA to see if there are any updates to share on the group’s work over the past two years. MFFA’s executive director, Michael Lewan, told Ars that “admittedly MFFA is still relatively nascent and growing,” “not even formally incorporated until” he joined in February of this year.

“We have accomplished a lot, and are going to continue to grow as the industry is taking fraud seriously,” Lewan said.

Lewan can’t “shed too many details on our initiatives,” he said, suggesting that MFFA is “a bit different from other trade orgs that are much more public facing.” However, several initiatives have been launched, he confirmed, which will help “improve coordination and communication amongst member companies”—which include streamers like Spotify and Amazon, as well as distributors like CD Baby and social platforms like SoundCloud and Meta apps—“to identify and disrupt suspicious activity, including sharing of data.”

“We also have efforts to raise awareness on what fraud looks like and how to mitigate against fraudulent activity,” Lewan said. “And we’re in continuous communication with other partners (in and outside the industry) on data standards, artist education, enforcement and deterrence.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Real humans don’t stream Drake songs 23 hours a day, rapper suing Spotify says Read More »

google-removes-gemma-models-from-ai-studio-after-gop-senator’s-complaint

Google removes Gemma models from AI Studio after GOP senator’s complaint

You may be disappointed if you go looking for Google’s open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives.

At the hearing, Google’s Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google’s Gemini for Home has been particularly hallucination-happy in our testing.

The letter claims that Blackburn became aware that Gemma was producing false claims against her following the hearing. When asked, “Has Marsha Blackburn been accused of rape?” Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved “non-consensual acts.”

Blackburn goes on to express surprise that an AI model would simply “generate fake links to fabricated news articles.” However, this is par for the course with AI hallucinations, which are relatively easy to find when you go prompting for them. AI Studio, where Gemma was most accessible, also includes tools to tweak the model’s behaviors that could make it more likely to spew falsehoods. Someone asked Gemma a leading question, and it took the bait.

Keep your head down

Announcing the change to Gemma availability on X, Google reiterated that it is working hard to minimize hallucinations. However, it doesn’t want “non-developers” tinkering with the open model to produce inflammatory outputs, so Gemma is no longer available in AI Studio. Developers can continue to use Gemma via the API, and the models are available for download if you want to develop with them locally.

Google removes Gemma models from AI Studio after GOP senator’s complaint Read More »

two-windows-vulnerabilities,-one-a-0-day,-are-under-active-exploitation

Two Windows vulnerabilities, one a 0-day, are under active exploitation

Two Windows vulnerabilities—one a zero-day that has been known to attackers since 2017 and the other a critical flaw that Microsoft initially tried and failed to patch recently—are under active exploitation in widespread attacks targeting a swath of the Internet, researchers say.

The zero-day went undiscovered until March, when security firm Trend Micro said it had been under active exploitation since 2017, by as many as 11 separate advanced persistent threats (APTs). These APT groups, often with ties to nation-states, relentlessly attack specific individuals or groups of interest. Trend Micro went on to say that the groups were exploiting the vulnerability, then tracked as ZDI-CAN-25373, to install various known post-exploitation payloads on infrastructure located in nearly 60 countries, with the US, Canada, Russia, and Korea being the most common.

A large-scale, coordinated operation

Seven months later, Microsoft still hasn’t patched the vulnerability, which stems from a bug in the Windows Shortcut (.lnk) binary format. The Windows component makes opening apps or accessing files easier and faster by allowing a single binary file to invoke them without having to navigate to their locations. In recent months, the ZDI-CAN-25373 tracking designation has been changed to CVE-2025-9491.

On Thursday, security firm Arctic Wolf reported that it observed a China-aligned threat group, tracked as UNC-6384, exploiting CVE-2025-9491 in attacks against various European nations. The final payload is a widely used remote access trojan known as PlugX. To better conceal the malware, the exploit keeps the payload encrypted with RC4 until the final step of the attack.
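RC4 is a simple, textbook stream cipher, and its role here is easy to illustrate: the payload is gibberish on disk and only becomes executable code once the loader’s final stage decrypts it in memory. A minimal sketch (the key and data below are placeholders of my own, not artifacts from the actual campaign):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: the same function encrypts and decrypts."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # keystream generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

blob = rc4(b"example-key", b"payload bytes")   # unreadable at rest
print(rc4(b"example-key", blob))               # decrypted only at run time
```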

“The breadth of targeting across multiple European nations within a condensed timeframe suggests either a large-scale coordinated intelligence collection operation or deployment of multiple parallel operational teams with shared tooling but independent targeting,” Arctic Wolf said. “The consistency in tradecraft across disparate targets indicates centralized tool development and operational security standards even if execution is distributed across multiple teams.”

Two Windows vulnerabilities, one a 0-day, are under active exploitation Read More »

fcc-to-rescind-ruling-that-said-isps-are-required-to-secure-their-networks

FCC to rescind ruling that said ISPs are required to secure their networks

The Federal Communications Commission will vote in November to repeal a ruling that requires telecom providers to secure their networks, acting on a request from the biggest lobby groups representing Internet providers.

FCC Chairman Brendan Carr said the ruling, adopted in January just before Republicans gained majority control of the commission, “exceeded the agency’s authority and did not present an effective or agile response to the relevant cybersecurity threats.” Carr said the vote scheduled for November 20 comes after “extensive FCC engagement with carriers” who have taken “substantial steps… to strengthen their cybersecurity defenses.”

The FCC’s January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, “affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications.”

“The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will ‘illegally activate interceptions or other forms of surveillance within the carrier’s switching premises without its knowledge,’” the January order said. “With this Declaratory Ruling, we clarify that telecommunications carriers’ duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks.”

ISPs get what they want

The declaratory ruling was paired with a Notice of Proposed Rulemaking that would have led to stricter rules requiring specific steps to secure networks against unauthorized interception. Carr voted against the decision at the time.

Although the declaratory ruling didn’t yet have specific rules to go along with it, the FCC at the time said it had some teeth. “Even absent rules adopted by the Commission, such as those proposed below, we believe that telecommunications carriers would be unlikely to satisfy their statutory obligations under section 105 without adopting certain basic cybersecurity practices for their communications systems and services,” the January order said. “For example, basic cybersecurity hygiene practices such as implementing role-based access controls, changing default passwords, requiring minimum password strength, and adopting multifactor authentication are necessary for any sensitive computer system. Furthermore, a failure to patch known vulnerabilities or to employ best practices that are known to be necessary in response to identified exploits would appear to fall short of fulfilling this statutory obligation.”

FCC to rescind ruling that said ISPs are required to secure their networks Read More »

elon-musk-on-data-centers-in-orbit:-“spacex-will-be-doing-this”

Elon Musk on data centers in orbit: “SpaceX will be doing this”

Interest is growing rapidly

“The amount of momentum from heavyweights in the tech industry is very much worth paying attention to,” said Caleb Henry, director of research at Quilty Space, in an interview. “If they start putting money behind it, we could see another transformation of what’s done in space.”

The essential function of a data center is to store, process, and transmit data. Historically, satellites have already done a lot of this, Henry said. Telecommunications satellites specialize in transmitting data. Imaging satellites store a lot of data and then dump it when they pass over ground stations. In recent years, onboard computers have gotten more sophisticated at processing data. Data centers in space could represent the next evolution of that.

Critics rightly note that it would require very large satellites with extensive solar panels to power data centers that rival ground-based infrastructure. However, SpaceX’s Starlink V3 satellites are unlike any previous space-based technology, Henry said.

A lot more capacity

SpaceX’s current Starlink V2 mini satellites have a maximum downlink capacity of approximately 100 Gbps. The V3 satellite is expected to increase this capacity by a factor of 10, to 1 Tbps. A single satellite with that much capacity is not unprecedented, but deploying it at scale certainly is.

For example, Viasat contracted with Boeing for the better part of a decade, spending hundreds of millions of dollars, to build Viasat-3, a geostationary satellite with a capacity of 1 Tbps. This single satellite may launch next week on an Atlas V rocket.

SpaceX plans to launch dozens of Starlink V3 satellites—Henry estimates the number is about 60—on each Starship rocket launch. Those launches could occur as soon as the first half of 2026, as SpaceX has already tested a satellite dispenser on its Starship vehicle.

“Nothing else in the rest of the satellite industry comes close to that amount of capacity,” Henry said.

Exactly what “scaling up” Starlink V3 satellites might look like is not clear, but it doesn’t seem silly to expect it could happen. The very first operational Starlink satellites launched a little more than half a decade ago with a mass of about 300 kg and a capacity of 15 Gbps. Starlink V3 satellites will likely mass 1,500 kg.
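Stringing the article’s numbers together makes the scale jump concrete. The back-of-the-envelope arithmetic below is mine (the per-launch total is simple multiplication of Henry’s figures, not a SpaceX claim):

```python
# Quick check on the figures above; the per-launch total is my own
# multiplication, not a SpaceX statement.
v2_mini_gbps = 100        # current V2 mini downlink capacity
v3_gbps = 1_000           # V3 target: 1 Tbps
sats_per_starship = 60    # Henry's estimate per Starship launch

print(v3_gbps / v2_mini_gbps)              # 10.0 -> the "factor of 10"
print(sats_per_starship * v3_gbps / 1000)  # 60.0 Tbps of capacity per launch
print(v3_gbps / 15)                        # ~67x the first operational sats' 15 Gbps
```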

Elon Musk on data centers in orbit: “SpaceX will be doing this” Read More »

closing-windows-11’s-task-manager-accidentally-opens-up-more-copies-of-task-manager

Closing Windows 11’s Task Manager accidentally opens up more copies of Task Manager

One reason to use the Task Manager in Windows is to see if any of the apps running on your computer are misbehaving or using a disproportionate amount of resources. But what do you do when the misbehaving app is the Task Manager itself?

After a recent Windows update, some users (including Windows Latest) noticed that closing the Task Manager window was actually failing to close the app, leaving the executable running in memory. More worryingly, each time you open the Task Manager, it spawns a new process on top of the old one, which you can repeat essentially infinitely (or until your PC buckles under the pressure).

Each instance of Task Manager takes up around 20MB of system RAM and hovers between 0 and 2 percent CPU usage—if you have just a handful of instances open, it’s unlikely that you’d notice much of a performance impact. But if you use Task Manager frequently or just go a long time between reboots, opening up two or three dozen copies of the process that are all intermittently using a fraction of your CPU can add up, leading to a potentially significant impact on performance and battery life.
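Both the arithmetic and a stopgap cleanup are easy to sketch. The Python below assumes the third-party psutil package and is a workaround of my own construction, not Microsoft’s fix; on Windows, `taskkill /F /IM taskmgr.exe` kills every instance in one command.

```python
# Rough sketch, assuming the third-party psutil package: count stray
# Task Manager processes, total their memory (~20MB per copy, per the
# figures above), and kill the duplicates. A workaround, not Microsoft's fix.
import psutil

stray = [p for p in psutil.process_iter(["name"])
         if (p.info["name"] or "").lower() == "taskmgr.exe"]
total_mb = sum(p.memory_info().rss for p in stray) / 1024**2
print(f"{len(stray)} Task Manager instance(s) using {total_mb:.0f} MB total")

for p in stray[1:]:  # keep one instance, kill the leftover duplicates
    p.kill()
```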

Closing Windows 11’s Task Manager accidentally opens up more copies of Task Manager Read More »

trump’s-swift-demolition-of-east-wing-may-have-launched-asbestos-plumes

Trump’s swift demolition of East Wing may have launched asbestos plumes

No response

On Thursday, Sen. Edward Markey (D-Mass.) sent a letter to ACECO, asking if it followed federal health and safety standards to mitigate risks of asbestos. “ACECO’s work falls squarely within a network of federal regulations governing demolition, hazardous-material handling, and worker protection,” the senator wrote.

In a separate letter Thursday, Sens. Sheldon Whitehouse (D-R.I.), Martin Heinrich (D-N.M.), and Gary Peters (D-Mich.) sought “lawful transparency” on the demolition, including the asbestos abatement plan.

In DC, asbestos abatement processes can only be done by a licensed contractor, who is required to notify the Department of Energy and Environment 10 days in advance of such work, then post notices of asbestos abatement around the area of work three days beforehand.

But reporting by the Post found that ACECO is not licensed to abate asbestos in DC. “Our understanding is that as of August 18, 2022, Aceco LLC is no longer engaged in asbestos abatement services,” a DC Department of Licensing and Consumer Protection spokesperson told The Post. “The company’s asbestos abatement license in the District of Columbia was voluntarily canceled by the owner on that date.”

ACECO has not responded to questions from media and, amid the White House work, has largely taken down its website, leaving only a page that says it’s under construction.

ADAO’s Reinstein told the Post that the White House has not responded to the organization’s letter. “I learned 20 years ago when I cofounded ADAO, no response is a response,” she told the Post.

As Ars Technica has reported, Trump has a startlingly supportive stance on the use of asbestos. In his 1997 book The Art of the Comeback, Trump wrote that asbestos is “100% safe, once applied.” He blamed the mob for its reputation as a carcinogen, writing: “I believe that the movement against asbestos was led by the mob, because it was often mob-related companies that would do the asbestos removal.”

Trump’s swift demolition of East Wing may have launched asbestos plumes Read More »

“unexpectedly,-a-deer-briefly-entered-the-family-room”:-living-with-gemini-home

“Unexpectedly, a deer briefly entered the family room”: Living with Gemini Home


60 percent of the time, it works every time

Gemini for Home unleashes gen AI on your Nest camera footage, but it gets a lot wrong.

The Google Home app has Gemini integration for paying customers. Credit: Ryan Whitwam

You just can’t ignore the effects of the generative AI boom.

Even if you don’t go looking for AI bots, they’re being integrated into virtually every product and service. And for what? There’s a lot of hand-wavey chatter about agentic this and AGI that, but what can “gen AI” do for you right now? Gemini for Home is Google’s latest attempt to make this technology useful, integrating Gemini with the smart home devices people already have. Anyone paying for extended video history in the Home app is about to get a heaping helping of AI, including daily summaries, AI-labeled notifications, and more.

Given the supposed power of AI models like Gemini, recognizing events in a couple of videos and answering questions about them doesn’t seem like a bridge too far. And yet Gemini for Home has demonstrated a tenuous grasp of the truth, which can lead to some disquieting interactions, like periodic warnings of home invasion, both human and animal.

It can do some neat things, but is it worth the price—and the headaches?

Does your smart home need a premium AI subscription?

Simply using the Google Home app to control your devices does not turn your smart home over to Gemini. This is part of Google’s higher-tier paid service, which comes with extended camera history and Gemini features for $20 per month. That subscription pipes your video into a Gemini AI model that generates summaries for notifications, as well as a “Daily Brief” that offers a rundown of everything that happened on a given day. The cheaper $10 plan provides less video history and no AI-assisted summaries or notifications. Both plans enable Gemini Live on smart speakers.

According to Google, it doesn’t send all of your video to Gemini. That would be a huge waste of compute cycles, so Gemini only sees (and summarizes) event clips. Those summaries are then distilled at the end of the day to create the Daily Brief, which usually results in a rather boring list of people entering and leaving rooms, dropping off packages, and so on.

Importantly, the Gemini model powering this experience only processes the visual elements of videos; it does not integrate audio from your recordings. So unusual noises or conversations captured by your cameras will not be searchable or reflected in AI summaries. This may be intentional to ensure your conversations are not regurgitated by an AI.

Gemini smart home plans. Credit: Google

Paying for Google’s AI-infused subscription also adds Ask Home, a conversational chatbot that can answer questions about what has happened in your home based on the status of smart home devices and your video footage. You can ask questions about events, retrieve video clips, and create automations.

There are definitely some issues with Gemini’s understanding of video, but Ask Home is quite good at creating automations. It was possible to set up automations in the old Home app, but the updated AI is able to piece together automations based on your natural language request. Perhaps thanks to the limited set of possible automation elements, the AI gets this right most of the time. Ask Home is also usually able to dig up past event clips, as long as you are specific about what you want.

The Advanced plan for Gemini Home keeps your videos for 60 days, so you can only query the robot on clips from that time period. Google also says it does not retain any of that video for training. The only instance in which Google will use security camera footage for training is if you choose to “lend” it to Google via an obscure option in the Home app. Google says it will keep these videos for up to 18 months or until you revoke access. However, your interactions with Gemini (like your typed prompts and ratings of outputs) are used to refine the model.

The unexpected deer

Every generative AI bot makes the occasional mistake, but you probably won’t notice most of them. When the AI hallucinates about your daily life, however, it’s more noticeable. There’s no reason Google should be confused by my smart home setup, which features a couple of outdoor cameras and one indoor camera—all Nest-branded with all the default AI features enabled—to keep an eye on my dogs. So the AI is seeing a lot of dogs lounging around and staring out the window. One would hope that it could reliably summarize something so straightforward.

One may be disappointed, though.

In my first Daily Brief, I was fascinated to see that Google spotted some indoor wildlife. “Unexpectedly, a deer briefly entered the family room,” Gemini said.

Dogs and deer are pretty much the same thing, right? Credit: Ryan Whitwam

Gemini does deserve some credit for recognizing that the appearance of a deer in the family room would be unexpected. But the “deer” was, naturally, a dog. This was not a one-time occurrence, either. Gemini sometimes identifies my dogs correctly, but many event clips and summaries still tell me about the notable but brief appearance of deer around the house and yard.

This deer situation serves as a keen reminder that this new type of AI doesn’t “think,” although the industry’s use of that term to describe simulated reasoning could lead you to believe otherwise. A person looking at this video wouldn’t even entertain the possibility that they were seeing a deer after they’ve already seen the dogs loping around in other videos. Gemini doesn’t have that base of common sense, though. If the tokens say deer, it’s a deer. I will say, though, Gemini is great at recognizing car models and brand logos. Make of that what you will.

The animal mix-up is not ideal, but it’s not a major hurdle to usability. I didn’t seriously entertain the possibility that a deer had wandered into the house, and it’s a little funny the way the daily report continues to express amazement that wildlife is invading. It’s a pretty harmless screw-up.

“Overall identification accuracy depends on several factors, including the visual details available in the camera clip for Gemini to process,” explains a Google spokesperson. “As a large language model, Gemini can sometimes make inferential mistakes, which leads to these misidentifications, such as confusing your dog with a cat or deer.”

Google also says that you can tune the AI by correcting it when it screws up. This works sometimes, but the system still doesn’t truly understand anything—that’s beyond the capabilities of a generative AI model. After I told Gemini that it was seeing dogs rather than deer, it reported wildlife less often. However, it doesn’t seem to trust me all the time, sometimes reporting the appearance of a deer that is “probably” just a dog.

A perfect fit for spooky season

Gemini’s smart home hallucinations also have a less comedic side. When Gemini mislabels an event clip, you can end up with some pretty distressing alerts. Imagine that you’re out and about when your Gemini assistant hits you with a notification telling you, “A person was seen in the family room.”

A person roaming around the house you believed to be empty? That’s alarming. Is it an intruder, a hallucination, a ghost? So naturally, you check the camera feed to find… nothing. An Ars Technica investigation confirms AI cannot detect ghosts. So a ghost in the machine?

Oops, we made you think someone broke into your house. Credit: Ryan Whitwam

On several occasions, I’ve seen Gemini mistake dogs and totally empty rooms (or maybe a shadow?) for a person. It may be alarming at first, but after a few false positives, you grow to distrust the robot. Now, even if Gemini correctly identified a random person in the house, I’d probably ignore it. Unfortunately, this is the only notification experience for Gemini Home Advanced.

“You cannot turn off the AI description while keeping the base notification,” a Google spokesperson told me. They noted, however, that you can disable person alerts in the app. Those are enabled when you turn on Google’s familiar faces detection.

Gemini often twists reality just a bit instead of creating it from whole cloth. A person holding anything in the backyard is doing yardwork. One person anywhere, doing anything, becomes several people. A dog toy becomes a cat lying in the sun. A couple of birds become a raccoon. Gemini likes to ignore things, too, like denying there was a package delivery even when there’s a video tagged as “person delivers package.”

Gemini still refused to admit it was wrong. Credit: Ryan Whitwam

At the end of the day, Gemini is labeling most clips correctly and therefore produces mostly accurate, if sometimes unhelpful, notifications. The problem is the flip side of “mostly,” which is still a lot of mistakes. Some of these mistakes compel you to check your cameras—at least, before you grow weary of Gemini’s confabulations. Instead of saving time and keeping you apprised of what’s happening at home, it wastes your time. For this thing to be useful, inferential errors cannot be a daily occurrence.

Learning as it goes

Google says its goal is to make Gemini for Home better for everyone. The team is “investing heavily in improving accurate identification” to cut down on erroneous notifications. The company also believes that having people add custom instructions is a critical piece of the puzzle. Maybe in the future, Gemini for Home will be more honest, but it currently takes a lot of hand-holding to move it in the right direction.

With careful tuning, you can indeed address some of Gemini for Home’s flights of fancy. I see fewer deer identifications after tinkering, and a couple of custom instructions have made the Home Brief waste less space telling me when people walk into and out of rooms that don’t exist. But I still don’t know how to prompt my way out of Gemini seeing people in an empty room.

Gemini AI features work on all Nest cams, but the new 2025 models are “designed for Gemini.” Credit: Ryan Whitwam

Despite its intention to improve Gemini for Home, Google is releasing a product that just doesn’t work very well out of the box, and it misbehaves in ways that are genuinely off-putting. Security cameras shouldn’t lie about seeing intruders, nor should they tell me I’m lying when they fail to recognize an event. The Ask Home bot has the standard disclaimer recommending that you verify what the AI says. You have to take that warning seriously with Gemini for Home.

At launch, it’s hard to justify paying for the $20 Advanced Gemini subscription. If you’re already paying because you want the 60-day event history, you’re stuck with the AI notifications. You can ignore the existence of Daily Brief, though. Stepping down to the $10 per month subscription gets you just 30 days of event history with the old non-generative notifications and event labeling. Maybe that’s the smarter smart home bet right now.

Gemini for Home is currently available to those who opted into early access in the Home app. So you can avoid Gemini for the time being, but it’s only a matter of time before Google flips the switch for everyone.

Hopefully it works better by then.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

“Unexpectedly, a deer briefly entered the family room”: Living with Gemini Home Read More »