
Microsoft’s new 10,000-year data storage medium: glass


Femtosecond lasers etch data into a very stable medium.

Right now, Silica hardware isn’t quite ready for commercialization. Credit: Microsoft Research

Archival storage poses lots of challenges. We want media that is extremely dense and stable for centuries or more, and, ideally, doesn’t consume any energy when not being accessed. Lots of ideas have floated around—even DNA has been considered—but one of the simplest is to etch data into glass. Many forms of glass are very physically and chemically stable, and it’s relatively easy to etch things into it.

There’s been a lot of preliminary work demonstrating different aspects of a glass-based storage system. But in Wednesday’s issue of Nature, Microsoft Research announced Project Silica, a working demonstration of a system that can write data into small slabs of glass and read it back, with a density of over a gigabit per cubic millimeter.

Writing on glass

We tend to think of glass as fragile, prone to shattering, and capable of flowing downward over centuries, although the last claim is a myth. Glass is a category of material, and a variety of chemicals can form glasses. With the right starting chemical, it’s possible to make a glass that is, as the researchers put it, “thermally and chemically stable and is resistant to moisture ingress, temperature fluctuations and electromagnetic interference.” While it would still need to be handled in a way to minimize damage, glass provides the sort of stability we’d want for long-term storage.

Putting data into glass is as simple as etching it. But that has been one of the challenges, as etching is typically a slow process. However, femtosecond lasers—lasers that emit pulses lasting only 10⁻¹⁵ seconds and can fire millions of them per second—can significantly cut down write times and allow etching to be focused on a very small area, increasing potential data density.

To read the data back, there are several options. We’ve already had great success using lasers to read data from optical disks, albeit slowly. But anything that can pick up the small features etched into the glass could conceivably work.

With the above considerations in mind, everything was in place on a theoretical level for Project Silica. The big question was how to put the pieces together into a functional system. Microsoft decided that, just to be cautious, it would answer that question twice.

A real-world system

The difference between these two answers comes down to how an individual unit of data (called a voxel) is written to the glass. One type of voxel they tried was based on birefringence, where refraction of photons depends on their polarization. It’s possible to etch voxels into glass to create birefringence using polarized laser light, producing features smaller than the diffraction limit. In practice, this involved using one laser pulse to create an oval-shaped void, followed by a second, polarized pulse to induce birefringence. The identity of a voxel is based on the orientation of the oval; since we can resolve multiple orientations, it’s possible to save more than one bit in each voxel (four distinguishable orientations, for instance, encode two bits).

The alternative approach involves changing the magnitude of refractive effects by varying the amount of energy in the laser pulse. Again, it’s possible to discern more than two states in these voxels, allowing multiple data bits to be stored in each voxel.

The map data from Microsoft Flight Simulator etched onto the Silica storage medium. Credit: Microsoft Research

Reading these in Silica involves using a microscope that can pick up differences in refractive index. (For microscopy geeks, this is a way of saying “they used phase contrast microscopy.”) The microscopy sets the limit on how many layers of voxels can be placed in a single piece of glass. During etching, the layers were separated by enough distance that only a single layer would be in the microscope’s plane of focus at a time. The etching process also incorporates symbols that allow the automated microscope system to position the lens above specific points on the glass. From there, the system slowly changes its focal plane, moving through the stack and capturing images that include different layers of voxels.

To interpret these microscope images, Microsoft used a convolutional neural network that combines data from images that are both in and near the plane of focus for a given layer of voxels. This is effective because the influence of nearby voxels changes how a given voxel appears in a subtle way that the AI system can pick up on if given enough training data.
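For the curious, here’s a minimal sketch of that idea in Python, using PyTorch; the layer sizes, plane count, and symbol count are placeholder assumptions, not details of Microsoft’s actual network:

import torch
import torch.nn as nn

# Feed several focal planes (the in-focus image plus its neighbors)
# to the network as input channels, then emit per-pixel scores over
# the possible voxel symbols.
N_PLANES, N_SYMBOLS = 5, 8

model = nn.Sequential(
    nn.Conv2d(N_PLANES, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, N_SYMBOLS, kernel_size=1),  # per-pixel symbol logits
)

stack = torch.randn(1, N_PLANES, 64, 64)  # one 64x64 focal stack
symbols = model(stack).argmax(dim=1)      # most likely symbol at each pixel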

The final piece of the puzzle is data encoding. The Silica system takes the raw bitstream of the data it’s storing and adds error correction using a low-density parity-check code (the same error correction used in 5G networks). Neighboring bits are then combined to create symbols that take advantage of the voxels’ ability to store more than one bit. Once a stream of symbols is made, it’s ready to be written to glass.
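To make the bits-to-symbols step concrete, here’s a minimal Python sketch. It assumes eight distinguishable voxel states (three bits per voxel) and elides the LDPC stage entirely; it illustrates the packing idea rather than Microsoft’s implementation:

def bits_to_symbols(bits, bits_per_voxel=3):
    # Pack the (already error-corrected) bitstream into integer
    # symbols, one per voxel, padding so it divides evenly.
    pad = (-len(bits)) % bits_per_voxel
    bits = list(bits) + [0] * pad
    symbols = []
    for i in range(0, len(bits), bits_per_voxel):
        value = 0
        for b in bits[i:i + bits_per_voxel]:
            value = (value << 1) | b
        symbols.append(value)
    return symbols

print(bits_to_symbols([1, 0, 1, 1, 1, 0]))  # [5, 6]: two voxels' worth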

Performance

Writing remains a bottleneck in the system, so Microsoft developed hardware that can write a single glass slab with four lasers simultaneously without generating too much heat. That is enough to enable writing at 66 megabits per second, and the team behind the work thinks that it would be possible to add up to a dozen additional lasers. That may be needed, given that it’s possible to store up to 4.84TB in a single slab of glass (the slabs are 12 cm x 12 cm and 0.2 cm thick). That works out to over 150 hours to fully write a slab at the current rate.
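That estimate follows from simple arithmetic, treating 4.84TB as 4.84 × 10¹² bytes:

# Back-of-the-envelope check on the time to fill one slab.
capacity_bits = 4.84e12 * 8   # slab capacity in bits
rate_bits_per_s = 66e6        # four-laser write speed, 66 Mb/s
print(capacity_bits / rate_bits_per_s / 3600)  # ~163 hours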

The “up to” aspect of the storage system has to do with the density of data that’s possible with the two different ways of writing data. The method that relies on birefringence requires more optical hardware and only works in high-quality glasses, but can squeeze more voxels into the same volume, and so has a considerably higher data density. The alternative approach can only put a bit over two terabytes into the same slab of glass, but can be done with simpler hardware and can work on any sort of transparent material.

Borosilicate glass offers extreme stability; Microsoft’s accelerated aging experiments suggest the data would be stable for over 10,000 years at room temperature. That led Microsoft to declare, “Our results demonstrate that Silica could become the archival storage solution for the digital age.”

That may be overselling it just a bit. The Square Kilometer Array telescope, for example, is expected to need to archive 700 petabytes of data each year. That would mean over 140,000 glass slabs would be needed to store the data from this one telescope. Even assuming that the write speed could be boosted by adding significantly more lasers, you’d need over 600 Silica machines operating in parallel to keep up. And the Square Kilometer Array is far from the only project generating enormous amounts of data.
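The rough math, assuming best-case slab capacity and reading “significantly more lasers” as a fourfold boost over the current four-laser rate:

SLAB_BYTES = 4.84e12        # best-case capacity per slab
YEARLY_BYTES = 700e15       # ~700 PB of telescope data per year
print(YEARLY_BYTES / SLAB_BYTES)  # ~144,628 slabs per year

boosted_rate = 4 * 66e6     # bits per second, assuming a 4x speedup
seconds_per_year = 365 * 24 * 3600
print(YEARLY_BYTES * 8 / (boosted_rate * seconds_per_year))  # ~673 machines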

That said, there are some features that make Silica a great match for this sort of thing, most notably the complete absence of energy needed to preserve the data, and the fact that it can be retrieved rapidly if needed (a sharp contrast to the days needed to retrieve information from DNA, for example). Plus, I’m admittedly drawn to a system with a storage medium that looks like something right out of science fiction.

Nature, 2026. DOI: 10.1038/s41586-025-10042-w (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Windows’ original Secure Boot certificates expire in June—here’s what you need to do

The second thing to check is the “default db,” which shows whether the new Secure Boot certificates are baked into your PC’s firmware. If they are, even resetting Secure Boot settings to the defaults in your PC’s BIOS will still allow you to boot operating systems that use the new certificates.

To check this, open PowerShell or Terminal again and type ([System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI dbdefault).bytes) -match 'Windows UEFI CA 2023'). If this command returns “true,” your system is running an updated BIOS with the new Secure Boot certificates built in. Older PCs and systems without a BIOS update installed will return “false” here.

Microsoft’s Costa says that “many newer PCs built since 2024, and almost all the devices shipped in 2025, already include the certificates” and won’t need to be updated at all. And PCs several years older than that may be able to get the certificates via a BIOS update.

In the US, Dell, HP, Lenovo, and Microsoft all have lists of specific systems and firmware versions, while Asus provides more general information about how to get the new certificates via Windows Update, the MyAsus app, or the Asus website. The oldest of the PCs listed generally date back to 2019 or 2020. If your PC shipped with Windows 11 out of the box, there should be a BIOS update with the new certificates available, though that may not be true of every system that meets the requirements for upgrading to Windows 11.

Microsoft encourages home users who can’t install the new certificates to use its customer support services for help. Detailed documentation is also available for IT shops and other large organizations that manage their own updates.

“The Secure Boot certificate update marks a generational refresh of the trust foundation that modern PCs rely on at startup,” writes Costa. “By renewing these certificates, the Windows ecosystem is ensuring that future innovations in hardware, firmware, and operating systems can continue to build on a secure, industry‐aligned boot process.”


Why $700 could be a “death sentence” for the Steam Machine

Bad news for Valve in particular?

On the surface, it might seem like every company making gaming hardware would be similarly affected by increasing component costs. In practice, though, analysts suggested that Valve might be in a uniquely bad position to absorb this ongoing market disruption.

Large console makers like Sony and Microsoft “can commit to tens of millions of orders, and have strong negotiating power,” Niko Partners analyst Daniel Ahmad pointed out. The Steam Machine, on the other hand, is “a niche product that cannot benefit in the same way when it comes to procurement,” meaning Valve has to shoulder higher component cost increases.

F-Squared’s Futter echoed that Valve is “not an enormous player in the hardware space, even with the Steam Deck’s success. So they likely don’t have the same kind of priority as a Nintendo, Sony, or Microsoft when it comes to suppliers.”

Sony and Microsoft might have an advantage when negotiating volume discounts with suppliers. Credit: Sam Machkovech

The size of the Steam Machine price adjustment also might depend on when Valve made its supply chain commitments. “It’s not clear when or if Valve locked in supply contracts for the Steam Machine, or if supply can be diverted from the Steam Deck for the new product,” Tech Insights analyst James Sanders noted. On the other hand, “Sony and Microsoft likely will have locked in more favorable component pricing before the current spike,” Van Dreunen said.

That said, some other aspects of the Steam Machine design could give Valve some greater pricing flexibility. Sanders noted that the Steam Machine’s smaller physical size could mean smaller packaging and reduced shipping costs for Valve. And selling the system primarily through direct sales via the web and Steam itself eliminates the usual retailer markups console makers have to take into account, he added.

“I think Valve was hoping for a much lower price and that the component issue would be short-term,” Cole said. “Obviously it is looking more like a long-term issue.”


Neocities founder stuck in chatbot hell after Bing blocked 1.5 million sites


Microsoft won’t explain why Bing blocked 1.5 million Neocities websites. Credit: Aurich Lawson | NeoCities

One of the weirdest corners of the Internet is suddenly hard to find on Bing, after the search engine inexplicably started blocking approximately 1.5 million independent websites hosted on Neocities.

Founded in 2013 to archive the “aesthetic awesomeness” of GeoCities websites, Neocities keeps the spirit of the 1990s Internet alive. It lets users design free websites without relying on standardized templates devoid of personality. For hundreds of thousands of people building websites around art, niche fandoms, and special expertise—or simply seeking a place to get a little weird online—Neocities provides a blank canvas that can be endlessly personalized when compared to a Facebook page. Delighted visitors discovering these sites are more likely to navigate by hovering flashing pointers over a web of spinning GIFs than clicking a hamburger menu or infinitely scrolling.

That’s the style of Internet that Kyle Drake, Neocities’ founder, strives to maintain. So he was surprised when he noticed that Bing was curiously blocking Neocities sites last summer. At first, the issue seemed resolved by contacting Microsoft, but after receiving more recent reports that users were struggling to log in, Drake discovered that another complete block was implemented in January. Even more concerning, he saw that after delisting the front page, Bing had started pointing users to a copycat site, where, he was alarmed to learn, they were entering their login credentials.

Monitoring stats, Drake was stunned to see that Bing traffic had suddenly dropped from about half a million daily visitors to zero. He immediately reported the issue using Bing webmaster tools. Concerned that Bing was not just disrupting traffic but possibly also putting Neocities users at risk if bad actors were gaming search results, he hoped for a prompt resolution.

“This one site that was just a copy of our front page, I didn’t know if it was a phishing attack or what it was, I was just like, ‘whoa, what the heck?’” Drake told Ars.

However, weeks went by as Drake hit wall after wall, submitting nearly a dozen tickets while trying to get past the Bing chatbot to find a support member to fix the issue. Frustrated, he tried other internal channels as well, including offering to buy ads to see if an ads team member could help.

“I tried everything,” Drake said, but nothing worked. Neocities sites remained unlisted on Bing.

Although Bing only holds about 4.5 percent of the global search engine market, Drake said it was “embarrassing” that Neocities sites can’t be discovered using the default Windows search engine. He also noted that many other search engines license Bing data, further compounding the issue.

Ultimately, it’s affecting a lot of people, Drake said, but he suspects that his support tickets are being buried in probably trillions of requests each day from people wanting to improve their Bing indexing.

“There’s probably an actual human being at Bing that actually could fix this,” Drake told Ars, but “when you go to the webmaster tools,” you’re stuck talking to an AI chatbot, and “it’s all kind of automated.”

Ars reached out to Microsoft for comment, and the company took action to remove some inappropriate blocks.

Within 24 hours, the Neocities front page appeared in search results, but Drake ran tests over the next few days that showed that most subdomains are still being blocked, including popular Neocities sites that should garner high rankings.

Pressed to investigate further, Microsoft confirmed that some Neocities sites were delisted for violating policies designed to keep low-quality sites out of search results.

However, Microsoft would not identify which sites were problematic or directly connect with Neocities to resolve a seemingly significant amount of ongoing site blocks that do not appear to be linked to violations. Instead, Microsoft recommended that Neocities find a way to work directly with Microsoft, despite Ars confirming that Microsoft is currently ignoring an open ticket.

For Drake, “the current state of things is unknown.” It’s hard to tell if popular Neocities sites are still being blocked or if possibly Bing’s reindexing process is slow. Microsoft declined to clarify.

He’s still hoping that Microsoft will eventually resolve all the improper blocks, making it possible for Bing users to use the search engine not just to find businesses or information but also to discover creative people making websites just for fun. With so much AI slop invading social networks and search engines, Drake sees Neocities as “one of the last bastions of human content.”

“I hope we can resolve this amicably for both of us and that this doesn’t happen again in the future,” Drake said. “It’s really important for the future of the small web, and for quality content for web surfers in an increasingly generative AI world, that creative sites made by real humans are able to get a fair shot in search engine results.”

Bing deranked suspected phishing site

After Drake failed to quietly resolve the issue with Bing, he felt that he had no choice but to alert users to the potential risks from Bing’s delisting.

In a blog post in late January, Drake warned that Bing had “completely blocked” all Neocities subdomains from its search index. Even worse, “Bing was also placing what appeared to be a phishing attack against Neocities on the first page of search results,” Drake said.

“This is not only bad for search results, it’s very possible that it is actively dangerous,” Drake said.

After “several” complaints, Bing eventually deranked the suspected phishing site, Drake confirmed. But Bing “declined to reverse the block or provide a clear, actionable explanation for it,” which leaves Neocities users vulnerable, he said.

Since “it’s easy to get higher pagerank than a blocked site,” Drake warned that “it is possibly only a matter of time before another concerning site appears on Bing searches for Neocities.”

The blog emphasized that Google, the platform’s biggest traffic driver, was not blocking Neocities, nor was any search engine unlinked to Bing data. Urging a boycott that may force a resolution, Drake wrote, “we are recommending that Neocities users, and the broader Internet in general, not use Bing or search engines that source their results from Bing until this issue is resolved.

“If you use Bing or Bing-powered search engines, Neocities sites will not appear in your search results, regardless of content quality, originality, or compliance with webmaster guidelines,” Drake said. “If any Neocities-like sites appear on these results, they may be active phishing attacks against Neocities and should be treated with caution.”

Bing still blocking popular Neocities sites

Drake doesn’t want to boycott Bing, but in his blog, he said that Microsoft left him no choice but public disclosure:

“We did not want to write this post. We try very hard to have a good relationship with search engine providers. We would much rather quietly resolve this issue with Bing staff and move on. But after months of attempting to engage constructively through multiple channels, it became clear that silence only harms our users. Especially those who don’t realize their sites are invisible on some search engines.”

Drake told Ars that he thinks most people don’t realize how big Neocities has gotten since its early days reviving GeoCities’ spunk. The platform hosts 1,459,700 websites that have drawn in 13 billion visitors. Over the years, it has been profiled in Wired and The New York Times, and more recently, it has become a popular hub for gaming communities, Polygon reported.

As Neocities grew, Drake told Ars that much of his focus has been on improving content moderation. He works closely with a full-time dedicated content moderation staffer to quickly take down any problematic sites within 24 hours, he said. That effort includes reviewing reports and proactively screening new sites, with Drake noting that “our domain name provider requires us to take them down within 48 hours.”

Microsoft prohibits things like scraping content that could be considered copyright infringement or automatically generating content using “garbage text” to game the rankings. It also monitors for malicious behavior like phishing, as well as for prompt injection attacks on Bing’s large language model.

It’s unclear what kind of violations Microsoft found ahead of instituting the complete block; however, Drake told Ars that he has yet to identify any content that may have triggered it. He said he would promptly remove any websites flagged by Microsoft, if he could only talk to someone who could share that information.

“Naturally, we still don’t catch 100 percent of the sites with proactive moderation, and occasionally some problematic sites do get missed,” Drake said.

Although Drake is curious to learn more about what triggered the blocks, he told Ars that it’s clear that non-violative sites are still invisible on Bing.

One of the longest-running and most popular Neocities sites, Wired Sound for Wired People, is a perfect example. The bizarre, somewhat creepy anime fanpage is “very popular” and “has a lot of links to it all over the web,” Drake said. Yet if you search for its subdomain, “fauux,” the site no longer appears in Bing search results, as of this writing, while Google reliably spits it out as the top result.

Drake said that he still believes that Bing is blocking content by mistake, but Bing’s automated support tools aren’t making it easy to defend creators who are randomly blocked by one of the world’s biggest search engines.

“We have one of the lowest ratios of crap to legitimate content, human-made content, on the Internet,” Drake said. “And it’s really frustrating to see that all these human beings making really cool sites that people want to go to are just not available on the default Windows search engine.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Developers say AI coding tools work—and that’s precisely what worries them


Ars spoke to several software devs about AI and found enthusiasm tempered by unease. Credit: Aurich Lawson | Getty Images

Software developers have spent the past two years watching AI coding tools evolve from advanced autocomplete into something that can, in some cases, build entire applications from a text prompt. Tools like Anthropic’s Claude Code and OpenAI’s Codex can now work on software projects for hours at a time, writing code, running tests, and, with human supervision, fixing bugs. OpenAI says it now uses Codex to build Codex itself, and the company recently published technical details about how the tool works under the hood. All of this has caused many to wonder: Is this just more AI industry hype, or are things actually different this time?

To find out, Ars reached out to several professional developers on Bluesky to ask how they feel about these tools in practice, and the responses revealed a workforce that largely agrees the technology works but remains divided on whether that’s entirely good news. It’s a small sample size, self-selected by those who wanted to participate, but as working professionals in the space, their views are still instructive.

David Hagerty, a developer who works on point-of-sale systems, told Ars Technica up front that he is skeptical of the marketing. “All of the AI companies are hyping up the capabilities so much,” he said. “Don’t get me wrong—LLMs are revolutionary and will have an immense impact, but don’t expect them to ever write the next great American novel or anything. It’s not how they work.”

Roland Dreier, a software engineer who has contributed extensively to the Linux kernel in the past, told Ars Technica that he acknowledges the presence of hype but has watched the progression of the AI space closely. “It sounds like implausible hype, but state-of-the-art agents are just staggeringly good right now,” he said. Dreier described a “step-change” in the past six months, particularly after Anthropic released Claude Opus 4.5. Where he once used AI for autocomplete and asking the occasional question, he now expects to tell an agent “this test is failing, debug it and fix it for me” and have it work. He estimated a 10x speed improvement for complex tasks like building a Rust backend service with Terraform deployment configuration and a Svelte frontend.

A huge question on developers’ minds right now is whether what you might call “syntax programming,” that is, the act of manually writing code in the syntax of an established programming language (as opposed to conversing with an AI agent in English), will become extinct in the near future due to AI coding agents handling the syntax for them. Dreier believes syntax programming is largely finished for many tasks. “I still need to be able to read and review code,” he said, “but very little of my typing is actual Rust or whatever language I’m working in.”

When asked if developers will ever return to manual syntax coding, Tim Kellogg, a developer who actively posts about AI on social media and builds autonomous agents, was blunt: “It’s over. AI coding tools easily take care of the surface level of detail.” Admittedly, Kellogg represents developers who have fully embraced agentic AI and now spend their days directing AI models rather than typing code. He said he can now “build, then rebuild 3 times in less time than it would have taken to build manually,” and ends up with cleaner architecture as a result.

One software architect at a pricing management SaaS company, who asked to remain anonymous due to company communications policies, told Ars that AI tools have transformed his work after 30 years of traditional coding. “I was able to deliver a feature at work in about 2 weeks that probably would have taken us a year if we did it the traditional way,” he said. And for side projects, he said he can now “spin up a prototype in like an hour and figure out if it’s worth taking further or abandoning.”

Dreier said the lowered effort has unlocked projects he’d put off for years: “I’ve had ‘rewrite that janky shell script for copying photos off a camera SD card’ on my to-do list for literal years.” Coding agents finally lowered the barrier to entry enough that he spent a few hours building a complete, released package with a text UI, written in Rust with unit tests. “Nothing profound there, but I never would have had the energy to type all that code out by hand,” he told Ars.

Of vibe coding and technical debt

Not everyone shares the same enthusiasm as Dreier. Concerns about AI coding agents building up technical debt, that is, making poor design choices early in a development process that snowball into worse problems over time, originated soon after the first debates around “vibe coding” emerged in early 2025. Former OpenAI researcher Andrej Karpathy coined the term to describe programming by conversing with AI without fully understanding the resulting code, which many see as a clear hazard of AI coding agents.

Darren Mart, a senior software development engineer at Microsoft who has worked there since 2006, shared similar concerns with Ars. Mart, who emphasizes he is speaking in a personal capacity and not on behalf of Microsoft, recently used Claude in a terminal to build a Next.js application integrating with Azure Functions. The AI model “successfully built roughly 95% of it according to my spec,” he said. Yet he remains cautious. “I’m only comfortable using them for completing tasks that I already fully understand,” Mart said, “otherwise there’s no way to know if I’m being led down a perilous path and setting myself (and/or my team) up for a mountain of future debt.”

A data scientist working in real estate analytics, who asked to remain anonymous due to the sensitive nature of his work, described keeping AI on a very short leash for similar reasons. He uses GitHub Copilot for line-by-line completions, which he finds useful about 75 percent of the time, but restricts agentic features to narrow use cases: language conversion for legacy code, debugging with explicit read-only instructions, and standardization tasks where he forbids direct edits. “Since I am data-first, I’m extremely risk averse to bad manipulation of the data,” he said, “and the next and current line completions are way too often too wrong for me to let the LLMs have freer rein.”

Speaking of free rein, Nike backend engineer Brian Westby, who uses Cursor daily, told Ars that he sees the tools as “50/50 good/bad.” They cut down time on well-defined problems, he said, but “hallucinations are still too prevalent if I give it too much room to work.”

The legacy code lifeline and the enterprise AI gap

For developers working with older systems, AI tools have become something like a translator and an archaeologist rolled into one. Nate Hashem, a staff engineer at First American Financial, told Ars Technica that he spends his days updating older codebases where “the original developers are gone and documentation is often unclear on why the code was written the way it was.” That’s important because previously “there used to be no bandwidth to improve any of this,” Hashem said. “The business was not going to give you 2-4 weeks to figure out how everything actually works.”

In that high-pressure, relatively low-resource environment, AI has made the job “a lot more pleasant,” in his words, by speeding up the process of identifying where and how obsolete code can be deleted, diagnosing errors, and ultimately modernizing the codebase.

Hashem also offered a theory about why AI adoption looks so different inside large corporations than it does on social media. Executives demand their companies become “AI oriented,” he said, but the logistics of deploying AI tools with proprietary data can take months of legal review. Meanwhile, the AI features that Microsoft and Google bolt onto products like Gmail and Excel, the tools that actually reach most workers, tend to run on more limited AI models. “That modal white-collar employee is being told by management to use AI,” Hashem said, “but is given crappy AI tools because the good tools require a lot of overhead in cost and legal agreements.”

Speaking of management, the question of what these new AI coding tools mean for software development jobs drew a range of responses. Does it threaten anyone’s job? Kellogg, who has embraced agentic coding enthusiastically, was blunt: “Yes, massively so. Today it’s the act of writing code, then it’ll be architecture, then it’ll be tiers of product management. Those who can’t adapt to operate at a higher level won’t keep their jobs.”

Dreier, while feeling secure in his own position, worried about the path for newcomers. “There are going to have to be changes to education and training to get junior developers the experience and judgment they need,” he said, “when it’s just a waste to make them implement small pieces of a system like I came up doing.”

Hagerty put it in economic terms: “It’s going to get harder for junior-level positions to get filled when I can get junior-quality code for less than minimum wage using a model like Sonnet 4.5.”

Mart, the Microsoft engineer, put it more personally. The software development role is “abruptly pivoting from creation/construction to supervision,” he said, “and while some may welcome that pivot, others certainly do not. I’m firmly in the latter category.”

Even with this ongoing uncertainty on a macro level, some people are really enjoying the tools for personal reasons, regardless of larger implications. “I absolutely love using AI coding tools,” the anonymous software architect at a pricing management SaaS company told Ars. “I did traditional coding for my entire adult life (about 30 years) and I have way more fun now than I ever did doing traditional coding.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


People complaining about Windows 11 hasn’t stopped it from hitting 1 billion users

Complaining about Windows 11 is a popular sport among tech enthusiasts on the Internet, whether you’re publicly switching to Linux, publishing guides about the dozens of things you need to do to make the OS less annoying, or getting upset because you were asked to sign in to an app after clicking a sign-in button.

Despite the negativity surrounding the current version of Windows, it remains the most widely used operating system on the world’s desktop and laptop computers, and people usually prefer to stick to what they’re used to. As a result, Windows 11 has just cleared a big milestone—Microsoft CEO Satya Nadella said on the company’s most recent earnings call (via The Verge) that Windows 11 now has over 1 billion users worldwide.

Windows 11 also reached that milestone just a few months quicker than Windows 10 did—1,576 days after its initial public launch on October 5, 2021. Windows 10 took 1,692 days to reach the same milestone, based on its July 29, 2015, general availability date and Microsoft’s announcement on March 16, 2020.
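Those figures follow directly from the dates; a quick check in Python:

from datetime import date, timedelta

# Windows 10: general availability to the 1-billion announcement.
print((date(2020, 3, 16) - date(2015, 7, 29)).days)  # 1692 days

# Windows 11 hit the mark 1,576 days after its October 5, 2021 launch.
print(date(2021, 10, 5) + timedelta(days=1576))      # 2026-01-28
print(1692 - 1576)                                   # 116 days quicker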

That’s especially notable because Windows 10 was initially offered as a free upgrade to all users of Windows 7 and Windows 8, with no change in system requirements relative to those older versions. Windows 11 was (and still is) a free upgrade to Windows 10, but its relatively high system requirements mean there are plenty of Windows 10 PCs that aren’t eligible to run Windows 11.

Windows 10’s long goodbye

It’s hard to gauge how many PCs are still running Windows 10 because public data on the matter is unreliable. But we can still make educated guesses—and it’s clear that the software is still running on hundreds of millions of PCs, despite hitting its official end-of-support date last October.

Statcounter, one popularly referenced source that collects OS and browser usage stats from web analytics data, reports that between 50 and 55 percent of Windows PCs worldwide are running Windows 11, and between 40 and 45 percent of them run Windows 10. Statcounter also reports that Windows 10 and Windows 7 usage have risen slightly over the last few months, which highlights the noisiness of the data. But as of late 2025, Dell COO Jeffrey Clarke said that there were still roughly 1 billion active Windows 10 PCs in use, around 500 million of which weren’t eligible for an upgrade because of hardware requirements. If Windows 11 just cleared the 1 billion user mark, that suggests Statcounter’s reporting of a nearly evenly split user base isn’t too far from the truth.
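As a rough consistency check in Python, using those round numbers:

# ~1 billion active Windows 10 PCs (Dell's figure) next to Windows 11's
# 1 billion+ users implies a near-even split, ignoring the small share
# of PCs on older Windows versions.
win10_pcs, win11_pcs = 1.0e9, 1.0e9
print(win11_pcs / (win10_pcs + win11_pcs))  # 0.5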


There’s a rash of scam spam coming from a real Microsoft address

There are reports that a legitimate Microsoft email address—which Microsoft explicitly says customers should add to their allow list—is delivering scam spam.

The emails originate from [email protected], an address tied to Power BI. The Microsoft platform provides analytics and business intelligence from various sources that can be integrated into a single dashboard. Microsoft documentation says that the address is used to send subscription emails to mail-enabled security groups. To prevent spam filters from blocking the address, the company advises users to add it to allow lists.

From Microsoft, with malice

According to an Ars reader, the address on Tuesday sent her an email claiming (falsely) that a $399 charge had been made to her account. It provided a phone number to call to dispute the transaction. When I called that number and asked to cancel the sale, the man who answered directed me to download and install a remote access application, presumably so he could then take control of my Mac or Windows machine (Linux wasn’t allowed).

Online searches returned a dozen or so accounts of other people reporting receiving the same email. Some of the spam was reported on Microsoft’s own website.

Sarah Sabotka, a threat researcher at security firm Proofpoint, said the scammers are abusing a Power BI function that allows external email addresses to be added as subscribers to Power BI reports. The mention of the subscription is buried at the very bottom of the message, where it’s easy to miss, she explained.


Why has Microsoft been routing example.com traffic to a company in Japan?

From the Department of Bizarre Anomalies: Microsoft has suppressed an unexplained anomaly on its network that was routing traffic destined for example.com—a domain reserved for testing purposes—to a maker of electronics cables located in Japan.

Under RFC 2606—an official standard maintained by the Internet Engineering Task Force—example.com isn’t obtainable by any party. Instead, it resolves to IP addresses assigned to the Internet Assigned Numbers Authority (IANA). The designation is intended to prevent third parties from being bombarded with traffic when developers, penetration testers, and others need a domain for testing or discussing technical issues. Instead of naming an Internet-routable domain, they are to choose example.com or one of two alternatives, example.net and example.org.
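The reservation is easy to see in a couple of lines of Python; the lookup succeeds, but the address returned belongs to IANA-operated infrastructure rather than to any registrant:

import socket

# example.com is reserved under RFC 2606; it resolves, but not to
# anything a private party controls.
print(socket.gethostbyname("example.com"))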

Misconfig gone, but is it fixed?

Output from the command-line tool curl shows that devices inside Azure and other Microsoft networks have been routing some traffic to subdomains of sei.co.jp, a domain belonging to Sumitomo Electric. Most of the resulting text is exactly what’s expected. The exception is the JSON-based response. Here’s the JSON output from Friday:

"email":"[email protected]","services": [],"protocols": [{"protocol":"imap","hostname":"imapgms.jnet.sei.co.jp","port":993,"encryption":"ssl","username":"[email protected]","validated":false},{"protocol":"smtp","hostname":"smtpgms.jnet.sei.co.jp","port":465,"encryption":"ssl","username":"[email protected]","validated":false}]

Similarly, adding a new account for [email protected] in Outlook returned the same server details.

In both cases, the results show that Microsoft was routing email traffic to two sei.co.jp subdomains: imapgms.jnet.sei.co.jp and smtpgms.jnet.sei.co.jp. The behavior was the result of Microsoft’s autodiscover service.
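Here’s a short Python sketch of the kind of check that surfaces the anomaly. The JSON is a repaired copy of the truncated snippet above (the hostnames and field names come from that output; the surrounding structure is reconstructed):

import json

response = json.loads("""{
  "protocols": [
    {"protocol": "imap", "hostname": "imapgms.jnet.sei.co.jp", "port": 993},
    {"protocol": "smtp", "hostname": "smtpgms.jnet.sei.co.jp", "port": 465}
  ]
}""")

# Flag any mail host that isn't under the domain that was queried.
for p in response["protocols"]:
    if not p["hostname"].endswith(".example.com"):
        print(f"unexpected {p['protocol']} host: {p['hostname']}")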

“I’m admittedly not an expert in Microsoft’s internal workings, but this appears to be a simple misconfiguration,” Michael Taggart, a senior cybersecurity researcher at UCLA Health, said. “The result is that anyone who tries to set up an Outlook account on an example.com domain might accidentally send test credentials to those sei.co.jp subdomains.”

When asked early Friday afternoon why Microsoft was doing this, a representative had no answer and asked for more time. By Monday morning, the improper routing was no longer occurring, but the representative still had no answer.


Elon Musk accused of making up math to squeeze $134B from OpenAI, Microsoft


Musk’s math reduced ChatGPT inventors’ contributions to “zero,” OpenAI argued.

Elon Musk is going for some substantial damages in his lawsuit accusing OpenAI of abandoning its nonprofit mission and “making a fool out of him” as an early investor.

On Friday, Musk filed a notice on remedies sought in the lawsuit, confirming that he’s seeking damages between $79 billion and $134 billion from OpenAI and its largest backer, co-defendant Microsoft.

Musk hired an expert he has never used before, C. Paul Wazzan, who reached this estimate by concluding that Musk’s early contributions to OpenAI generated 50 to 75 percent of the nonprofit’s current value. He got there by analyzing four factors: Musk’s total financial contributions before he left OpenAI in 2018, Musk’s proposed equity stake in OpenAI in 2017, Musk’s current equity stake in xAI, and Musk’s nonmonetary contributions to OpenAI (like investing time or lending his reputation).

The eye-popping damage claim shocked OpenAI and Microsoft, which could also face punitive damages in a loss.

The tech giants immediately filed a motion to exclude Wazzan’s opinions, alleging that step was necessary to avoid prejudicing a jury. Their filing claimed that Wazzan’s math seemed “made up,” based on calculations the economics expert testified he’d never used before and allegedly “conjured” just to satisfy Musk.

For example, Wazzan allegedly ignored that Musk left OpenAI after leadership did not agree on how to value Musk’s contributions to the nonprofit. Problematically, Wazzan’s math depends on an imaginary timeline where OpenAI agreed to Musk’s 2017 bid to control 51.2 percent of a new for-profit entity that was then being considered. But that never happened, so it’s unclear why Musk would be owed damages based on a deal that was never struck, OpenAI argues.

It’s also unclear why Musk’s stake in xAI is relevant, since OpenAI is a completely different company not bound to match xAI’s offerings. Wazzan allegedly wasn’t even given access to xAI’s actual numbers to help him with his estimate, only referring to public reporting estimating that Musk owns 53 percent of xAI’s equity. OpenAI accused Wazzan of including the xAI numbers to inflate the total damages to please Musk.

“By all appearances, what Wazzan has done is cherry-pick convenient factors that correspond roughly to the size of the ‘economic interest’ Musk wants to claim, and declare that those factors support Musk’s claim,” OpenAI’s filing said.

Further frustrating OpenAI and Microsoft, Wazzan opined that Musk and xAI should receive the exact same total damages whether they succeed on just one or all of the four claims raised in the lawsuit.

OpenAI and Microsoft are hoping the court will agree that Wazzan’s math is an “unreliable… black box” and exclude his opinions as improperly reliant on calculations that cannot be independently tested.

Microsoft could not be reached for comment, but OpenAI has alleged that Musk’s suit is a harassment campaign aimed at stalling a competitor so that his rival AI firm, xAI, can catch up.

“Musk’s lawsuit continues to be baseless and a part of his ongoing pattern of harassment, and we look forward to demonstrating this at trial,” an OpenAI spokesperson said in a statement provided to Ars. “This latest unserious demand is aimed solely at furthering this harassment campaign. We remain focused on empowering the OpenAI Foundation, which is already one of the best resourced nonprofits ever.”

Only Musk’s contributions counted

Wazzan is “a financial economist with decades of professional and academic experience who has managed his own successful venture capital firm that provided seed-level funding to technology startups,” Musk’s filing said.

OpenAI explained how Musk got connected with Wazzan, who testified that he had never been hired by any of Musk’s companies before. Instead, three months before he submitted his opinions, Wazzan said that Musk’s legal team had reached out to his consulting firm, BRG, and the call was routed to him.

Wazzan’s task was to figure out how much Musk should be owed after investing $38 million in OpenAI—roughly 60 percent of its seed funding. Musk also made nonmonetary contributions Wazzan had to weigh, like “recruiting key employees, introducing business contacts, teaching his cofounders everything he knew about running a successful startup, and lending his prestige and reputation to the venture,” Musk’s filing said.

The “fact pattern” was “pretty unique,” Wazzan testified, while admitting that his calculations weren’t something you’d find “in a textbook.”

Additionally, Wazzan had to factor in Microsoft’s alleged wrongful gains by deducing how much of Microsoft’s profits went back into funding the nonprofit. Microsoft alleged Wazzan got this estimate wrong by assuming that “some portion of Microsoft’s stake in the OpenAI for-profit entity should flow back to the OpenAI nonprofit” and arbitrarily deciding that the portion must be “equal” to “the nonprofit’s stake in the for-profit entity.” With this odd math, Wazzan double-counted the value of the nonprofit and inflated Musk’s damages estimate, Microsoft alleged.

“Wazzan offers no rationale—contractual, governance, economic, or otherwise—for reallocating any portion of Microsoft’s negotiated interest to the nonprofit,” OpenAI’s and Microsoft’s filing said.

Perhaps most glaringly, Wazzan reached his opinions without ever weighing the contributions of anyone but Musk, OpenAI alleged. That means that Wazzan’s analysis did not just discount efforts of co-founders and investors like Microsoft, which “invested billions of dollars into OpenAI’s for-profit affiliate in the years after Musk quit.” It also dismissed scientists and programmers who invented ChatGPT as having “contributed zero percent of the nonprofit’s current value,” OpenAI alleged.

“I don’t need to know all the other people,” Wazzan testified.

Musk’s legal team contradicted expert

Wazzan supposedly also did not bother to quantify Musk’s nonmonetary contributions, which could be in the thousands, millions, or billions based on his vague math, OpenAI argued.

Even Musk’s legal team seemed to contradict Wazzan, OpenAI’s filing noted. In Musk’s filing on remedies, it’s acknowledged that the jury may have to adjust the total damages. Because Wazzan does not break down damages by claims and merely assigns the same damages to each individual claim, OpenAI argued it will be impossible for a jury to adjust any of Wazzan’s black box calculations.

“Wazzan’s methodology is made up; his results unverifiable; his approach admittedly unprecedented; and his proposed outcome—the transfer of billions of dollars from a nonprofit corporation to a donor-turned competitor—implausible on its face,” OpenAI argued.

At a trial starting in April, Musk will strive to convince a court that such extraordinary damages are owed. OpenAI hopes he’ll fail, in part since “it is legally impossible for private individuals to hold economic interests in nonprofits” and “Wazzan conceded at deposition that he had no reason to believe Musk ‘expected a financial return when he donated… to OpenAI nonprofit.’”

“Allowing a jury to hear a disgorgement number—particularly one that is untethered to specific alleged wrongful conduct and results in Musk being paid amounts thousands of times greater than his actual donations—risks misleading the jury as to what relief is recoverable and renders the challenged opinions inadmissible,” OpenAI’s filing said.

Wazzan declined to comment. xAI did not immediately respond to Ars’ request to comment.



TSMC says AI demand is “endless” after record Q4 earnings

TSMC posted net income of NT$505.7 billion (about $16 billion) for the quarter, up 35 percent year over year and above analyst expectations. Revenue hit $33.7 billion, a 25.5 percent increase from the same period last year. The company expects nearly 30 percent revenue growth in 2026 and plans to spend between $52 billion and $56 billion on capital expenditures this year, up from $40.9 billion in 2025.

Checking with the customers’ customers

TSMC CEO C.C. Wei’s optimism stands in contrast to months of speculation about whether the AI industry is in a bubble. In November, Google CEO Sundar Pichai warned of “irrationality” in the AI market and said no company would be immune if a potential bubble bursts. OpenAI’s Sam Altman acknowledged in August that investors are “overexcited” and that “someone” will lose a “phenomenal amount of money.”

But TSMC, which manufactures the chips that power the AI boom, is betting the opposite way, with Wei telling analysts he spoke directly to cloud providers to verify that demand is real before committing to the spending increase.

“I want to make sure that my customers’ demand are real. So I talked to those cloud service providers, all of them,” Wei said. “The answer is that I’m quite satisfied with the answer. Actually, they show me the evidence that the AI really helps their business.”

The earnings report landed the same day the US and Taiwan finalized a trade agreement that cuts tariffs on Taiwanese goods to 15 percent, down from 20 percent. The deal commits Taiwanese companies to $250 billion in direct US investment, and TSMC is accelerating the expansion of its Arizona chip fabrication facilities to match.


Microsoft vows to cover full power costs for energy-hungry AI data centers

Taking responsibility for power usage

In the Microsoft blog post, Smith acknowledged that residential electricity rates have recently risen in dozens of states, driven partly by inflation, supply chain constraints, and grid upgrades. He wrote that communities “value new jobs and property tax revenue, but not if they come with higher power bills or tighter water supplies.”

Microsoft says it will ask utilities and public commissions to set rates high enough to cover the full electricity costs for its data centers, including infrastructure additions. In Wisconsin, the company is supporting a new rate structure that would charge “Very Large Customers,” including data centers, the cost of the electricity required to serve them.

Smith wrote that while some have suggested the public should help pay for the added electricity needed for AI, Microsoft disagrees. He stated, “Especially when tech companies are so profitable, we believe that it’s both unfair and politically unrealistic for our industry to ask the public to shoulder added electricity costs for AI.”

On water usage for cooling, Microsoft plans a 40 percent improvement in data center water-use intensity by 2030. A recent environmental audit from AI model-maker Mistral found that training and running its Large 2 model over 18 months produced 20.4 kilotons of CO2 emissions and evaporated enough water to fill 112 Olympic-size swimming pools, illustrating the aggregate environmental impact of AI operations at scale.

To solve some of these issues, Microsoft says it has launched a new AI data center design using a closed-loop system that constantly recirculates cooling liquid, dramatically cutting water usage. In this design, already deployed in Wisconsin and Georgia, potable water is no longer needed for cooling.

On property taxes, Smith stated in the blog post that the company will not ask local municipalities to reduce their rates. The company says it will pay its full share of local property taxes. Smith wrote that Microsoft’s goal is to bring these commitments to life in the first half of 2026. Of course, these are PR-aligned company goals and not realities yet, so we’ll have to check back in later to see whether Microsoft has been following through on its promises.


News orgs win fight to access 20M ChatGPT logs. Now they want more.

Describing OpenAI’s alleged “playbook” to dodge copyright claims, news groups accused OpenAI of failing to “take any steps to suspend its routine destruction practices.” There were also “two spikes in mass deletion” that OpenAI attributed to “technical issues.”

However, OpenAI made sure to retain outputs that could help its defense, the court filing alleged, including data from accounts cited in news organizations’ complaints.

OpenAI did not take the same care to preserve chats that could be used as evidence against it, news groups alleged, citing testimony from Mike Trinh, OpenAI’s associate general counsel. “In other words, OpenAI preserved evidence of the News Plaintiffs eliciting their own works from OpenAI’s products but deleted evidence of third-party users doing so,” the filing said.

It’s unclear how much data was deleted, plaintiffs alleged, since OpenAI won’t share “the most basic information” on its deletion practices. But it’s allegedly very clear that OpenAI could have done more to preserve the data, since Microsoft apparently had no trouble doing so with Copilot, the filing said.

News plaintiffs are hoping the court will agree that OpenAI and Microsoft aren’t fighting fair by delaying sharing logs, which they said prevents them from building their strongest case.

They’ve asked the court to order Microsoft to “immediately” produce Copilot logs “in a readily searchable remotely-accessible format,” proposing a deadline of January 9 or “within a day of the Court ruling on this motion.”

Microsoft declined Ars’ request for comment.

And as for OpenAI, news plaintiffs want to know if the deleted logs, including the “mass deletions,” can be retrieved, perhaps bringing millions more ChatGPT conversations into the litigation that users likely expected would never see the light of day again.

On top of possible sanctions, news plaintiffs asked the court to keep in place a preservation order blocking OpenAI from permanently deleting users’ temporary and deleted chats. They also want the court to order OpenAI to explain “the full scope of destroyed output log data for all of its products at issue” in the litigation and whether those deleted chats can be restored, so that news plaintiffs can examine them as evidence, too.
