

A history of the Internet, part 2: The high-tech gold rush begins


The Web Era arrives, the browser wars flare, and a bubble bursts.

Welcome to the second article in our three-part series on the history of the Internet. If you haven’t already, read part one here.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. Later, it evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol.

By the late 1980s, investments from the National Science Foundation (NSF) had established an “Internet backbone” supporting hundreds of thousands of users worldwide. These users were mostly professors, researchers, and graduate students.

In the meantime, commercial online services like CompuServe were growing rapidly. These systems connected personal computer users, using dial-up modems, to a mainframe running proprietary software. Once online, people could read news articles and message other users. In 1989, CompuServe added the ability to send email to anyone on the Internet.

In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Taylor Coleridge.

A decade later, in his book “Computer Lib/Dream Machines,” he described Xanadu thusly: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.

As the Internet grew, it became more and more difficult to find things on it. There were lots of cool documents like the Hitchhiker’s Guide To The Internet, but to read them, you first had to know where they were.

The community of helpful programmers on the Internet leapt to the challenge. Alan Emtage at McGill University in Montreal wrote a tool called Archie. It searched a list of public file transfer protocol (FTP) servers. You still had to know the file name you were looking for, but Archie would tell you which server hosted it so you could download it, no matter where it lived.

An improved tool was Gopher, written by a team headed by Mark McCahill at the University of Minnesota. It used a text-based menu system so that users didn’t have to remember file names or locations. Gopher servers could display a customized collection of links inside nested menus, and they integrated with search services like Archie and Veronica to help users find more resources.

Gopher is a text-based Internet search and retrieval system. It’s still running in 2025! Credit: Jeremy Reimer

A Gopher server could provide many of the things we take for granted today: search engines, personal pages that could contain links, and downloadable files. But this wasn’t enough for a British computer scientist who was working at CERN, an intergovernmental institute that operated the world’s largest particle physics lab.

The World Wide Web

Hypertext had come a long way since Ted Nelson had coined the word in 1965. Bill Atkinson, a member of the original Macintosh development team, released HyperCard in 1987. It used the Mac’s graphical interface to let anyone develop “stacks,” collections of text, graphics, and sounds that could be connected together with clickable links. There was no networking, but stacks could be shared with other users by sending the files on a floppy disk.

The home screen of HyperCard 1.0 for Macintosh. Credit: Jeremy Reimer

Hypertext was so big that conferences were held just to discuss it in 1987 and 1988. Even Ted Nelson had finally found a sponsor for his personal dream: Autodesk founder John Walker had agreed to spin up a subsidiary to create a commercial version of Xanadu.

It was in this environment that CERN fellow Tim Berners-Lee drew up his own proposal in March 1989 for a new hypertext environment. His goal was to make it easier for researchers at CERN to collaborate and share information about new projects.

The proposal (which he called “Mesh”) had several objectives. It would provide a system for connecting information about people, projects, documents, and hardware being developed at CERN. It would be decentralized and distributed over many computers. Not all the computers at CERN were the same—there were Digital Equipment minis running VMS, some Macintoshes, and an increasing number of Unix workstations. Each of them should be able to view the information in the same way.

As Berners-Lee described it, “There are few products which take Ted Nelson’s idea of a wide ‘docuverse’ literally by allowing links between nodes in different databases. In order to do this, some standardization would be necessary.”

The original proposal document for the web, written in Microsoft Word for Macintosh 4.0, downloaded from Tim Berners-Lee’s website. Credit: Jeremy Reimer

The document ended by describing the project as “practical” and estimating that it might take two people six to 12 months to complete. Berners-Lee’s manager called it “vague, but exciting.” Robert Cailliau, who had independently proposed a hypertext system for CERN, joined Berners-Lee to start designing the project.

The computer Berners-Lee used was a NeXT cube, from the company Steve Jobs started after he was kicked out of Apple. NeXT workstations were expensive, but they came with a software development environment that was years ahead of its time. If you could afford one, it was like a coding accelerator. John Carmack would later write DOOM on a NeXT.

The NeXT workstation that Tim Berners-Lee used to create the World Wide Web. Please do not power down the World Wide Web. Credit: Coolcaesar (CC BY-SA 3.0)

Berners-Lee called his application “WorldWideWeb.” The software consisted of a server, which delivered pages of text over a new protocol called the “Hypertext Transfer Protocol,” or HTTP, and a browser that rendered the text. The browser translated markup code like “h1” to indicate a larger header font or “a” to indicate a link. There was also a graphical webpage editor, but it didn’t work very well and was abandoned.
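To give a sense of how simple that first protocol was (later retroactively dubbed HTTP/0.9): the client sent a one-line GET request, and the server replied with raw HTML and closed the connection. No status line, no headers, no content types. Here is a rough modern sketch in Python of that exchange; it is an illustration of the idea, not Berners-Lee’s actual code, and the page contents are made up.

```python
import socket
import threading

# A minimal sketch (not Berners-Lee's actual code) of an HTTP/0.9-style
# exchange: the client sends a one-line GET request, and the server
# replies with raw HTML and closes the connection.
PAGE = "<h1>World Wide Web</h1><a href=info.html>More info</a>"

def serve_once(server):
    conn, _ = server.accept()
    request = conn.recv(1024).decode("ascii")  # e.g. "GET /\r\n"
    if request.startswith("GET"):
        conn.sendall(PAGE.encode("ascii"))     # no status line, no headers
    conn.close()

def fetch(port, path="/"):
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(f"GET {path}\r\n".encode("ascii"))
        data = b""
        while chunk := client.recv(4096):      # read until the server closes
            data += chunk
        return data.decode("ascii")

server = socket.socket()
server.bind(("127.0.0.1", 0))                  # bind to any free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()
html = fetch(port)
server.close()
print(html)
```

The entire “rendering” job of the first browser was then to translate tags like that h1 and a into fonts and clickable links, which is why a working browser and server could plausibly be a two-person, one-year project.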

The very first website was published, running on the development NeXT cube, on December 20, 1990. Anyone who had a NeXT machine and access to the Internet could view the site in all its glory.

The original WorldWideWeb browser running on NeXTstep 3, browsing the world’s first webpage. Credit: Jeremy Reimer

Because NeXT only sold 50,000 computers in total, that intersection did not represent a lot of people. Eight months later, Berners-Lee posted a reply to a question about interesting projects on the alt.hypertext Usenet newsgroup. He described the World Wide Web project and included links to all the software and documentation.

That one post changed the world forever.

Mosaic

On December 9, 1991, President George H.W. Bush signed into law the High Performance Computing Act, also known as the Gore Bill. The bill paid for an upgrade of the NSFNET backbone, as well as a separate funding initiative for the National Center for Supercomputing Applications (NCSA).

NCSA, based out of the University of Illinois, became a dream location for computing research. “NCSA was heaven,” recalled Alex Totic, who was a student there. “They had all the toys, from Thinking Machines to Crays to Macs to beautiful networks. It was awesome.” As is often the case in academia, the professors came up with research ideas but assigned most of the actual work to their grad students.

One of those students was Marc Andreessen, who joined NCSA as a part-time programmer for $6.85 an hour. Andreessen was fascinated by the World Wide Web, especially browsers. A new browser for Unix computers, ViolaWWW, was making the rounds at NCSA. No longer confined to the NeXT workstation, the web had caught the attention of the Unix community. But that community was still too small for Andreessen.

“To use the Net, you had to understand Unix,” he said in an interview with Forbes. “And the current users had no interest in making it easier. In fact, there was a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out.”

Andreessen enlisted the help of his colleague, programmer Eric Bina, and started developing a new web browser in December 1992. In a little over a month, they released version 0.5 of “NCSA X Mosaic”—so called because it was designed to work with Unix’s X Window System. Ports for the Macintosh and Windows followed shortly thereafter.

Being available on the most popular graphical computers changed the trajectory of the web. In just 18 months, millions of copies of Mosaic were downloaded, and the rate was accelerating. The riffraff was here to stay.

Netscape

The instant popularity of Mosaic caused the management at NCSA to take a deeper interest in the project. Jon Mittelhauser, who co-wrote the Windows version, recalled that the small team “suddenly found ourselves in meetings with forty people planning our next features, as opposed to the five of us making plans at 2 am over pizzas and Cokes.”

Andreessen was told to step aside and let more experienced managers take over. Instead, he left NCSA and moved to California, looking for his next opportunity. “I thought I had missed the whole thing,” Andreessen said. “The overwhelming mood in the Valley when I arrived was that the PC was done, and by the way, the Valley was probably done because there was nothing else to do.”

But his reputation had preceded him. Jim Clark, the founder of Silicon Graphics, was also looking to start something new. A friend had shown him a demo of Mosaic, and Clark reached out to meet with Andreessen.

At a meeting, Andreessen pitched the idea of building a “Mosaic killer.” He showed Clark a graph that showed web users doubling every five months. Excited by the possibilities, the two men founded Mosaic Communications Corporation on April 4, 1994. Andreessen quickly recruited programmers from his former team, and they got to work. They codenamed their new browser “Mozilla” since it was going to be a monster that would devour Mosaic. Beta versions were titled “Mosaic Netscape,” but the University of Illinois threatened to sue the new company. To avoid litigation, the name of the company and browser were changed to Netscape, and the programmers audited their code to ensure none of it had been copied from NCSA.

Netscape became the model for all Internet startups to follow. Programmers were given unlimited free sodas and encouraged to basically never leave the office. “Netscape Time” accelerated software development schedules, and because updates could be delivered over the Internet, old principles of quality assurance went out the window. And the business model? It was simply to “get big fast,” and profits could be figured out later.

Work proceeded quickly, and the 1.0 version of Netscape Navigator and the Netsite web server were released on December 15, 1994, for Windows, Macintosh, and Unix systems running the X Window System. The browser was priced at $39 for commercial users, but there was no charge for “academic and non-profit use, as well as for free evaluation purposes.”

Version 0.9 was called “Mosaic Netscape,” and the logo and company name were still Mosaic. Credit: Jeremy Reimer

Netscape quickly became the standard. Within six months, it captured over 70 percent of the market share for web browsers. On August 9, 1995, only 16 months after the founding of the company, Netscape filed for an Initial Public Offering. A last-minute decision doubled the offering price to $28 per share, and on the first day of trading, the stock soared to $75 and closed at $58.25. The Web Era had officially arrived.

The web battles proprietary solutions

The excitement over a new way to transmit text and images to the public over phone lines wasn’t confined to the World Wide Web. Commercial online systems like CompuServe were also evolving to meet the graphical age. These companies released attractive new front-ends for their services that ran on DOS, Windows, and Macintosh computers. There were also new services that were graphics-only, like Prodigy, a joint venture between IBM and Sears, and an upstart that had sprung from the ashes of a Commodore 64 service called Quantum Link. This was America Online, or AOL.

Even Microsoft was getting into the act. Bill Gates believed that the “Information Superhighway” was the future of computing, and he wanted to make sure that all roads went through his company’s toll booth. The highly anticipated Windows 95 was scheduled to ship with a bundled dial-up online service called the Microsoft Network, or MSN.

At first, it wasn’t clear which of these online services would emerge as the winner. But people assumed that at least one of them would beat the complicated, nerdy Internet. CompuServe was the oldest, but AOL was nimbler and found success by sending out millions of free “starter” disks (and later, CDs) to potential customers. Microsoft was sure that bundling MSN with the upcoming Windows 95 would ensure victory.

Most of these services decided to hedge their bets by adding a sort of “side access” to the World Wide Web. After all, if they didn’t, their competitors would. At the same time, smaller companies (many of them former bulletin board services) started becoming Internet service providers. These smaller “ISPs” could charge less money than the big services because they didn’t have to create any content themselves. Thousands of new websites were appearing on the Internet every day, much faster than new sections could be added to AOL or CompuServe.

The tipping point happened very quickly. Before Windows 95 had even shipped, Bill Gates wrote his famous “Internet Tidal Wave” memo, where he assigned the Internet the “highest level of importance.” MSN was quickly changed to become more of a standard ISP and moved all of its content to the web. Microsoft rushed to release its own web browser, Internet Explorer, and bundled it with the Windows 95 Plus Pack.

The hype and momentum were entirely with the web now. It was the most exciting, most transformative technology of its time. The decade-long battle to control the Internet by forcing a shift to a new OSI standards model was forgotten. The web was all anyone cared about, and the web ran on TCP/IP.

The browser wars

Netscape had never expected to make a lot of money from its browser, as it was assumed that most people would continue to download new “evaluation” versions for free. Executives were pleasantly surprised when businesses started sending Netscape huge checks. The company went from $17 million in revenue in 1995 to $346 million the following year, and the press started calling Marc Andreessen “the new Bill Gates.”

The old Bill Gates wasn’t having any of that. Following his 1995 memo, Microsoft worked hard to improve Internet Explorer and made it available for free, including to business users. Netscape tried to fight back. It added groundbreaking new features like JavaScript, which was inspired by LISP but had a syntax similar to Java, the hot new programming language from Sun Microsystems. But it was hard to compete with free, and Netscape’s market share started to fall. By 1996, both browsers had reached version 3.0 and were roughly equal in terms of features. The battle continued, but when the Apache Group released its free web server, Netscape’s other source of revenue dried up as well. The writing was on the wall.

There was no better way to declare your allegiance to a web browser in 1996 than adding “Best Viewed In” above one of these icons. Credit: Jeremy Reimer

The dot-com boom

In 1989, the NSF lifted the restrictions on providing commercial access to the Internet, and by 1991, it had removed all barriers to commercial trade on the network. With the sudden ascent of the web, thanks to Mosaic, Netscape, and Internet Explorer, new companies jumped into this high-tech gold rush. But at first, it wasn’t clear what the best business strategy was. Users expected everything on the web to be free, so how could you make money?

Many early web companies started as hobby projects. In 1994, Jerry Yang and David Filo were electrical engineering PhD students at Stanford University. After Mosaic started popping off, they began collecting and trading links to new websites. Thus, “Jerry’s Guide to the World Wide Web” was born, running on Yang’s Sun workstation. Renamed Yahoo! (Yet Another Hierarchical Officious Oracle), the site exploded in popularity. Netscape put multiple links to Yahoo on its main navigation bar, which further accelerated growth. “We weren’t really sure if you could make a business out of it, though,” Yang told Fortune. Nevertheless, venture capital companies came calling. Sequoia, which had made millions investing in Apple, put in $1 million for 25 percent of Yahoo.

Yahoo.com as it would have appeared in 1995. Credit: Jeremy Reimer

Another hobby site, AuctionWeb, was started in 1995 by Pierre Omidyar. Running on his own home server using the regular $30 per month service from his ISP, the site let people buy and sell items of almost any kind. When traffic started growing, his ISP told him it was increasing his Internet fees to $250 per month, as befitting a commercial enterprise. Omidyar decided he would try to make it a real business, even though he didn’t have a merchant account for credit cards or even a way to enforce the new 5 percent or 2.5 percent royalty charges. That didn’t matter, as the checks started rolling in. He found a business partner, changed the name to eBay, and the rest was history.

AuctionWeb (later eBay) as it would have appeared in 1995. Credit: Jeremy Reimer

In 1993, Jeff Bezos, a senior vice president at a hedge fund company, was tasked with investigating business opportunities on the Internet. He decided to create a proof of concept for what he described as an “everything store.” He chose books as an ideal commodity to sell online, since a book in one store was identical to one in another, and a website could offer access to obscure titles that might not get stocked in physical bookstores.

He left the hedge fund company, gathered investors and software development talent, and moved to Seattle. There, he started Amazon. At first, the site wasn’t much more than an online version of an existing bookseller catalog called Books In Print. But over time, Bezos added inventory data from the two major book distributors, Ingram and Baker & Taylor. The promise of access to every book in the world was exciting for people, and the company grew quickly.

Amazon.com as it would have appeared in 1995. Credit: Jeremy Reimer

The explosive growth of these startups fueled a self-perpetuating cycle. As publications like Wired experimented with online versions of their magazines, they invented and sold banner ads to fund their websites. The best customers for these ads were other web startups. These companies wanted more traffic, and they knew ads on sites like Yahoo were the best way to get it. Yahoo salespeople could then turn around and point to their exponential ad sales curves, which caused Yahoo stock to rise. This encouraged people to fund more web startups, which would all need to advertise on Yahoo. These new startups also needed to buy servers from companies like Sun Microsystems, causing those stocks to rise as well.

The crash

In the latter half of the 1990s, it looked like everything was going great. The economy was booming, thanks in part to the rise of the World Wide Web and the huge boost it gave to computer hardware and software companies. The NASDAQ index of tech-focused stocks painted a clear picture of the boom.

The NASDAQ composite index in the 1990s. Credit: Jeremy Reimer

Federal Reserve chairman Alan Greenspan called this phenomenon “irrational exuberance” but didn’t seem to be in a hurry to stop it. The fact that most new web startups didn’t have a realistic business model didn’t seem to bother investors. Sure, Webvan might have been paying more to deliver groceries than it earned from customers, but look at that growth curve!

The exuberance couldn’t last forever. The NASDAQ peaked at 5,048.62 on March 10, 2000, and started to go down. In one month, it lost 34 percent of its value, and by October 2002, it had bottomed out at 1,114.11. Web companies laid off employees or went out of business completely. The party was over.

Andreessen said that the tech crash scarred him. “The overwhelming message to our generation in the early nineties was ‘You’re dirty, you’re all about grunge—you guys are fucking losers!’ Then the tech boom hit, and it was ‘We are going to do amazing things!’ And then the roof caved in, and the wisdom was that the Internet was a mirage. I 100 percent believed that because the rejection was so personal—both what everybody thought of me and what I thought of myself.”

But while some companies quietly celebrated the end of the whole Internet thing, others would rise from the ashes of the dot-com collapse. That’s the subject of our third and final article.


I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.



Ex-FCC Chair Ajit Pai is now a wireless lobbyist—and enemy of cable companies


Pai’s return as CTIA lobbyist fuels industry-wide battle over spectrum rights.

Ajit Pai, former chairman of the Federal Communications Commission, during a Senate Commerce Committee hearing on Wednesday, April 9, 2025. Credit: Getty Images | Bloomberg

Ajit Pai is back on the telecom policy scene as chief lobbyist for the mobile industry, and he has quickly managed to anger a coalition that includes both cable companies and consumer advocates.

Pai was the Federal Communications Commission chairman during President Trump’s first term and then spent several years at private equity firm Searchlight Capital. He changed jobs in April, becoming the president and CEO of wireless industry lobby group CTIA. Shortly after, he visited the White House to discuss wireless industry priorities and had a meeting with Brendan Carr, the current FCC chairman who was part of Pai’s Republican majority at the FCC from 2017 to 2021.

Pai’s new job isn’t surprising. He was once a lawyer for Verizon, and it’s not uncommon for FCC chairs and commissioners to be lobbyists before or after terms in government.

Pai’s move to CTIA means he is now battling a variety of industry players and advocacy groups over the allocation of spectrum. As always, wireless companies AT&T, Verizon, and T-Mobile want more spectrum and the exclusive rights to use it. The fight puts Pai at odds with the cable industry that cheered his many deregulatory actions when he led the FCC.

Pai wrote a May 4 op-ed in The Wall Street Journal arguing that China is surging ahead of the US in 5G deployment and that “the US doesn’t even have enough licensed spectrum available to keep up with expected consumer demand.” He said that Congress must restore the FCC’s lapsed authority to auction spectrum licenses, and auction off “at least 600 megahertz of midband spectrum for future 5G services.”

“During the first Trump administration, the US was determined to lead the world in wireless innovation—and by 2021 it did,” Pai wrote. “But that urgency and sense of purpose have diminished. With Mr. Trump’s leadership, we can rediscover both.”

Pai’s op-ed drew a quick rebuke from a group called Spectrum for the Future, which alleged that Pai mangled the facts.

“Mr. Pai’s arguments are wrong on the facts—and wrong on how to accelerate America’s global wireless leadership,” the vaguely named group said in a May 8 press release that accused Pai of “stunning hypocrisy.” Spectrum for the Future said Pai is wrong about the existence of a spectrum shortage, wrong about how much money a spectrum auction could raise, and wrong about the cost of reallocating spectrum from the military to mobile companies.

“Mr. Pai attributes the US losing its lead in 5G availability to the FCC’s lapsed spectrum auction authority. He’d be more accurate to blame his own members’ failure to build out their networks,” the group said.

Big Cable finds allies

Pai’s op-ed said that auctioning 600 MHz “could raise as much as $200 billion” to support other US government priorities. Spectrum for the Future called this an “absurd claim” that “presumes that this auction of 600 MHz could approach the combined total ($233 billion) that has been raised by every prior spectrum auction (totaling nearly 6 GHz of bandwidth) in US history combined.”

The group also said Pai “completely ignores the immense cost to taxpayers to relocate incumbent military and intelligence systems out of the bands CTIA covets for its own use.” Spectrum for the Future didn’t mention that one of the previous auctions, for the 3.7–3.98 GHz band, netted over $81 billion in winning bids.

So who is behind Spectrum for the Future? The group’s website lists 18 members, including the biggest players in the cable industry. Comcast, Charter, Cox, and lobby group NCTA-The Internet & Television Association are all members of Spectrum for the Future. (Disclosure: The Advance/Newhouse Partnership, which owns 12 percent of Charter, is part of Advance Publications, which owns Ars Technica parent Condé Nast.)

When contacted by Ars, a CTIA spokesperson criticized cable companies for “fighting competition” and said the cable firms are being “disingenuous.” Charter and Cox declined to answer our questions about their involvement in Spectrum for the Future. Comcast and the NCTA didn’t respond to requests for comment.

The NCTA and big cable companies are no strangers to lobbying the FCC and Congress and could fight for CBRS entirely on their own. But as it happens, some consumer advocates who regularly oppose the cable industry on other issues are on cable’s side in this battle.

With Spectrum for the Future, the cable industry has allied not just with consumer advocates but also small wireless ISPs and operators of private networks that use spectrum the big mobile companies want for themselves. Another group that is part of the coalition represents schools and libraries that use spectrum to provide local services.

For cable, joining with consumer groups, small ISPs, and others in a broad coalition has an obvious advantage from a public relations standpoint. “This is a lot of different folks who are in it for their own reasons. Sometimes that’s a big advantage because it makes it more authentic,” said Harold Feld, senior VP of consumer advocacy group Public Knowledge, which is part of Spectrum for the Future.

In some cases, a big company will round up nonprofits to which it has donated to make a show of broad public support for one of the company’s regulatory priorities—like a needed merger approval. That’s not what happened here, according to Feld. While cable companies probably provided most of the funding for Spectrum for the Future, the other members are keenly interested in fighting the wireless lobby over spectrum access.

“There’s a difference between cable being a tentpole member and this being cable with a couple of friends on the side,” Feld told Ars. Cable companies “have the most to lose, they have the most initial resources. But all of these other guys who are in here, I’ve been on these calls, they’re pretty active. There are a lot of diverse interests in this, which sometimes makes it easier to lobby, sometimes makes it harder to lobby because you all want to talk about what’s important to you.”

Feld didn’t help write the group’s press release criticizing Pai but said the points made are “all things I agree with.”

The “everybody but Big Mobile” coalition

Public Knowledge and New America’s Open Technology Institute (OTI), another Spectrum for the Future member, are both longtime proponents of shared spectrum. OTI’s Wireless Future Project director, Michael Calabrese, told Ars that Spectrum for the Future is basically the “everybody but Big Mobile” wireless coalition and “a very broad but ad hoc coalition.”

While Public Knowledge and OTI advocate for shared spectrum in many frequency bands, Spectrum for the Future is primarily focused on one: the Citizens Broadband Radio Service (CBRS), which spans from 3550 MHz to 3700 MHz. The CBRS spectrum is used by the Department of Defense and shared with non-federal users.

CBRS users in the cable industry and beyond want to ensure that CBRS remains available to them and free of high-power mobile signals that would crowd out lower-power operations. They were disturbed by AT&T’s October 2024 proposal to move CBRS to the lower part of the 3 GHz band, which is also used by the Department of Defense, and auction existing CBRS frequencies to 5G wireless companies “for licensed, full-power use.”

The NCTA told the FCC in December that “AT&T’s proposal to reallocate the entire 3 GHz band is unwarranted, impracticable, and unworkable and is based on the false assertion that the CBRS band is underutilized.”

Big mobile companies want the CBRS spectrum because it is adjacent to frequencies that are already licensed to them. The Department of Defense seems to support AT&T’s idea, even though it would require moving some military operations and sharing the spectrum with non-federal users.

Pentagon plan similar to AT&T’s

In a May research note provided to Ars, New Street Research Policy Advisor Blair Levin reported some details of a Department of Defense proposal for several bands of spectrum, including CBRS. The White House asked the Department of Defense “to come up with a plan to enable allocation of mid-band exclusive-use spectrum,” and the Pentagon recently started circulating its initial proposal.

The Pentagon plan is apparently similar to AT&T’s, as it would reportedly move current CBRS licensees and users to the lower 3 GHz band to clear spectrum for auctions.

“It represents the first time we can think of where the government would change the license terms of one set of users to benefit a competitor of that first set of users… While the exclusive-use spectrum providers would see this as government exercising its eminent domain rights as it has traditionally done, CBRS users, particularly cable, would see this as the equivalent of a government exercis[ing] its eminent domain rights to condemn and tear down a Costco to give the land to a Walmart,” Levin wrote.

If the proposal is implemented, cable companies would likely sue the government “on the grounds that it violates their property rights” under the priority licenses they purchased to use CBRS, Levin wrote. Levin’s note said he doesn’t think this proposal is likely to be adopted, but it shows that “the game is afoot.”

CBRS is important to cable companies because they have increasingly focused on selling mobile service as another revenue source on top of their traditional TV and broadband businesses. Cable firms got into the mobile business by reselling network access from the likes of Verizon. They’ve been increasing the use of CBRS, reducing their reliance on the major mobile companies, although a recent Light Reading article indicates that cable’s progress with CBRS deployment has been slow.

Then-FCC Chairman Ajit Pai with FCC Commissioner Brendan Carr before the start of a Senate Commerce Committee hearing on Thursday, Aug. 16, 2018. Credit: Getty Images | Bill Clark

In its statement to Ars, CTIA said the cable industry “opposes full-power 5G access in the US at every opportunity” in CBRS and other spectrum bands. Cable companies are “fighting competition” from wireless operators “every chance they can,” CTIA said. “With accelerating losses in the marketplace, their advocacy is now more aggressive and disingenuous.”

The DoD plan that reportedly mirrors AT&T’s proposal seems to represent a significant change from the Biden-era Department of Defense’s stance. In September 2023, the department issued a report saying that sharing the 3.1 GHz band with non-federal users would be challenging and potentially cause interference, even if rules were in place to protect DoD operations.

“DoD is concerned about the high possibility that non-Federal users will not adhere to the established coordination conditions at all times; the impacts related to airborne systems, due to their range and speed; and required upgrades to multiple classes of ships,” the 2023 report said. We contacted the Department of Defense and did not receive a response.

Levin quoted Calabrese as saying the new plan “would pull the rug out from under more than 1,000 CBRS operators that have deployed more than 400,000 base stations. While they could, in theory, share DoD spectrum lower in the band, that spectrum will now be so congested it’s unclear how or when that could be implemented.”

Small ISP slams “AT&T and its cabal of telecom giants”

AT&T argues that CBRS spectrum is underutilized and should be repurposed for commercial mobile use because it “resides between two crucial, high-power, licensed 5G bands”—specifically 3.45–3.55 GHz and 3.7–3.98 GHz. It said its proposal would expand the CBRS band’s total size from 150 MHz to 200 MHz by relocating it to 3.1–3.3 GHz.

Keefe John, CEO of a Wisconsin-based wireless home Internet provider called Ethoplex, argued that “AT&T and its cabal of telecom giants” are “scheming to rip this resource from the hands of small operators and hand it over to their 5G empire. This is nothing less than a brazen theft of America’s digital future, and we must fight back with unrelenting resolve.”

John is vice chairperson of the Wireless Internet Service Providers Association (WISPA), which represents small ISPs and is a member of Spectrum for the Future. He wrote that CBRS is a “vital spectrum band that has become the lifeblood of rural connectivity” because small ISPs use it to deliver fixed wireless Internet service to underserved areas.

John called the AT&T proposal “a deliberate scheme to kneecap WISPs, whose equipment, painstakingly deployed, would be rendered obsolete in the lower band.” Instead of moving CBRS from one band to another, John said CBRS should stay on its current spectrum and expand into additional spectrum “to ensure small providers have a fighting chance.”

An AT&T spokesperson told Ars that “CBRS can coexist with incumbents in the lower 3 GHz band, and with such high demand for spectrum, it should. Thinking creatively about how to most efficiently use scarce spectrum to meet crucial needs is simply good public policy.”

AT&T said that an auction “would provide reimbursement for costs associated with” moving CBRS users to other spectrum and that “the Department of Defense has already stated that incumbents in the lower 3 GHz could share with low-power commercial uses.”

“Having a low-power use sandwiched between two high-power use cases is an inefficient use of spectrum that doesn’t make sense. Our proposal would fix that inefficiency,” AT&T said.

AT&T has previously said that under its proposal, CBRS priority license holders “would have the choice of relocating to the new CBRS band, accepting vouchers they can use toward bidding on new high-power licenses, or receiving a cash payment in exchange for the relinquishment of their priority rights.”

Democrat warns of threat to naval operations

Reallocating spectrum could require the Navy to move from the current CBRS band to the lower part of 3 GHz. US Senator Maria Cantwell (D-Wash.) sent a letter urging the Department of Defense to avoid major changes, saying the current sharing arrangement “allows the Navy to continue using high-power surveillance and targeting radars to protect vessels and our coasts, while also enabling commercial use of the band when and where the Navy does not need access.”

Moving CBRS users would “disrupt critical naval operations and homeland defense” and “undermine an innovative ecosystem of commercial wireless technology that will be extremely valuable for robotic manufacturing, precision agriculture, ubiquitous connectivity in large indoor spaces, and private wireless networks,” Cantwell wrote.

Cantwell said she is also concerned that “a substantial number of military radar systems that operate in the lower 3 GHz band” will be endangered by moving CBRS. She pointed out that the DoD’s September 2023 report said the 3.1 GHz range has “unique spectrum characteristics” that “provide long detection ranges, tracking accuracy, and discrimination capability required for DoD radar systems.” The spectrum “is low enough in the frequency range to maintain a high-power aperture capability in a transportable system” and “high enough in the frequency range that a sufficient angular accuracy can be maintained for a radar track function for a fire control capability,” the DoD report said.

Spectrum for the Future members

In addition to joining the cable industry in Spectrum for the Future, public interest groups are fighting for CBRS on their own. Public Knowledge and OTI teamed up with the American Library Association, the Benton Institute for Broadband & Society, the Schools Health & Libraries Broadband (SHLB) Coalition, and others in a November 2024 FCC filing that praised the pro-consumer virtues of CBRS.

“CBRS has been the most successful innovation in wireless technology in the last decade,” the groups said. They accused the big three mobile carriers of “seeking to cripple CBRS as a band that promotes not only innovation, but also competition.”

These advocacy groups are interested in helping cable companies and small home Internet providers compete against the big three mobile carriers because that opens new options for consumers. But the groups also point to many other use cases for CBRS, writing:

CBRS has encouraged the deployment of “open networks” designed to host users needing greater flexibility and control than that offered by traditional CMRS [Commercial Mobile Radio Services] providers, at higher power and with greater interference protection than possible using unlicensed spectrum. Manufacturing campuses (such as John Deere and Dow Chemical), transit hubs (Miami International Airport, Port of Los Angeles), supply chain and logistic centers (US Marine Corps), sporting arenas (Philadelphia’s Wells Fargo Center), school districts and libraries (Fresno Unified School District, New York Public Library) are all examples of a growing trend toward local spectrum access fueling purpose-built private LTE/5G networks for a wide variety of use cases.

The SHLB told Ars that “CBRS spectrum plays a critical role in helping anchor institutions like schools and libraries connect their communities, especially in rural and underserved areas where traditional broadband options may be limited. A number of our members rely on access to shared and unlicensed spectrum to deliver remote learning and essential digital services, often at low or no cost to the user.”

Spectrum for the Future’s members also include companies that sell services to help customers deploy CBRS networks, as well as entities like Miami International Airport that deploy their own CBRS-based private cellular networks. The NCTA featured Miami International Airport’s private network in a recent press release, saying that CBRS helped the airport “deliver more reliable connectivity for visitors while also powering a robust Internet of Things network to keep the airport running smoothly.”

Spectrum for the Future doesn’t list any staff on its website. Media requests are routed to a third-party public relations firm. An employee of the public relations firm declined to answer our questions about how Spectrum for the Future is structured and operated but said it is “a member-driven coalition with a wide range of active supporters and contributors, including innovators, anchor institutions, and technology companies.”

Spectrum for the Future appears to be organized by Salt Point Strategies, a public affairs consulting firm. Salt Point Spectrum Policy Analyst David Wright is described as Spectrum for the Future’s policy director in an FCC filing. We reached out to Wright and didn’t receive a response.

One Big Beautiful Bill is a battleground

Senate Commerce Committee Chairman Ted Cruz (R-Texas) at a hearing on Tuesday, January 28, 2025. Credit: Getty Images | Tom Williams

The Trump-backed “One Big Beautiful Bill,” approved by the House, is one area of interest for both sides of the CBRS debate. The bill would restore the FCC’s expired authority to auction spectrum and require new auctions. One question is whether the bill will simply require the FCC to auction a minimum amount of spectrum or if it will require specific bands to be auctioned.

WISPA provided us with a statement about the version that passed the House, saying the group is glad it “excludes the 5.9 GHz and 6 GHz bands from its call to auction off 600 megahertz of spectrum” but worried because the bill “does not exclude the widely used and previously auctioned Citizens Broadband Radio Service (CBRS) band from competitive bidding, leaving it vulnerable to sale and/or major disruption.”

WISPA said that “spectrum auctions are typically designed to favor large players” and “cut out small and rural providers who operate on the front lines of the digital divide.” WISPA said that over 60 percent of its members “use CBRS to deliver high-quality broadband to hard-to-serve and previously unserved Americans.”

On June 5, Sen. Ted Cruz (R-Texas) released the text of the Senate Commerce Committee proposal, which also does not exclude the 3550–3700 MHz band from potential auctions. Pai and AT&T issued statements praising Cruz’s bill.

Pai said that Cruz’s “bold approach answers President Trump’s call to keep all options on the table and provides the President with full flexibility to identify the right bands to meet surging consumer demand, safeguard our economic competitiveness, and protect national security.” AT&T said that “by renewing the FCC’s auction authority and creating a pipeline of mid-band spectrum, the Senate is taking a strong step toward meeting consumers’ insatiable demand for mobile data.”

The NCTA said it welcomed the plan to restore the FCC’s auction authority but urged lawmakers to “reject the predictable calls from large mobile carriers that seek to cripple competition and new services being offered over existing Wi-Fi and CBRS bands.”

Licensed, unlicensed, and in-between

Spectrum is generally made available on a licensed or unlicensed basis. Wireless carriers pay big bucks for licenses that grant them exclusive use of spectrum bands on which they deploy nationwide cellular networks. Unlicensed spectrum—like the bands used in Wi-Fi—can be used by anyone without a license as long as they follow rules that prevent interference with other users and services.

The FCC issued rules for the CBRS band in 2015 during the Obama administration, using a somewhat different kind of system. The FCC rules allow “for dynamic spectrum sharing in the 3.5 GHz band between the Department of Defense (DoD) and commercial spectrum users,” the National Telecommunications and Information Administration notes. “DoD users have protected, prioritized use of the spectrum. When the government isn’t using the airwaves, companies and the public can gain access through a tiered framework.”

Instead of a binary licensed-versus-unlicensed system, the FCC implemented a three-tiered system of access. Tier 1 is for incumbent users of the band, including federal users and fixed satellite service. Tier 1 users receive protection against harmful interference from Tier 2 and Tier 3 users.

Tier 2 of CBRS consists of Priority Access Licenses (PALs) that are distributed on a county-by-county basis through competitive bidding. Tier 2 users get interference protection from users of Tier 3, which is made available in a manner similar to unlicensed spectrum.

Tier 3, known as General Authorized Access (GAA), “is licensed-by-rule to permit open, flexible access to the band for the widest possible group of potential users,” the FCC says. Tier 3 users can operate throughout the 3550–3700 MHz band but “must not cause harmful interference to Incumbent Access users or Priority Access Licensees and must accept interference from these users. GAA users also have no expectation of interference protection from other GAA users.”
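The tier hierarchy can be sketched in a few lines of code. This is a purely illustrative model of the priority rules described above, not how an actual Spectrum Access System coordinates the band; the function name and tier table are invented for the example.

```python
# Illustrative sketch of CBRS's three-tier priority model (hypothetical;
# a real Spectrum Access System is far more involved). Lower tier numbers
# get interference protection from higher ones, and GAA (Tier 3) users
# get no protection from each other.

TIERS = {
    1: "Incumbent Access",           # federal users, fixed satellite service
    2: "Priority Access License",    # county-level licenses won at auction
    3: "General Authorized Access",  # licensed-by-rule, open to anyone
}

def must_yield(requesting_tier: int, occupying_tier: int) -> bool:
    """A requesting user must yield to any occupant at an equal or
    higher-priority (numerically lower) tier -- except that GAA users
    have no protection from one another and may both transmit."""
    if requesting_tier == occupying_tier == 3:
        return False  # GAA users must tolerate each other
    return occupying_tier <= requesting_tier

# A GAA user yields to a PAL holder, but not to another GAA user:
assert must_yield(3, 2) is True
assert must_yield(3, 3) is False
# A PAL holder yields to a Tier 1 incumbent such as Navy radar:
assert must_yield(2, 1) is True
# An incumbent never yields to lower tiers:
assert must_yield(1, 2) is False
```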

The public interest groups’ November 2024 filing with the FCC said the unique approach to spectrum sharing “allow[s] all would-be users to operate where doing so does not threaten harmful interference” and provides a happy medium between high-powered operations in exclusively licensed spectrum bands and low-powered operations in unlicensed spectrum.

CTIA wants the ability to send higher-power signals in the band, arguing that full-power wireless transmissions would help the US match the efforts of other countries “where this spectrum has been identified as central to 5G.” The public interest groups urged the FCC to reject the mobile industry proposal to increase power levels, saying it “would disrupt and diminish the expanding diversity of GAA users and use cases that represent the central purpose of CBRS’s innovative three-tier, low-power and coordinated sharing framework.”

Pai helped carriers as FCC chair

The FCC’s original plan for PALs during the Obama administration was to auction them off for individual census tracts, small areas containing between 1,200 and 8,000 people each. During President Trump’s first term, the Pai FCC granted a CTIA request to boost the size of license areas from census tracts to counties, making it harder for small companies to win at auction.

The FCC auctioned PALs in 2020, getting bids of nearly $4.6 billion from 228 bidders. The biggest winners were Verizon, Dish Network, Charter, Comcast, and Cox.

Although Verizon uses CBRS for parts of its network, that doesn’t mean it’s on the same side as cable users in the policy debate. Verizon urged the FCC to increase the allowed power levels in the band. Dish owner EchoStar also asked for power increases. Cable companies oppose raising the power levels, with the NCTA saying that doing so would “jeopardize the continued availability of the 3.5 GHz band for lower-power operations” and harm both federal and non-federal users.

As head of CTIA, one of Pai’s main jobs is to obtain more licensed spectrum for the exclusive use of AT&T, Verizon, T-Mobile, and other mobile companies that his group represents. Pai’s Wall Street Journal op-ed said that “traffic on wireless networks is expected to triple by 2029,” driven by “AI, 5G home broadband and other emerging technologies.” Pai cited a study commissioned by CTIA to argue that “wireless networks will be unable to meet a quarter of peak demand in as little as two years.”

Spectrum for the Future countered that Pai “omits that the overwhelming share of this traffic will travel over Wi-Fi, not cellular networks.” CTIA told Ars that “the Ericsson studies we use for traffic growth projections only consider demand over commercial networks using licensed spectrum.”

Spectrum for the Future pointed to statements made by the CEOs of wireless carriers that seem to contradict Pai’s warnings of a spectrum shortage:

Mr. Pai cites a CTIA-funded study to claim “wireless networks will be unable to meet a quarter of peak demand in as little as two years.” If that’s true, then why are his biggest members’ CEOs telling Wall Street the exact opposite?

Verizon’s CEO insists he’s sitting on “a generation of spectrum”—”years and years and years” of spectrum capacity still to deploy. The CEO of Verizon’s consumer group goes even further, insisting they have “almost unlimited spectrum.” T-Mobile agrees, bragging that it has “only deployed 60 percent of our mid-band spectrum on 5G,” leaving “lots of spectrum we haven’t put into the fight yet.”

Battle could last for years

Spectrum for the Future also scoffed at Pai’s comparison of the US to China. Pai’s op-ed said that China “has accelerated its efforts to dominate in wireless and will soon boast more than four times the amount of commercial midband spectrum than the US.” Pai added that “China isn’t only deploying 5G domestically. It’s exporting its spectrum policies, its equipment vendors (such as Huawei and ZTE), and its Communist Party-centric vision of innovation to the rest of the world.”

Spectrum for the Future responded that “China’s spectrum policy goes all-in on exclusive-license frameworks, such as 5G, because they limit spectrum access to just a small handful of regime-aligned telecom companies complicit in Beijing’s censorship regime… America’s global wireless leadership, by contrast, is fueled by spectrum innovations like unlicensed Wi-Fi and CBRS spectrum sharing, whose hardware markets are dominated by American and allied companies.”

Spectrum for the Future also said that Pai and CTIA “blasting China for ‘exporting its spectrum policies’—while asking the US to adopt the same approach—is stunning hypocrisy.”

CTIA’s statement to Ars disputed Spectrum for the Future’s description. “The system of auctioning spectrum licenses was pioneered in America but is not used in China. China does, however, allocate unlicensed spectrum in a similar manner to the United States,” CTIA told Ars.

The lobbying battle and potential legal war that has Pai and CTIA lined up against the “everybody but Big Mobile” wireless coalition could last throughout Trump’s second term. Levin’s research note about the DoD proposal said, “the path from adoption to auction to making the spectrum available to the winners of an auction is likely to be at least three years.” The fight could go on a lot longer if “current licensees object and litigate,” Levin wrote.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


Google’s nightmare: How a search spinoff could remake the web


Google has shaped the Internet as we know it, and unleashing its index could change everything.

Google may be forced to license its search technology when the final antitrust ruling comes down. Credit: Aurich Lawson


Google wasn’t around for the advent of the World Wide Web, but it successfully remade the web on its own terms. Today, any website that wants to be findable has to play by Google’s rules, and after years of search dominance, the company has lost a major antitrust case that could reshape both it and the web.

The closing arguments in the case just wrapped up last week, and Google could be facing serious consequences when the ruling comes down in August. Losing Chrome would certainly change things for Google, but the Department of Justice is pursuing other remedies that could have even more lasting impacts. During his testimony, Google CEO Sundar Pichai seemed genuinely alarmed at the prospect of being forced to license Google’s search index and algorithm, the so-called data remedies in the case. He claimed this would be no better than a spinoff of Google Search. The company’s statements have sometimes derisively referred to this process as “white labeling” Google Search.

But does a white label Google Search sound so bad? Google has built an unrivaled index of the web, but the way it shows results has become increasingly frustrating. A handful of smaller players in search have tried to offer alternatives to Google’s search tools. They all have different approaches to retrieving information for you, but they agree that spinning off Google Search could change the web again. Whether or not those changes are positive depends on who you ask.

The Internet is big and noisy

As Google’s search results have changed over the years, more people have been open to other options. Some have simply moved to AI chatbots to answer their questions, hallucinations be damned. But for most people, it’s still about the 10 blue links (for now).

Because of the scale of the Internet, there are only three general web search indexes: Google, Bing, and Brave. Every search product (including AI tools) relies on one or more of these indexes to probe the web. But what does that mean?

“Generally, a search index is a service that, when given a query, is able to find relevant documents published on the Internet,” said Brave’s search head Josep Pujol.

A search index is essentially a big database, and that’s not the same as search results. According to JP Schmetz, Brave’s chief of ads, it’s entirely possible to have the best and most complete search index in the world and still show poor results for a given query. Sound like anyone you know?
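The distinction can be sketched with a toy inverted index: the index maps terms to documents, while the results people see depend on a separate ranking step. Everything here (the documents, the scoring) is invented for illustration; real search engines layer far more signals on top.

```python
# A toy inverted index, illustrating the point above: a complete index
# and good results are two different things -- results depend on the
# ranking step applied after lookup. All documents here are made up.

from collections import defaultdict

docs = {
    "a": "spectrum sharing in the cbrs band",
    "b": "google search index and ranking",
    "c": "building a web search index at scale",
}

# Index construction: term -> set of doc ids containing that term.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query: str):
    """Look up matching docs via the index, then rank them -- here by a
    crude count of matched query terms. A bigger index paired with a
    worse ranking function can still return worse results."""
    terms = query.split()
    matches = set().union(*(index.get(t, set()) for t in terms))
    return sorted(matches,
                  key=lambda d: -sum(t in docs[d].split() for t in terms))

print(search("cbrs"))  # → ['a']
```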

Google’s technological lead has allowed it to crawl more websites than anyone else. It has all the important parts of the web, plus niche sites, abandoned blogs, sketchy copies of legitimate websites, copies of those copies, and AI-rephrased copies of the copied copies—basically everything. And the result of this Herculean digital inventory is a search experience that feels increasingly discombobulated.

“Google is running large-scale experiments in ways that no rival can because we’re effectively blinded,” said Kamyl Bazbaz, head of public affairs at DuckDuckGo, which uses the Bing index. “Google’s scale advantage fuels a powerful feedback loop of different network effects that ensure a perpetual scale and quality deficit for rivals that locks in Google’s advantage.”

The size of the index may not be the only factor that matters, though. Brave, which is perhaps best known for its browser, also has a search engine. Brave Search is the default in its browser, but you can also just go to the URL in your current browser. Unlike most other search engines, Brave doesn’t need to go to anyone else for results. Pujol suggested that Brave doesn’t need the scale of Google’s index to find what you need. And admittedly, Brave’s search results don’t feel meaningfully worse than Google’s—they may even be better when you consider the way that Google tries to keep you from clicking.

Brave’s index spans around 25 billion pages, but it leaves plenty of the web uncrawled. “We could be indexing five to 10 times more pages, but we choose not to because not all the web has signal. Most web pages are basically noise,” said Pujol.

The freemium search engine Kagi isn’t worried about having the most comprehensive index. Kagi is a meta search engine. It pulls in data from multiple indexes, like Bing and Brave, but it has a custom index of what founder and CEO Vladimir Prelovac calls the “non-commercial web.”

When you search with Kagi, some of the results (it tells you the proportion) come from its custom index of personal blogs, hobbyist sites, and other content that is poorly represented on other search engines. It’s reminiscent of the days when huge brands weren’t always clustered at the top of Google—but even these results are being pushed out of reach in favor of AI, ads, Knowledge Graph content, and other Google widgets. That’s a big part of why Kagi exists, according to Prelovac.
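The meta-search idea can be sketched as a simple merge of upstream result lists. The source names, URLs, and round-robin blending logic below are invented for illustration; Kagi’s actual blending and source-proportion reporting are surely more sophisticated.

```python
# Hypothetical sketch of a meta search engine: interleave ranked results
# from several upstream indexes, de-duplicate, and report what share of
# the final page came from each source (as Kagi does for its own index).

def blend(sources: dict[str, list[str]], limit: int = 5):
    """Merge upstream result lists round-robin, skipping duplicate URLs,
    and compute the fraction contributed by each source."""
    seen, merged = set(), []
    for rank in range(max(map(len, sources.values()), default=0)):
        for name, results in sources.items():
            if rank < len(results) and results[rank] not in seen:
                seen.add(results[rank])
                merged.append((name, results[rank]))
    merged = merged[:limit]
    shares = {name: sum(1 for s, _ in merged if s == name) / len(merged)
              for name in sources}
    return merged, shares

merged, shares = blend({
    "bing":   ["example.com/a", "example.com/b"],
    "custom": ["smallblog.net/post", "example.com/a"],  # one duplicate
})
# Three unique results survive; the custom index supplied one-third of them.
```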

A Google spinoff could change everything

We’ve all noticed the changes in Google’s approach to search, and most would agree that they have made finding reliable and accurate information harder. Regardless, Google’s incredibly deep and broad index of the Internet is in demand.

Even with Bing and Brave available, companies are going to extremes to syndicate Google Search results. A cottage industry has emerged to scrape Google searches as a stand-in for an official index. These companies are violating Google’s terms, yet they appear in Google Search results themselves. Google could surely do something about this if it wanted to.

The DOJ calls Google’s mountain of data the “essential raw material” for building a general search engine, and it believes forcing the firm to license that material is key to breaking its monopoly. The sketchy syndication firms will evaporate if the DOJ’s data remedies are implemented, which would give competitors an official way to utilize Google’s index. And utilize it they will.

Google CEO Sundar Pichai decried the court’s efforts to force a “de facto divestiture” of Google’s search tech. Credit: Ryan Whitwam

According to Prelovac, this could lead to an explosion in search choices. “The whole purpose of the Sherman Act is to proliferate a healthy, competitive marketplace. Once you have access to a search index, then you can have thousands of search startups,” said Prelovac.

The Kagi founder suggested that licensing Google Search could allow entities of all sizes to have genuinely useful custom search tools. Cities could use the data to create deep, hyper-local search, and people who love cats could make a cat-specific search engine, in both cases pulling what they want from the most complete database of online content. And, of course, general search products like Kagi would be able to license Google’s tech for a “nominal fee,” as the DOJ puts it.

Prelovac didn’t hesitate when asked if Kagi, which offers a limited number of free searches before asking users to subscribe, would integrate Google’s index. “Yes, that is something we would do,” he said. “And that’s what I believe should happen.”

There may be some drawbacks to unleashing Google’s search services. Judge Amit Mehta has expressed concern that blocking Google’s search placement deals could reduce browser choice, and there is a similar issue with the data remedies. If Google is forced to license search as an API, its few competitors in web indexing could struggle to remain afloat. In a roundabout way, giving away Google’s search tech could actually increase its influence.

The Brave team worries about how open access to Google’s search technology could impact diversity on the web. “If implemented naively, it’s a big problem,” said Brave’s ad chief JP Schmetz. “If the court forces Google to provide search at a marginal cost, it will not be possible for Bing or Brave to survive until the remedy ends.”

The landscape of AI-based search could also change. We know from testimony given during the remedy trial by OpenAI’s Nick Turley that the ChatGPT maker tried and failed to get access to Google Search to ground its AI models—it currently uses Bing. If Google were suddenly an option, you can be sure OpenAI and others would rush to connect Google’s web data to their large language models (LLMs).

The attempt to reduce Google’s power could actually grant it new monopolies in AI, according to Brave Chief Business Officer Brian Brown. “All of a sudden, you would have a single monolithic voice of truth across all the LLMs, across all the web,” Brown said.

What if you weren’t the product?

If white labeling Google does expand choice, even at the expense of other indexes, it will give more kinds of search products a chance in the market—maybe even some that shun Google’s focus on advertising. You don’t see much of that right now.

For most people, web search is and always has been a free service supported by ads. Google, Brave, DuckDuckGo, and Bing offer all the search queries you want for free because they want eyeballs. It’s been said often, but it’s true: If you’re not paying for it, you’re the product. This is an arrangement that bothers Kagi’s founder.

“For something as important as information consumption, there should not be an intermediary between me and the information, especially one that is trying to sell me something,” said Prelovac.

Kagi search results acknowledge the negative impact of today’s advertising regime. Kagi users see a warning next to results with a high number of ads and trackers. According to Prelovac, that is by far the strongest indication that a result is of low quality. That icon also lets you adjust the prevalence of such sites in your personal results. You can demote a site or completely hide it, which is a valuable option in the age of clickbait.

Kagi search gives you a lot of control. Credit: Ryan Whitwam

Kagi’s paid approach to search changes its relationship with your data. “We literally don’t need user data,” Prelovac said. “But it’s not only that we don’t need it. It’s a liability.”

Prelovac admitted that getting people to pay for search is “really hard.” Nevertheless, he believes ad-supported search is a dead end. So Kagi is planning for a future in five or 10 years when more people have realized they’re still “paying” for ad-based search with lost productivity time and personal data, he said.

We know how Google handles user data (it collects a lot of it), but what does that mean for smaller search engines like Brave and DuckDuckGo that rely on ads?

“I’m sure they mean well,” said Prelovac.

Brave said that it shields user data from advertisers, relying on first-party tracking to attribute clicks to Brave without touching the user. “They cannot retarget people later; none of that is happening,” said Brave’s JP Schmetz.

DuckDuckGo is a bit of an odd duck—it relies on Bing’s general search index, but it adds a layer of privacy tools on top. It’s free and ad-supported like Google and Brave, but the company says it takes user privacy seriously.

“Viewing ads is privacy protected by DuckDuckGo, and most ad clicks are managed by Microsoft’s ad network,” DuckDuckGo’s Kamyl Bazbaz said. He explained that DuckDuckGo has worked with Microsoft to ensure its network does not track users or create any profiles based on clicks. He added that the company has a similar privacy arrangement with TripAdvisor for travel-related ads.

It’s AI all the way down

We can’t talk about the future of search without acknowledging the artificially intelligent elephant in the room. As Google continues its shift to AI-based search, it’s tempting to think of the potential search spinoff as a way to escape that trend. However, you may find few refuges in the coming years. There’s a real possibility that search is evolving beyond the 10 blue links and toward an AI assistant model.

All non-Google search engines have AI integrations, with the most prominent being Microsoft Bing, which has a partnership with OpenAI. But smaller players have AI search features, too. The folks working on these products agree with Microsoft and Google on one important point: They see AI as inevitable.

Today’s Google alternatives all have their own take on AI Overviews, which generates responses to queries based on search results. They’re generally not as in-your-face as Google AI, though. While Google and Microsoft are intensely focused on increasing the usage of AI search, other search operators aren’t pushing for that future. They are along for the ride, though.

AI Overviews are integrated with Google’s search results, and most other players have their own version. Credit: Google

“We’re finding that some people prefer to start in chat mode and then jump into more traditional search results when needed, while others prefer the opposite,” Bazbaz said. “So we thought the best thing to do was offer both. We made it easy to move between them, and we included an off switch for those who’d like to avoid AI altogether.”

The team at Brave views AI as a core means of accessing search and one that will continue to grow. Brave generates AI answers for many searches and prominently cites sources. You can also disable Brave’s AI if you prefer. But according to search chief Josep Pujol, the move to AI search is inevitable for a pretty simple reason: It’s convenient, and people will always choose convenience. So AI is changing the web as we know it, for better or worse, because LLMs can save a smidge of time, especially for more detailed “long-tail” queries. These AI features may give you false information while they do it, but that’s not always apparent.

This is very similar to the language Google uses when discussing agentic search, although it expresses it in a more nuanced way. By understanding the task behind a query, Google hopes to provide AI answers that save people time, even if the model needs a few ticks to fan out and run multiple searches to generate a more comprehensive report on a topic. That’s probably still faster than running multiple searches and manually reviewing the results, and it could leave traditional search as an increasingly niche service, even in a world with more choices.

“Will the 10 blue links continue to exist in 10 years?” Pujol asked. “Actually, one question would be, does it even exist now? In 10 years, [search] will have evolved into more of an AI conversation behavior or even agentic. That is probably the case. What, for sure, will continue to exist is the need to search. Search is a verb, an action that you do, and whether you will do it directly or whether it will be done through an agent, it’s a search engine.”

Kagi founder Vladimir Prelovac sees AI becoming the default way we access information in the long term, but his search engine doesn’t force you to use it. On Kagi, you can expand the AI box for your searches and ask follow-ups, and the AI will open automatically if you use a question mark in your search. But that’s just the start.

“You watch Star Trek, nobody’s clicking on links there—I do believe in that vision in science fiction movies,” Prelovac said. “I don’t think my daughter will be clicking links in 10 years. The only question is if the current technology will be the one that gets us there. LLMs have inherent flaws. I would even tend to say it’s likely not going to get us to Star Trek.”

If we think of AI mainly as a way to search for information, the future becomes murky. With generative AI in the driver’s seat, questions of authority and accuracy may be left to language models that often behave in unpredictable and difficult-to-understand ways. Whether we’re headed for an AI boom or bust—for continued Google dominance or a new era of choice—we’re facing fundamental changes to how we access information.

Maybe if we get those thousands of search startups, there will be a few that specialize in 10 blue links. We can only hope.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google’s nightmare: How a search spinoff could remake the web Read More »

what-solar?-what-wind?-texas-data-centers-build-their-own-gas-power-plants

What solar? What wind? Texas data centers build their own gas power plants


Data center operators are turning away from the grid to build their own power plants.

Sisters Abigail and Jennifer Lindsey stand on their rural property on May 27 outside New Braunfels, Texas, where they posted a sign in opposition to a large data center and power plant planned across the street. Credit: Dylan Baddour/Inside Climate News

NEW BRAUNFELS, Texas—Abigail Lindsey worries the days of peace and quiet might be nearing an end at the rural, wooded property where she lives with her son. On the old ranch across the street, developers want to build an expansive complex of supercomputers for artificial intelligence, plus a large, private power plant to run it.

The plant would be big enough to power a major city, with 1,200 megawatts of planned generation capacity fueled by West Texas shale gas. It will supply only the new data center, and possibly other large data centers recently proposed down the road.

“It just sucks,” Lindsey said, sitting on her deck in the shade of tall oak trees, outside the city of New Braunfels. “They’ve come in and will completely destroy our way of life: dark skies, quiet and peaceful.”

The project is one of many like it proposed in Texas, where a frantic race to boot up energy-hungry data centers has led many developers to plan their own gas-fired power plants rather than wait for connection to the state’s public grid. Egged on by supportive government policies, this buildout promises to lock in strong gas demand for a generation to come.

The data center and power plant planned across from Lindsey’s home is a partnership between an AI startup called CloudBurst and the natural gas pipeline giant Energy Transfer. It was Energy Transfer’s first-ever contract to supply gas for a data center, but it is unlikely to be its last. In a press release, the company said it was “in discussions with a number of data center developers and expects this to be the first of many agreements.”

Previously, conventional wisdom assumed that this new generation of digital infrastructure would be powered by emissions-free energy sources like wind, solar and battery power, which have lately seen explosive growth. So far, that vision isn’t panning out, as desires to build quickly overcome concerns about sustainability.

“There is such a shortage of data center capacity and power,” said Kent Draper, chief commercial officer at Australian data center developer IREN, which has projects in West Texas. “Even the large hyperscalers are willing to turn a blind eye to their renewable goals for some period of time in order to get access.”

The Hays Energy Project is a 990 MW gas-fired power plant near San Marcos, Texas. Credit: Dylan Baddour/Inside Climate News

IREN prioritizes renewable energy for its data centers—giant warehouses full of advanced computers and high-powered cooling systems that can be configured to produce cryptocurrency or generate artificial intelligence. In Texas, that’s only possible because the company began work here years ago, early enough to secure a timely connection to the state’s grid, Draper said.

There were more than 2,000 active generation interconnection requests as of April 30, totaling 411,600 MW of capacity, according to grid operator ERCOT. A bill awaiting signature on Gov. Greg Abbott’s desk, S.B. 6, looks to filter out unserious large-load projects bloating the queue by imposing a $100,000 fee for interconnection studies.

Wind and solar farms require vast acreage and generate energy intermittently, so they work best as part of a diversified electrical grid that collectively provides power day and night. But as the AI gold rush gathered momentum, a surge of new project proposals has created years-long wait times to connect to the grid, prompting many developers to bypass it and build their own power supply.

Operating alone, a wind or solar farm can’t run a data center. Battery technologies still can’t store such large amounts of energy for the length of time required to provide steady, uninterrupted power for 24 hours per day, as data centers require. Small nuclear reactors have been touted as a means to meet data center demand, but the first new units remain a decade from commercial deployment, while the AI boom is here today.

Now, Draper said, gas companies approach IREN all the time, offering to quickly provide additional power generation.

Gas provides almost half of all power generation capacity in Texas, far more than any other source. But the amount of gas power in Texas has remained flat for 20 years, while wind and solar have grown sharply, according to records from the US Energy Information Administration. Facing a tidal wave of proposed AI projects, state lawmakers have taken steps to try to slow the expansion of renewable energy and position gas as the predominant supply for a new era of demand.

This buildout promises strong demand and high gas prices for a generation to come, a boon to Texas’ fossil fuel industry, the largest in the nation. It also means more air pollution and emissions of planet-warming greenhouse gases, even as the world continues to barrel past temperature records.

Texas, with 9 percent of the US population, accounted for about 15 percent of current gas-powered generation capacity in the country but 26 percent of planned future generation at the end of 2024, according to data from Global Energy Monitor. Both the current and planned shares are far more than any other state.

GEM identified 42 new gas turbine projects under construction, in development, or announced in Texas before the start of this year. None of those projects are sited at data centers. However, other projects announced since then, like CloudBurst and Energy Transfer outside New Braunfels, will include dedicated gas power plants on site at data centers.

For gas companies, the boom in artificial intelligence has quickly become an unexpected gold mine. US gas production has risen steadily over 20 years since the fracking boom began, but gas prices have tumbled since 2024, dragged down by surging supply and weak demand.

“The sudden emergence of data center demand further brightens the outlook for the renaissance in gas pricing,” said a 2025 oil and gas outlook report by East Daley Analytics, a Colorado-based energy intelligence firm. “The obvious benefit to producers is increased drilling opportunities.”

It forecast up to a 20 percent increase in US gas production by 2030, driven primarily by a growing gas export sector on the Gulf Coast. Several large export projects will finish construction in the coming years, with demand for up to 12 billion cubic feet of gas per day, the report said, while new power generation for data centers would account for 7 billion cubic feet per day of additional demand. That means profits for power providers, but also higher costs for consumers.

Natural gas, a mixture primarily composed of methane, burns much cleaner than coal but still creates air pollution, including soot, some hazardous chemicals, and greenhouse gases. Unburned methane released into the atmosphere has more than 80 times the near-term warming effect of carbon dioxide, leading some studies to conclude that ubiquitous leaks in gas supply infrastructure make it as impactful as coal to the global climate.

Credit: Dylan Baddour/Inside Climate News

It’s a power source that’s heralded for its ability to get online fast, said Ed Hirs, an energy economics lecturer at the University of Houston. But the years-long wait times for turbines have quickly become the industry’s largest constraint in an otherwise positive outlook.

“If you’re looking at a five-year lead time, that’s not going to help Alexa or Siri today,” Hirs said.

The reliance on gas power for data centers is a departure from previous thought, said Larry Fink, founder of global investment firm BlackRock, speaking to a crowd of industry executives at an oil and gas conference in Houston in March.

About four years ago, if someone said they were building a data center, they said it must be powered by renewables, he recounted. Two years ago, it was a preference.

“Today?” Fink said. “They care about power.”

Gas plants for data centers

Since the start of this year, developers have announced a flurry of gas power deals for data centers. In the small city of Abilene, the builders of Stargate, one of the world’s largest data center projects, applied for permits in January to build 360 MW of gas power generation, authorized to emit 1.6 million tons of greenhouse gases and 14 tons of hazardous air pollutants per year. Later, the company announced the acquisition of an additional 4,500 MW of gas power generation capacity.

Also in January, a startup called Sailfish announced ambitious plans for a 2,600-acre, 5,000 MW cluster of data centers in the tiny North Texas town of Tolar, population 940.

“Traditional grid interconnections simply can’t keep pace with hyperscalers’ power demands, especially as AI accelerates energy requirements,” Sailfish founder Ryan Hughes told the website Data Center Dynamics at the time. “Our on-site natural gas power islands will let customers scale quickly.”

CloudBurst and Energy Transfer announced their data center and power plant outside New Braunfels in February, and another company partnership also announced plans for a 250 MW gas plant and data center near Odessa in West Texas. In May, a developer called Tract announced a 1,500-acre, 2,000 MW data center campus with some on-site generation and some purchased gas power near the small Central Texas town of Lockhart.

Not all new data centers need gas plants. A 120 MW South Texas data center project announced in April would use entirely wind power, while an enormous, 5,000 MW megaproject outside Laredo announced in March hopes to eventually run entirely on private wind, solar, and hydrogen power (though it will use gas at first). Another collection of six data centers planned in North Texas hopes to draw 1,400 MW from the grid.

Altogether, Texas’ grid operator predicts statewide power demand will nearly double within five years, driven largely by data centers for artificial intelligence. It mirrors a similar situation unfolding across the country, according to analysis by S&P Global.

“There is huge concern about the carbon footprint of this stuff,” said Dan Stanzione, executive director of the Texas Advanced Computing Center at the University of Texas at Austin. “If we could decarbonize the power grid, then there is no carbon footprint for this.”

However, despite massive recent expansions of renewable power generation, the boom in artificial intelligence appears to be moving the country farther from, not closer to, its decarbonization goals.

Restrictions on renewable energy

Looking forward to a buildout of power supply, state lawmakers have proposed or passed new rules to support the deployment of more gas generation and slow the surging expansion of wind and solar power projects. Supporters of these bills say they aim to utilize Texas’ position as the nation’s top gas producer.

Some energy experts say the rules proposed throughout the legislative session could dismantle the state’s leadership in renewables as well as the state’s ability to provide cheap and reliable power.

“It absolutely would [slow] if not completely stop renewable energy,” said Doug Lewin, a Texas energy consultant, about one of the proposed rules in March. “That would really be extremely harmful to the Texas economy.”

While the bills deemed “industry killers” for renewables missed key deadlines and failed to reach Abbott’s desk, they illustrate some lawmakers’ aspirations for the state’s energy industry.

One failed bill, S.B. 388, would have required every watt of new solar brought online to be accompanied by a watt of new gas. Another set of twin bills, H.B. 3356 and S.B. 715, would have forced existing wind and solar companies to buy fossil-fuel based power or connect to a battery storage resource to cover the hours the energy plants are not operating.

When the Legislature last met in 2023, it created a $5 billion public “energy fund” to finance new gas plants but not wind or solar farms. It also created a new tax abatement program that excluded wind and solar. This year’s budget added another $5 billion to double the fund.

Bluebonnet Electric Cooperative is currently completing construction on a 190 MW gas-fired peaker plant near the town of Maxwell in Caldwell County. Credit: Dylan Baddour/Inside Climate News

Among the lawmakers leading the effort to scale back the state’s deployment of renewables is state Sen. Lois Kolkhorst, a Republican from Brenham. One bill she co-sponsored, S.B. 819, aimed to create new siting rules for utility-scale renewable projects and would have required them to get permits from the Public Utility Commission that no other energy source—coal, gas or nuclear—needs. “It’s just something that is clearly meant to kneecap an industry,” Lewin said about the bill, which failed to pass.

Kolkhorst said the bill sought to balance the state’s need for power while respecting landowners across the state.

Former state Rep. John Davis, now a board member at Conservative Texans for Energy Innovation, said the session shows how renewables have become a red meat issue.

More than 20 years ago, Davis and Kolkhorst worked together in the Capitol as Texas deregulated its energy market, which encouraged renewables to enter the grid’s mix, he said. Now Davis herds sheep and goats on his family’s West Texas ranch, where seven wind turbines provide roughly 40 percent of their income.

He never could have dreamed how significant renewable energy would become for the state grid, he said. That’s why he’s disappointed with the direction the legislature is headed with renewables.

“I can’t think of anything more conservative, as a conservative, than wind and solar,” Davis said. “These are things God gave us—use them and harness them.”

A report published in April found that targeted limitations on solar and wind development in Texas could increase electricity costs for consumers and businesses. The report, done by Aurora Energy Research for the Texas Association of Business, said restricting the further deployment of renewables would drive power prices up 14 percent by 2035.

“Texas is at a crossroads in its energy future,” said Olivier Beaufils, a top executive at Aurora Energy Research. “We need policies that support an all-of-the-above approach to meet the expected surge in power demand.”

Likewise, the commercial intelligence firm Wood Mackenzie expects the power demand from data centers to drive up prices of gas and wholesale consumer electricity.

Pollution from gas plants

Even when new power plants aren’t built on the site of data centers, they might still be developed because of demand from the server farms.

For example, in 2023, developer Marathon Digital started up a Bitcoin mine in the small town of Granbury on the site of the 1,100 MW Wolf Hollow II gas power plant. It held contracts to purchase 300 MW from the plant.

One year later, the power plant operator sought permits to install eight additional “peaker” gas turbines able to produce up to 352 MW of electricity. These small units, designed to turn on intermittently during hours of peak demand, release more pollution than typical gas turbines.

Those additional units would be approved to release 796,000 tons per year of greenhouse gases, 251 tons per year of nitrogen oxides and 56 tons per year of soot, according to permitting documents. That application is currently facing challenges from neighboring residents in state administrative courts.

About 150 miles away, neighbors are challenging another gas plant permit application in the tiny town of Blue. At 1,200 MW, the $1.2 billion plant proposed by Sandow Lakes Energy Co. would be among the largest in the state and would almost entirely serve private customers, likely including the large data centers that operate about 20 miles away.

Travis Brown and Hugh Brown, no relation, stand by a sign marking the site of a proposed 1,200 MW gas-fired power plant in their town of Blue on May 7. Credit: Dylan Baddour/Inside Climate News

This plan bothers Hugh Brown, who moved out to these green, rolling hills of rural Lee County in 1975, searching for solitude. Now he lives on 153 wooded acres that he’s turned into a sanctuary for wildlife.

“What I’ve had here is a quiet, thoughtful life,” said Brown, skinny with a long grey beard. “I like not hearing what anyone else is doing.”

He worries about the constant roar of giant cooling fans, the bright lights overnight and the air pollution. According to permitting documents, the power plant would be authorized to emit 462 tons per year of ammonia gas, 254 tons per year of nitrogen oxides, 153 tons per year of particulate matter, or soot, and almost 18 tons per year of “hazardous air pollutants,” a collection of chemicals that are known to cause cancer or other serious health impacts.

It would also be authorized to emit 3.9 million tons of greenhouse gases per year, about as much as 72,000 standard passenger vehicles.

“It would be horrendous,” Brown said. “There will be a constant roaring of gigantic fans.”

In a statement, Sandow Lakes Energy denied that the power plant will be loud. “The sound level at the nearest property line will be similar to a quiet library,” the statement said.

Sandow Lakes Energy said the plant will support the local tax base and provide hundreds of temporary construction jobs and dozens of permanent jobs. Sandow also provided several letters signed by area residents who support the plant.

“We recognize the critical need for reliable, efficient, and environmentally responsible energy production to support our region’s growth and economic development,” wrote Nathan Bland, president of the municipal development district in Rockdale, about 20 miles from the project site.

Brown stands next to a pond on his property ringed with cypress trees he planted 30 years ago. Credit: Dylan Baddour/Inside Climate News

Sandow says the plant will be connected to Texas’ public grid, and many supporting letters for the project cited a need for grid reliability. But according to permitting documents, the 1,200 MW plant will supply only 80 MW to the grid and only temporarily, with the rest going to private customers.

“Electricity will continue to be sold to the public until all of the private customers have completed projects slated to accept the power being generated,” said a permit review by the Texas Commission on Environmental Quality.

Sandow has declined to name those customers. However, the plant is part of Sandow’s massive, master-planned mixed-use development in rural Lee and Milam counties, where several energy-hungry tenants are already operating, including Riot Platforms, the largest cryptocurrency mine on the continent. The seven-building complex in Rockdale is built to use up to 700 MW, and in April, it announced the acquisition of a neighboring, 125 MW cryptocurrency mine, previously operated by Rhodium. Another mine by Bitmain, one of the world’s largest Bitcoin companies, has 560 MW of operating capacity with plans to add 180 MW more in 2026.

In April, residents of Blue gathered at the volunteer fire department building for a public meeting with Texas regulators and Sandow to discuss questions and concerns over the project. Brown, owner of the wildlife sanctuary, spoke into a microphone and noted that the power plant was placed at the far edge of Sandow’s 33,000-acre development, 20 miles from the industrial complex in Rockdale but near many homes in Blue.

“You don’t want to put it up into the middle of your property where you could deal with the negative consequences,” Brown said, speaking to the developers. “So it looks to me like you are wanting to make money, in the process of which you want to strew grief in your path and make us bear the environmental costs of your profit.”

Inside Climate News’ Peter Aldhous contributed to this report.

This story originally appeared on Inside Climate News.


What solar? What wind? Texas data centers build their own gas power plants Read More »

meta-and-yandex-are-de-anonymizing-android-users’-web-browsing-identifiers

Meta and Yandex are de-anonymizing Android users’ web browsing identifiers


Abuse allows Meta and Yandex to attach persistent identifiers to detailed browsing histories.

Credit: Aurich Lawson | Getty Images

Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.

The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.

A blatant violation

“One of the fundamental security principles that exists in the web, as well as the mobile system, is called sandboxing,” Narseo Vallina-Rodriguez, one of the researchers behind the discovery, said in an interview. “You run everything in a sandbox, and there is no interaction within different elements running on it. What this attack vector allows is to break the sandbox that exists between the mobile context and the web context. The channel that exists allowed the Android system to communicate what happens in the browser with the identity running in the mobile app.”

The bypass—which Yandex began in 2017 and Meta started last September—allows the companies to pass cookies or other identifiers from Firefox and Chromium-based browsers to native Android apps for Facebook, Instagram, and various Yandex apps. The companies can then tie that vast browsing history to the account holder logged into the app.

This abuse has been observed only in Android, and evidence suggests that the Meta Pixel and Yandex Metrica target only Android users. The researchers say it may be technically feasible to target iOS because browsers on that platform allow developers to programmatically establish localhost connections that apps can monitor on local ports.

In contrast, Android imposes fewer controls on localhost communications and on the background execution of mobile apps, the researchers said, while iOS applies stricter app store vetting to limit such abuses. Android’s more permissive design allows Meta Pixel and Yandex Metrica to send web requests carrying web tracking identifiers to specific local ports that are continuously monitored by the Facebook, Instagram, and Yandex apps. These apps can then link pseudonymous web identities with actual user identities, even in private browsing modes, effectively de-anonymizing users’ browsing habits on sites containing these trackers.

Meta Pixel and Yandex Metrica are analytics scripts designed to help advertisers measure the effectiveness of their campaigns. Meta Pixel and Yandex Metrica are estimated to be installed on 5.8 million and 3 million sites, respectively.

Meta and Yandex achieve the bypass by abusing basic functionality built into modern mobile browsers that allows browser-to-native app communications. The functionality lets browsers send web requests to local Android ports to establish various services, including media connections through the RTC protocol, file sharing, and developer debugging.

A conceptual diagram representing the exchange of identifiers between the web trackers running on the browser context and native Facebook, Instagram, and Yandex apps for Android.

While the technical underpinnings differ, both Meta Pixel and Yandex Metrica perform a “weird protocol misuse” to exploit the unvetted access Android grants to localhost ports on the 127.0.0.1 IP address. Browsers access these ports without user notification. The Facebook, Instagram, and Yandex native apps silently listen on those ports, copy identifiers in real time, and link them to the user logged into the app.
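The localhost channel can be sketched in a few lines. In this toy Python model, a plain TCP socket stands in for the real HTTP/WebRTC traffic, and the port number and identity values are illustrative: one thread plays the native app’s background service listening on 127.0.0.1, while the main thread plays the tracker script handing over an ephemeral web cookie, which the “app” then ties to its logged-in account.

```python
import socket
import threading

PORT = 12387  # one of the TCP ports observed in the report; illustrative here

def native_app_listener(results, ready):
    """Stands in for a native app's background service on localhost."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    web_id = conn.recv(1024).decode()  # the identifier sent from the web context
    # Link the ephemeral web cookie to the persistent logged-in identity:
    results["linked"] = (web_id, "app_user_42")
    conn.close()
    srv.close()

ready = threading.Event()
results = {}
t = threading.Thread(target=native_app_listener, args=(results, ready))
t.start()
ready.wait()

# Stands in for the tracker script in the browser: it sends the ephemeral
# web cookie to the local port, with no user notification.
cli = socket.create_connection(("127.0.0.1", PORT))
cli.sendall(b"_fbp=fb.1.1700000000.123456789")
cli.close()
t.join()

print(results["linked"])
```

The essential point is that nothing in this exchange crosses a network boundary or a permission prompt: both ends sit on the same device, and the sandbox between browser and app is bridged by an ordinary socket.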

A representative for Google said the behavior violates the terms of service for its Play marketplace and the privacy expectations of Android users.

“The developers in this report are using capabilities present in many browsers across iOS and Android in unintended ways that blatantly violate our security and privacy principles,” the representative said, referring to the people who write the Meta Pixel and Yandex Metrica JavaScript. “We’ve already implemented changes to mitigate these invasive techniques and have opened our own investigation and are directly in touch with the parties.”

Meta didn’t answer emailed questions for this article, but provided the following statement: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”

Yandex representatives didn’t answer an email seeking comment.

How Meta and Yandex de-anonymize Android users

Meta Pixel developers have abused various protocols to implement the covert listening since the practice began last September. They started by causing apps to send HTTP requests to port 12387. A month later, Meta Pixel stopped sending this data, even though Facebook and Instagram apps continued to monitor the port.

In November, Meta Pixel switched to a new method that invoked WebSocket, a protocol for two-way communications, over port 12387.

That same month, Meta Pixel also deployed a new method that used WebRTC, a real-time peer-to-peer communication protocol commonly used for making audio or video calls in the browser. This method used a complicated process known as SDP munging, a technique for JavaScript code to modify Session Description Protocol data before it’s sent. Still in use today, the SDP munging by Meta Pixel inserts key _fbp cookie content into fields meant for connection information. This causes the browser to send that data as part of a STUN request to the Android local host, where the Facebook or Instagram app can read it and link it to the user.
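The munging itself amounts to string surgery on the SDP text before it is handed back to the browser. The sketch below, in Python for brevity, assumes the payload rides in the ice-ufrag attribute (the exact field is an assumption of this sketch); the point is that a credential meant for connection setup ends up carrying cookie content, which the browser then emits toward localhost in STUN traffic.

```python
def munge_sdp(sdp: str, payload: str) -> str:
    """Sketch of SDP munging: rewrite a connection-setup field so it
    smuggles tracker data instead of a random credential."""
    out = []
    for line in sdp.splitlines():
        if line.startswith("a=ice-ufrag:"):
            line = "a=ice-ufrag:" + payload  # smuggled cookie content
        out.append(line)
    return "\r\n".join(out)

offer = (
    "v=0\r\n"
    "o=- 46117 2 IN IP4 127.0.0.1\r\n"
    "a=ice-ufrag:F7gI\r\n"
    "a=ice-pwd:x9cml/YzichV2+XlhiMu8g"
)
munged = munge_sdp(offer, "fb.1.1700000000.123456789")
print(munged)
```

This also hints at why blocking one munging variant is a cat-and-mouse game: the tracker controls the JavaScript side and can shift the payload to a different field or protocol, as the TURN fallback described below shows.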

In May, a beta version of Chrome introduced a mitigation that blocked the type of SDP munging Meta Pixel used. Within days, Meta Pixel circumvented the mitigation with a new method that replaced the STUN requests with TURN requests.

In a post, the researchers provided a detailed description of the flow of the _fbp cookie from a website to the native app and, from there, to the Meta server:

1. The user opens the native Facebook or Instagram app, which eventually is sent to the background and creates a background service to listen for incoming traffic on a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580–12585). Users must be logged in with their credentials on the apps.

2. The user opens their browser and visits a website integrating the Meta Pixel.

3. At this stage, some websites wait for users’ consent before embedding Meta Pixel. In our measurements of the top 100K website homepages, we found websites that require consent to be a minority (more than 75% of affected sites do not require user consent)…

4. The Meta Pixel script is loaded and the _fbp cookie is sent to the native Instagram or Facebook app via WebRTC (STUN) SDP Munging.

5. The Meta Pixel script also sends the _fbp value in a request to https://www.facebook.com/tr along with other parameters such as page URL (dl), website and browser metadata, and the event type (ev) (e.g., PageView, AddToCart, Donate, Purchase).

6. The Facebook or Instagram apps receive the _fbp cookie from the Meta JavaScripts running on the browser and transmit it to the GraphQL endpoint (https://graph[.]facebook[.]com/graphql) along with other persistent user identifiers, linking users’ fbp ID (web visit) with their Facebook or Instagram account.
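The core of the side channel in the steps above is ordinary loopback networking. The sketch below, a simplified stand-in rather than either company's actual code, has one thread play the "native app" listening on a local TCP port while the "browser" side connects to 127.0.0.1 and writes the _fbp cookie. The real channel used fixed ports such as 12387; an ephemeral port is used here so the sketch runs anywhere.

```python
# Sketch of the localhost side channel: the "native app" listens on a
# loopback TCP port; the "browser" (tracker JavaScript in reality)
# connects and hands over the identifier.
import socket
import threading

received = []

# "Native app" side: bind a loopback socket and wait for traffic.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # the real trackers used fixed ports like 12387
srv.listen(1)
port = srv.getsockname()[1]

def native_app_listener() -> None:
    conn, _ = srv.accept()
    received.append(conn.recv(1024).decode())  # identifier arrives here
    conn.close()

t = threading.Thread(target=native_app_listener)
t.start()

# "Browser" side: send the cookie to the local host.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"_fbp=fb.1.1700000000000.123456789")
cli.close()
t.join()
srv.close()
print(received[0])
```

Nothing in this exchange leaves the device, which is why it bypasses sandboxing, cookie controls, and even incognito mode: the identifier crosses the browser/app boundary entirely over loopback.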

Detailed flow of the way the Meta Pixel leaks the _fbp cookie from Android browsers to its Facebook and Instagram apps.

The first known instance of Yandex Metrica linking websites visited in Android browsers to app identities was in May 2017, when the tracker started sending HTTP requests to local ports 29009 and 30102. In May 2018, Yandex Metrica also began sending the data through HTTPS to ports 29010 and 30103. Both methods remained in place as of publication time.

An overview of Yandex identifier sharing

A timeline of web history tracking by Meta and Yandex

Some browsers for Android have blocked the abusive JavaScript in trackers. DuckDuckGo, for instance, was already blocking domains and IP addresses associated with the trackers, preventing the browser from sending any identifiers to Meta. The browser also blocked most of the domains associated with Yandex Metrica. After the researchers notified DuckDuckGo of the incomplete blacklist, developers added the missing addresses.

The Brave browser, meanwhile, also blocked the sharing of identifiers due to its extensive blocklists and existing mitigation to block requests to the localhost without explicit user consent. Vivaldi, another Chromium-based browser, forwards the identifiers to local Android ports when the default privacy setting is in place. Changing the setting to block trackers appears to thwart browsing history leakage, the researchers said.
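The browser mitigations described above combine two checks: a blocklist of known tracker hostnames and a gate on requests to the local host absent explicit user consent. A rough sketch of that decision logic, with illustrative names and list contents rather than any browser's actual implementation:

```python
# Illustrative sketch of the two complementary mitigations: a tracker
# hostname blocklist plus a localhost gate. Domains listed are examples,
# not any browser's real blocklist.
import ipaddress
from urllib.parse import urlparse

TRACKER_BLOCKLIST = {"connect.facebook.net", "mc.yandex.ru"}

def is_local_destination(host: str) -> bool:
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False  # ordinary hostname, not an IP literal

def should_block(url: str, allow_localhost: bool = False) -> bool:
    host = urlparse(url).hostname or ""
    if host in TRACKER_BLOCKLIST:
        return True  # known tracker domain
    if is_local_destination(host) and not allow_localhost:
        return True  # localhost access without explicit consent
    return False

print(should_block("https://mc.yandex.ru/metrika/tag.js"))  # True
print(should_block("http://127.0.0.1:12387/"))              # True
print(should_block("https://example.com/"))                 # False
```

As the researchers note below, the blocklist half of this scheme is inherently reactive: trackers can rotate hostnames faster than lists can be updated, which is why the localhost gate matters more in the long run.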

Tracking blocker settings in Vivaldi for Android.

There’s got to be a better way

The various remedies DuckDuckGo, Brave, Vivaldi, and Chrome have put in place are working as intended, but the researchers caution they could become ineffective at any time.

“Any browser doing blocklisting will likely enter into a constant arms race, and it’s just a partial solution,” Vallina Rodriguez said of the current mitigations. “Creating effective blocklists is hard, and browser makers will need to constantly monitor the use of this type of capability to detect other hostnames potentially abusing localhost channels and then updating their blocklists accordingly.”

He continued:

While this solution works once you know the hostnames doing that, it’s not the right way of mitigating this issue, as trackers may find ways of accessing this capability (e.g., through more ephemeral hostnames). A long-term solution should go through the design and development of privacy and security controls for localhost channels, so that users can be aware of this type of communication and potentially enforce some control or limit this use (e.g., a permission or some similar user notifications).

Chrome and most other Chromium-based browsers executed the JavaScript as Meta and Yandex intended. Firefox did as well, although for reasons that aren’t clear, the browser was not able to successfully perform the SDP munging specified in later versions of the code. After blocking the STUN variant of SDP munging in the early May beta release, a production version of Chrome released two weeks ago began blocking both the STUN and TURN variants. Other Chromium-based browsers are likely to implement it in the coming weeks. A representative for Firefox-maker Mozilla said the organization prioritizes user privacy and is taking the report seriously.

“We are actively investigating the reported behavior, and working to fully understand its technical details and implications,” Mozilla said in an email. “Based on what we’ve seen so far, we consider these to be severe violations of our anti-tracking policies, and are assessing solutions to protect against these new tracking techniques.”

The researchers warn that the current fixes are so specific to the code in the Meta and Yandex trackers that it would be easy to bypass them with a simple update.

“They know that if someone else comes in and tries a different port number, they may bypass this protection,” said Gunes Acar, the researcher behind the initial discovery, referring to the Chrome developer team at Google. “But our understanding is they want to send this message that they will not tolerate this form of abuse.”

Fellow researcher Vallina-Rodriguez said the more comprehensive way to prevent the abuse is for Android to overhaul the way it handles access to local ports.

“The fundamental issue is that the access to the local host sockets is completely uncontrolled on Android,” he explained. “There’s no way for users to prevent this kind of communication on their devices. Because of the dynamic nature of JavaScript code and the difficulty to keep blocklists up to date, the right way of blocking this persistently is by limiting this type of access at the mobile platform and browser level, including stricter platform policies to limit abuse.”

Got consent?

The researchers who made this discovery are:

  • Aniketh Girish, PhD student at IMDEA Networks
  • Gunes Acar, assistant professor in Radboud University’s Digital Security Group & iHub
  • Narseo Vallina-Rodriguez, associate professor at IMDEA Networks
  • Nipuna Weerasekara, PhD student at IMDEA Networks
  • Tim Vlummens, PhD student at COSIC, KU Leuven

Acar said he first noticed Meta Pixel accessing local ports while visiting his own university’s website.

There’s no indication that Meta or Yandex has disclosed the tracking to either websites hosting the trackers or end users who visit those sites. Developer forums show that many websites using Meta Pixel were caught off guard when the scripts began connecting to local ports.

“Since 5th September, our internal JS error tracking has been flagging failed fetch requests to localhost: 12387,” one developer wrote. “No changes have been made on our side, and the existing Facebook tracking pixel we use loads via Google Tag Manager.”

“Is there some way I can disable this?” another developer encountering the unexplained local port access asked.

It’s unclear whether browser-to-native-app tracking violates any privacy laws in various countries. Both Meta and companies hosting its Meta Pixel, however, have faced a raft of lawsuits in recent years alleging that the data collected violates privacy statutes. A research paper from 2023 found that Meta Pixel, then called the Facebook Pixel, “tracks a wide range of user activities on websites with alarming detail, especially on websites classified as sensitive categories under GDPR,” the abbreviation for the European Union’s General Data Protection Regulation.

So far, Google has provided no indication that it plans to redesign the way Android handles local port access. For now, the most comprehensive protection against Meta Pixel and Yandex Metrica tracking is to refrain from installing the Facebook, Instagram, or Yandex apps on Android devices.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.

Meta and Yandex are de-anonymizing Android users’ web browsing identifiers Read More »

breaking-down-why-apple-tvs-are-privacy-advocates’-go-to-streaming-device

Breaking down why Apple TVs are privacy advocates’ go-to streaming device


Using the Apple TV app or an Apple account means giving Apple more data, though.

Credit: Aurich Lawson | Getty Images

Every time I write an article about the escalating advertising and tracking on today’s TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven.

“Just disconnect your TV from the Internet and use an Apple TV box.”

That’s the common guidance you’ll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we’ve consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers.

But how private are Apple TV boxes, really? Apple TVs don’t use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple?

In this article, we’ll delve into what makes the Apple TV’s privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever.

Apple TV boxes limit tracking out of the box

One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple’s data and privacy policies. Also off by default is the boxes’ ability to send voice input data to Apple.

Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be.

Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users.

“If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier (IDFA), which is often used to track,” Apple says. “The app is also not permitted to track your activity using other information that identifies you or your device, like your email address.”

Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default.

The Apple TV also lets users control which apps can access the set-top box’s Bluetooth functionality, photos, music, and HomeKit data (if applicable), and the remote’s microphone.

“Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data,” said RJ Cross, director of the consumer privacy program at the Public Interest Research Group (PIRG). “I personally trust them more with my data than other tech companies.”

What if you share analytics data?

If you allow your Apple TV to share analytics data with Apple or app developers, that data won’t be personally identifiable, Apple says. Any collected personal data is “not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy,” Apple says.

Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation (PDF), Apple details its use of differential privacy:

The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don’t receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees.
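Apple's deployed mechanisms are more sophisticated than this, but the textbook example of local differential privacy is randomized response, and it illustrates the principle in the passage above: each device injects noise before anything leaves it, yet the aggregator can still recover accurate population statistics. A toy sketch:

```python
# Toy illustration of *local* differential privacy via randomized
# response. Each device reports the truth only half the time; the
# other half it reports a coin flip, so no single report reveals
# that device's true value.
import random

def randomized_response(truth: bool, rng: random.Random) -> bool:
    if rng.random() < 0.5:
        return truth
    return rng.random() < 0.5  # random answer, independent of the truth

rng = random.Random(42)  # seeded for reproducibility
true_rate = 0.30         # fraction of devices with some attribute
n = 100_000
reports = [randomized_response(rng.random() < true_rate, rng)
           for _ in range(n)]

# The aggregator sees only noisy reports, but since
# P(report=True) = 0.5 * true_rate + 0.25, it can invert:
observed = sum(reports) / n
estimate = (observed - 0.25) / 0.5
print(round(estimate, 2))
```

The estimate lands close to the true 30 percent even though any individual report is deniable; stronger mechanisms like the ones Apple describes trade off this noise level against accuracy more carefully.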

What if you use an Apple account with your Apple TV?

Another factor to consider is Apple’s privacy policy regarding Apple accounts, formerly Apple IDs.

Apple support documentation says you “need” an Apple account to use an Apple TV, but you can use the hardware without one. Still, it’s common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes.

So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as “data about your activity on and use of” Apple offerings, including “app launches within our services…; browsing history; search history; [and] product interaction.”

Other types of data Apple may collect from Apple accounts include transaction information (Apple says this is “data about purchases of Apple products and services or transactions facilitated by Apple, including purchases on Apple platforms”), account information (“including email address, devices registered, account status, and age”), device information (including serial number and browser type), contact information (including physical address and phone number), and payment information (including bank details). None of that is surprising considering the type of data needed to make an Apple account work.

Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. And if you use the same Apple account across multiple devices, Apple can link all the data it has collected from, for example, your iPhone activity to you as an Apple TV user.

A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won’t be able to use the Apple TV app, one of the device’s key features.

Data collection via the Apple TV app

You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of many (but not all) popular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is.

As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app’s privacy policy, “information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices.” That all makes sense for ensuring that the app remembers things like which episode of Severance you’re on across devices.

Apple collects other data, though, that isn’t necessary for functionality. It says it gathers data on things like the “features you use (for example, Continue Watching or Library),” content pages you view, how you interact with notifications, and approximate location information (that Apple says doesn’t identify users) to help improve the app.

Additionally, Apple tracks the terms you search for within the app, per its policy:

We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model.

This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate.

Data collected from the Apple TV app used for ads

By default, the Apple TV app also tracks “what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app” to make personalized content recommendations. Content recommendations aren’t ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you.

You can disable the Apple TV app’s personalized recommendations, but it’s a little harder than you might expect since you can’t do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off.

The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project (STOP), noted to Ars that even though Apple TV users can opt out of personalized content recommendations, “many will not realize they can.”

Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can “be used to serve geographically relevant ads,” according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes.

Apple’s tvOS doesn’t have integrated ads. For comparison, some TV OSes, like Roku OS and LG’s webOS, show ads on the OS’s home screen and/or when showing screensavers.

But data gathered from the Apple TV app can still help Apple’s advertising efforts. This can happen if you allow personalized ads in other Apple apps serving targeted ads, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, “including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you,” the Apple TV app privacy policy says.

Apple also provides third-party advertisers and strategic partners with “non-personal data” gathered from the Apple TV app:

We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple’s behalf to display on the App Store and Apple News and Stocks.

Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, “and improve their associated products and services,” Apple says.

Apple’s policy notes:

For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographics[,] such as age group and gender (which may be inferred from information such as your name and salutation in your Apple Account), to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative work [and] meet royalty and accounting requirements.

When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app.

All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies “in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements.”

Warner Bros. Discovery says it discloses information about Max viewers “with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests.” And Disney+ users have Nielsen tracking on by default.

What if you use Siri?

You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit.

According to the privacy policy accessible in Apple TV boxes’ settings, Apple boxes automatically send all Siri requests to Apple’s servers. If you opt into using Siri data to “Improve Siri and Dictation,” Apple will store your audio data. If you opt out, audio data won’t be stored, but per the policy:

In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple.

Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn’t store the audio but may store transcriptions of the audio.

If you opt to “Improve Siri and Dictation,” Apple says your history of voice requests isn’t tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option.

The policy states:

Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. After six months, your request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products …

Apple may also review a subset of the transcripts of your interactions and this … may be kept beyond two years for the ongoing improvements of products and services.

Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn’t always adhered to that commitment. In January, Apple agreed to pay $95 million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. In 2019, contractors reported hearing private conversations and recorded sex via Siri-gathered audio.

Outside of Apple, we’ve seen voice request data used questionably, including in criminal trials and by corporate employees. Siri and dictation data also represent additional ways a person’s Apple TV usage might be unexpectedly analyzed to fuel Apple’s business.

Automatic content recognition

Apple TVs aren’t preloaded with automatic content recognition (ACR), an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point.
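At its core, ACR works by sampling what's on screen, reducing it to a compact fingerprint, and matching that against a vendor database of known content. The sketch below is a deliberately simplified, hypothetical illustration of that lookup: real ACR systems use perceptual audio or video fingerprints that survive scaling and compression, not an exact hash, and the database contents here are invented.

```python
# Highly simplified, conceptual sketch of ACR: fingerprint screen
# content and look it up in a reference database. Real systems use
# robust perceptual fingerprints; an exact hash is the simplest
# stand-in for illustration.
import hashlib

def fingerprint(frame_pixels: bytes) -> str:
    return hashlib.sha256(frame_pixels).hexdigest()[:16]

# Hypothetical reference database built by the ACR vendor:
known_content = {
    fingerprint(b"frame-from-show-A"): "Show A, S1E1",
    fingerprint(b"frame-from-show-B"): "Show B, S2E4",
}

def identify(frame_pixels: bytes) -> str:
    return known_content.get(fingerprint(frame_pixels), "unknown")

print(identify(b"frame-from-show-A"))  # Show A, S1E1
print(identify(b"frame-not-in-db"))    # unknown
```

The matching step is simple; as the discussion below makes clear, the hard part of retrofitting ACR is getting access to the frame buffer and audio stream on hardware that wasn't architected for it.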

Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it’s technically possible for Apple to add ACR to already-purchased Apple boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs. (Enswers was acquired by Gracenote, which Nielsen now owns.)

In general, though, there are challenges to adding ACR to hardware that people already own, Li explained:

Everyone believes, in theory, you can add ACR anywhere you want at any time because it’s software, but because of the way [hardware is] architected… the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations.

Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, “including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation.”

Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route.

If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV’s biggest draws.

However, Apple reportedly has a growing interest in advertising to streaming subscribers. The Apple TV+ streaming service doesn’t currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom’s ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph.

Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. Further, Apple TV+ is one of the few streaming services to remain ad-free, and it’s reported to be losing Apple $1 billion per year since its launch.

One day soon, Apple may have much more reason to care about advertising in streaming and being able to track the activities of people who use its streaming offerings. That has implications for Apple TV box users.

“The more Apple creeps into the targeted ads space, the less I’ll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads,” PIRG’s Cross said.

Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers.

“Apple’s commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike,” Apple’s advertising guide for buying ads on Apple News and Stocks reads.

The most private streaming gadget

It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs (which are incredibly hard to find these days). And if Apple follows its own policies, much of the data it gathers should be kept in-house.

However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple’s streaming hardware and software settings prioritize privacy by default, some advocates believe there’s room for improvement.

For example, STOP’s Maestro said:

Unlike in the [European Union], where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple’s servers. Users are left with little way to verify those privacy promises.

Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. “Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple’s public commitment to privacy,” Maestro said.

There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money.

As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren’t totally incapable of tracking you. But they’re still the best recommendation for streaming users seeking hardware with more privacy and fewer ads.

Photo of Scharon Harding

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Breaking down why Apple TVs are privacy advocates’ go-to streaming device Read More »

my-3d-printing-journey,-part-2:-printing-upgrades-and-making-mistakes

My 3D printing journey, part 2: Printing upgrades and making mistakes


3D-printing new parts for the A1 taught me a lot about plastic, and other things.

Different plastic filament is good for different things (and some kinds don’t work well with the A1 and other open-bed printers). Credit: Andrew Cunningham

For the last three months or so, I’ve been learning to use (and love) a Bambu Lab A1 3D printer, a big, loud machine that sits on my desk and turns pictures on my computer screen into real-world objects.

In the first part of my series about diving into the wild world of 3D printers, I covered what I’d learned about the different types of 3D printers, some useful settings in the Bambu Studio app (which should also be broadly useful to know about no matter what printer you use), and some initial, magical-feeling successes in downloading files that I turned into useful physical items using a few feet of plastic filament and a couple hours of time.

For this second part, I’m focusing on what I learned when I embarked on my first major project—printing upgrade parts for the A1 with the A1. It was here that I made some of my first big 3D printing mistakes, mistakes that prompted me to read up on the different kinds of 3D printer filament, what each type of filament is good for, and which types the A1 is (and is not) good at handling as an un-enclosed, bed-slinging printer.

As with the information in part one, I share this with you not because it is groundbreaking but because there’s a lot of information out there, and it can be an intimidating hobby to break into. By sharing what I learned and what I found useful early in my journey, I hope I can help other people who have been debating whether to take the plunge.

Adventures in recursion: 3D-printing 3D printer parts

A display cover for the A1’s screen will protect it from wear and tear and allow you to easily hide it when you want to. Credit: Andrew Cunningham

My very first project was a holder for my office’s ceiling fan remote. My second, similarly, was a wall-mounted holder for the Xbox gamepad and wired headset I use with my gaming PC, which normally just had to float around loose on my desk when I wasn’t using them.

These were both relatively quick, simple prints that showed the printer was working like it was supposed to—all of the built-in temperature settings, the textured PEI plate, the printer’s calibration and auto-bed-leveling routines added up to make simple prints as dead-easy as Bambu promised they would be. It made me eager to seek out other prints, including stuff on the Makerworld site I hadn’t thought to try yet.

The first problem I had? Well, as part of its pre-print warmup routine, the A1 spits a couple of grams of filament out and tosses it to the side. This is totally normal—it’s called “purging,” and it gets rid of filament that’s gone brittle from being heated too long. If you’re changing colors, it also clears any last bits of the previous color that are still in the nozzle. But it didn’t seem particularly elegant to have the printer eternally launching little knots of plastic onto my desk.

The A1’s default design just ejects little molten wads of plastic all over your desk when it’s changing or purging filament. This is one of many waste bin (or “poop bucket”) designs made to catch and store these bits and pieces. Credit: Andrew Cunningham

The solution to this was to 3D-print a purging bucket for the A1 (also referred to, of course, as a “poop bucket” or “poop chute”). In fact, there are tons of purging buckets designed specifically for the A1 because it’s a fairly popular budget model, and there’s nothing stopping people from making parts that fit it like a glove.

I printed this bucket, as well as an additional little bracket that would “catch” the purged filament and make sure it fell into the bucket. And this opened the door to my first major printing project: printing additional parts for the printer itself.

I took to YouTube and watched a couple of videos on the topic because I’m apparently far from the first person who has had this reaction to the A1. After much watching and reading, here are the parts I ended up printing:

  • Bambu Lab AMS Lite Top Mount and Z-Axis Stiffener: The Lite version of Bambu’s Automated Materials System (AMS) is the optional accessory that enables multi-color printing for the A1. And like the A1 itself, it’s a lower-cost, open-air version of the AMS that works with Bambu’s more expensive printers.
    • The AMS Lite comes with a stand that you can use to set it next to the A1, but that’s more horizontal space than I had to spare. This top mount is Bambu’s official solution for putting the AMS Lite on top of the A1 instead, saving you some space.
    • The top mount actually has two important components: the top mount itself and a “Z-Axis Stiffener,” a pair of legs that extend behind the A1 to make the whole thing more stable on a desk or table. Bambu already recommends 195 mm (or 7.7 inches) of “safety margin” behind the A1 to give the bed room to sling, so if you’ve left that much space behind the printer, you probably have enough space for these legs.
    • After installing the stiffener legs, the top mount, and a fully loaded AMS Lite, it’s probably a good idea to run the printer’s calibration cycle again to account for the change in balance.
    • You may want to print the top mount itself with PETG, which is a bit stronger and more impact-resistant than PLA plastic.
  • A1 Purge Waste Bin and Deflector, by jimbobble: There are approximately 1 million different A1 purge bucket designs, each with its own appeal. But this one is large and simple, and it includes a version that’s compatible with the Z-Axis Stiffener legs.
  • A1 rectangular fan cover, by Arzhang Lotfi: There are a bunch of options for this, including fun ones, but you can find dozens of simple grille designs that snap into place and protect the fan on the A1’s print head.
  • Bambu A1 Adjustable Camera Holder, by mlodybuk: This one’s a little more complicated because it does require some potentially warranty-voiding disassembly of components. The A1’s camera is also pretty awful no matter how you position it, with sub-1 FPS video that’s just barely suitable for checking on whether a print has been ruined or not.
    • But if you want to use it, I’d highly recommend moving it from the default location, which sits low down and at an odd angle that doesn’t give you the best possible view of your print.
    • This print includes a redesigned cover for the camera area, a filler piece to fill the hole where the camera used to be to keep dust and other things from getting inside the printer, and a small camera receptacle that snaps in place onto the new cover and can be turned up and down.
    • If you’re not comfortable modding your machine like this, the camera is livable as-is, but this got me a much better vantage point on my prints.

With a little effort, this print allows you to reposition the A1’s camera, giving you a better angle on your prints and making it adjustable. Credit: Andrew Cunningham

  • A1 Screen Protector New Release, by Rox3D: Not strictly necessary, but an unobtrusive way to protect (and to “turn off”) the A1’s built-in LCD screen when it’s not in use. The hinge mechanism of this print is stiff enough that the screen cover can be lifted partway without flopping back down.
  • A1 X-Axis Cover, by Moria3DPStudio: Another only-if-you-want-it print, this foldable cover slides over the A1’s exposed rail when you’re not using it. Just make sure you take it back off before you try to print anything—it won’t break anything, but the printer won’t be happy with you. Not that I’m speaking from experience.
  • Ultimate Filament Spool Enclosure for the AMS Lite, by Supergrapher: Here’s the big one, and it’s a true learning experience for all kinds of things. The regular Bambu AMS system for the P- and X-series printers is enclosed, which is useful not just for keeping dust from settling on your filament spools but for controlling humidity and keeping spools you’ve dried from re-absorbing moisture. There’s no first-party enclosure for the AMS Lite, but this user-created enclosure is flexible and popular, and it can be used to enclose the AMS Lite whether you have it mounted on top of or to the side of the A1. The small plastic clips that keep the lids on are mildly irritating to pop on and off, relative to a lid that you can just lift up and put back down, but the benefits are worth it.
  • 3D Disc for A1 – “Pokéball,” by BS 3D Print: One of the few purely cosmetic parts I’ve printed. The little spinning bit on the front of the A1’s print head shows you when the filament is being extruded, but it’s not a functional part. This is just one of dozens and dozens of cosmetic replacements for it if you choose to pop it off.
  • Sturdy Modular Filament Spool Rack, by Antiphrasis: Not technically an upgrade for the A1, but an easy recommendation for any new 3D printer owner who suddenly finds themselves with a rainbow of a dozen-plus filaments to try. Each shelf here holds three spools of filament, and you can print additional shelves to extend it horizontally, vertically, or both, so you can make something that exactly meets your needs and fits your space. A two-by-three shelf gave me room for 18 spools, and I can print more if I need them.

There are some things that others recommend for the A1 that I haven’t printed yet—mainly guides for cables, vibration dampeners for the base, and things to reinforce areas of possible stress for the print head and the A1’s loose, dangly wire.

Part of the fun is figuring out what your problems are, identifying prints that could help solve the problem, and then trying them out to see if they do solve your problem. (The parts have also given my A1 its purple accents, since a bright purple roll of filament was one of the first ones my 5-year-old wanted to get.)

Early mistakes

The “Z-Axis stiffener,” an extra set of legs for the A1 that Bambu recommends if you top-mount your AMS Lite. This took me three tries to print, mainly because of my own inexperience. Credit: Andrew Cunningham

Printing each of these parts gave me a solid crash course into common pitfalls and rookie mistakes.

For example, did you know that ABS plastic doesn’t print well on an open-bed printer? Well, it doesn’t! But I didn’t know that when I bought a spool of ABS to print some parts that I wanted to be sturdier and more resistant to wear and tear. I’d open the window and leave the room to deal with the fumes and be fine, I figured.

I tried printing the Z-Axis Stiffener supports for the A1 in ABS, but they went wonky. Lower bed temperatures and (especially) lower ambient temperatures tend to make ABS warp and curl upward, and extrusion-based printers rely on precision to do their thing. Once a layer—any layer!—gets screwed up during a print, the error reverberates through the entire rest of the object, which is why my first attempt at the supports ended up totally unusable.

Large ABS plastic prints are tough to do on an open-bed printer. You can see here how that lower-left corner peeled upward slightly from the print bed, and any unevenness in the foundation of your print is going to reverberate in the layers that are higher up. Credit: Andrew Cunningham

I then tried printing another set of supports with PLA plastic, ones that claimed to maintain their sturdiness while using less infill (that is, how much plastic is actually used inside the print to give it rigidity—around 15 percent is typically a good balance between rigidity and wasting plastic that you’ll never see, though there may be times when you want more or less). I’m still not sure what I did, but the prints I got were squishy and crunchy to the touch, a clear sign that the amount and/or type of infill wasn’t sufficient. It wasn’t until my third try—the original Bambu-made supports, in PLA instead of ABS—that I made supports I could actually use.

An attempt at printing the same part with PLA, but with insufficient infill plastic that left my surfaces rough and the interiors fragile and crunchy. I canceled this one about halfway through when it became clear that something wasn’t right. Credit: Andrew Cunningham

After much reading and research, I learned that, for most things, PETG is the plastic to use if you want sturdier (and outdoor-friendly) prints on an open bed. Great! I decided I’d print most of the AMS Lite enclosure with clear PETG filament to make something durable that I could also see through when I wanted to check how much filament was left on a given spool.

This ended up being a tricky first experiment with PETG plastic for three different reasons. For one, printing “clear” PETG that actually looks clear is best done with a larger nozzle (Bambu offers 0.2 mm, 0.6 mm, and 0.8 mm nozzles for the A1, in addition to the default 0.4 mm) because you can get the same work done in fewer layers, and the more layers you have, the less “clear” that clear plastic will be. Fine!

The Inland-brand clear PETG+ I bought from our local Micro Center also didn’t love the A1’s default temperature settings for generic PETG, either for the heatbed or for the filament itself; plastic flowed unevenly from the nozzle and was prone to coming detached from the bed. If this happens to you (or if you want to experiment with lowering your temperatures to save a bit of energy), try going into Bambu Studio, nudging temperatures by 5 degrees in either direction, and running a quick test print (I like this one); that’s how I dialed in my settings when using unfamiliar filament.
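That nudge-and-test loop is easy to systematize. Here’s a minimal Python sketch that builds a small grid of nozzle/bed temperature combinations to try, stepping 5° C in either direction from a starting point. The 255° C/70° C defaults below are assumptions for a generic PETG profile, not Bambu’s actual preset values—substitute the numbers from your own slicer profile.

```python
# Hypothetical helper: enumerate temperature combinations to test when
# dialing in an unfamiliar filament, stepping 5 C either way from your
# slicer profile's defaults (the 255/70 values below are assumptions).

def temp_grid(nozzle_default, bed_default, step=5, steps_each_way=1):
    """Return (nozzle, bed) pairs to try, nearest the defaults first."""
    offsets = sorted(range(-steps_each_way, steps_each_way + 1), key=abs)
    return [(nozzle_default + n * step, bed_default + b * step)
            for n in offsets for b in offsets]

for nozzle, bed in temp_grid(255, 70):
    print(f"test print at nozzle {nozzle} C / bed {bed} C")
```

Running the combinations nearest the defaults first keeps you from wasting filament on extreme settings you’ll probably never need.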

This homebrewed enclosure for the AMS Lite multi-color filament switcher (and the top mount that sticks it on top of the printer) has been my biggest and most complex print to date. A 0.8 mm nozzle and some settings changes are recommended to maximize the transparency of transparent PETG filament. Credit: Andrew Cunningham

Finally, PETG is especially prone to absorbing ambient moisture. When that moisture hits a nozzle running at around 260° C, it quickly boils off, and that can interfere with the evenness of the flow rate and the cleanliness of your print (this usually manifests as “stringing”: fine, almost cotton-y strands that hang off your finished prints).

You can buy dedicated filament drying boxes or stick spools in an oven at a low temperature for a few hours if this really bothers you or if it’s significant enough to affect the quality of your prints. One of the reasons to have an enclosure is to create a humidity-controlled environment to keep your spools from absorbing too much moisture in the first place.

The temperature and nozzle-size adjustments made me happy enough with my PETG prints that I was fine to pick off the little fuzzy stringers that were on my prints afterward, but your mileage may vary.

These are just a few examples of the kinds of things you learn if you jump in with both feet and experiment with different prints and plastics in rapid succession. Hopefully, this advice helps you avoid my specific mistakes. But the main takeaway is that experience is the best teacher.

The wide world of plastics

I used filament to print a modular filament shelf for my filaments. Credit: Andrew Cunningham

My wife had gotten me two spools of filament, a white and a black spool of Bambu’s own PLA Basic. What does all of that mean?

No matter what you’re buying, it’s most commonly sold in 1 kilogram spools (the weight of the plastic, not the plastic and the spool together). Each thing you print will give you an estimate of how much filament, in grams, you’ll need to print it.
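Since filament is sold by weight and the slicer estimates each print’s weight in grams, a little arithmetic tells you roughly what a print costs in material and how many copies one spool can supply. A quick sketch—the $20 spool price and 45 g print weight are made-up example numbers, not anything from a real catalog:

```python
def cost_per_print(spool_price, spool_grams, print_grams):
    """Material cost of one print, given a spool's price and net weight."""
    return spool_price * print_grams / spool_grams

def prints_per_spool(spool_grams, print_grams):
    """How many whole copies of a print one spool can supply."""
    return spool_grams // print_grams

# Assumed example: a $20, 1 kg spool and a 45 g print.
print(round(cost_per_print(20.0, 1000, 45), 2))  # -> 0.9 (dollars)
print(prints_per_spool(1000, 45))                # -> 22 whole prints
```

This ignores purged and failed filament, so real-world yield per spool will run a bit lower.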

There are quite a few different types of plastics out there, on Bambu’s site and in other stores. But here are the big ones I found out about almost immediately:

Polylactic acid, or PLA

By far the most commonly used plastic, PLA is inexpensive, available in a huge rainbow of colors and textures, and has a relatively low melting point, making it an easy material for most 3D printers to work with. It’s made of renewable material rather than petroleum, which makes it marginally more environmentally friendly than some other kinds of plastic. And it’s easy to “finish” PLA-printed parts if you’re trying to make props, toys, or other objects that you don’t want to have that 3D printed look about them, whether you’re sanding those parts or using a chemical to smooth the finish.

The downside is that it’s not particularly resilient—sitting in a hot car or in direct sunlight for very long is enough to melt or warp it, which makes it a bad choice for anything that needs to survive outdoors or bear a load. Its environmental bona fides are also a bit oversold—it is biodegradable, but it doesn’t break down quickly outside of specialized composting facilities. If you throw it in the trash and it goes to a landfill, it will still take its time returning to nature.

You’ll find a ton of different kinds of PLA out there. Some have additives that give them a matte or silky texture. Some have little particles of wood or metal or even coffee or spent beer grains embedded in them, meant to endow 3D printed objects with the look, feel, or smell of those materials.

Some PLA just has… some other kind of unspecified additive in it. You’ll see “PLA+” all over the place, but as far as I can tell, there is no industry-wide agreed-upon standard for what the plus is supposed to mean. Manufacturers sometimes claim it’s stronger than regular PLA; other terms like “PLA Pro” and “PLA Max” are similarly non-standardized and vague.

Polyethylene terephthalate glycol, or PETG

PET is a common household plastic, and you’ll find it in everything from clothing fibers to soda bottles. PETG is a glycol-modified (the “G”) version of the same material, tweaked to lower the melting point and make it less prone to crystallizing and warping. The modification also makes it more transparent, though trying to print anything truly “transparent” with an extrusion printer is difficult.

PETG has a higher melting point than PLA, but it’s still lower than other kinds of plastics. This makes PETG a good middle ground for some types of printing. It’s better than PLA for functional load-bearing parts and outdoor use because it’s stronger and able to bend a bit without warping, but it’s still malleable enough to print well on all kinds of home 3D printers.

PETG can still be fussier to work with than PLA. I more frequently had issues with the edges of my PETG prints coming unstuck from the bed of the printer before the print was done.

PETG filament is also especially susceptible to absorbing moisture from the air, which can make extrusion messier. My PETG prints have usually had lots of little wispy strings of plastic hanging off them by the end—not enough to affect the strength or utility of the thing I’ve printed but enough that I needed to pull the strings off to clean up the print once it was done. Drying the filament properly could help with that if I ever need the prints to be cleaner in the first place.

It’s also worth noting that PETG is the strongest kind of filament that an open-bed printer like the A1 can handle reliably. You can succeed with other plastics, but Reddit anecdotes, my own personal experience, and Bambu’s filament guide all point to a higher level of difficulty.

Acrylonitrile butadiene styrene, or ABS

“Going to look at the filament wall at Micro Center” is a legit father-son activity at this point. Credit: Andrew Cunningham

You probably have a lot of ABS plastic in your life. Game consoles and controllers, the plastic keys on most keyboards, Lego bricks, appliances, plastic board game pieces—it’s mostly ABS.

Thin 3D-printed layers of ABS stuck together aren’t as strong or durable as commercially injection-molded ABS, but printed ABS is still more heat-resistant and durable than 3D-printed PLA or PETG.

There are two big issues specific to ABS, which are also outlined in Bambu’s FAQ for the A1. The first is that it doesn’t print well on an open-bed printer, especially for larger prints. The corners are more prone to pulling up off the print bed, and as with a house, any problems in your foundation will reverberate throughout the rest of your print.

The second is fumes. All 3D-printed plastics emit fumes when they’ve been melted, and a good rule of thumb is to at least print things in a room where you can open the window (and not in a room where anyone or anything sleeps). But ABS and ASA plastics in particular can emit fumes that cause eye and respiratory irritation, headaches, and nausea if you’re printing them indoors with insufficient ventilation.

As for what quantity of printing counts as “dangerous,” there’s no real consensus, and the studies that have been done mostly land in inconclusive “further study is needed” territory. At a bare minimum, it’s considered a best practice to at least be able to open a window if you’re printing with ABS or to use a closed-bed printer in an unoccupied part of your home, like a garage, shed, or workshop space (if you have one).

Acrylonitrile styrene acrylate, or ASA

Described to me by Ars colleague Lee Hutchinson as “ABS but with more UV resistance,” this material is even better suited for outdoor applications than the other plastics on this list.

But also like ABS, you’ll have a hard time getting good results with an open-bed printer, and the fumes are more harmful to inhale. You’ll want a closed-bed printer and decent ventilation for good results.

Thermoplastic polyurethane, or TPU

TPU is best known for its flexibility relative to the other plastics on this list. It doesn’t get as brittle in the cold, it’s more impact-resistant, and it prints reasonably well on an open-bed printer.

One downside of TPU is that you need to print slowly to get reliably good results—a pain, when even relatively simple fidget toys can take an hour or two to print at full speed using PLA. Longer prints mean more power use and more opportunities for your print to peel off the print bed. A roll of TPU filament will also usually run you a few dollars more than a roll of PLA, PETG, or ABS.

First- or third-party filament?

The first-party Bambu spools have RFID chips that Bambu printers can scan to automatically identify the type and color of the filament and to keep track of how much remains. Bambu also builds temperature and speed presets for all of its first-party filaments into the printer and the Bambu Studio software. There are presets for a few other filament brands, too, but I usually ended up using the “generic” presets, which may need some tuning to ensure the best possible adhesion to the print bed and extrusion from the nozzle.

I mostly ended up using Inland-branded filament I picked up from my local Micro Center—both because it’s cheaper than Bambu’s first-party stuff and because it’s faster and easier for me to get to. If you don’t have a brick-and-mortar hobby store with filaments in stock, the A1 and other printers sometimes come with some sample filament swatches so you can see the texture and color of the stuff you’re buying online.

What’s next?

Part of the fun of 3D printing is that it can be used for a wide array of projects—organizing your desk or your kitchen, printing out little fidget-toy favors for your kid’s birthday party, printing out replacement parts for little plastic bits and bobs that have broken, or just printing out decorations and other objects you’ll enjoy looking at.

Once you’re armed with all of the basic information in this guide, the next step is really up to you. What would you find fun or useful? What do you need? How can 3D printing help you with other household tasks or hobbies that you might be trying to break into? For the last part of this series, the Ars staffers with 3D printers at home will share some of their favorite prints—hearing people talk about what they’d done themselves really opened my eyes to the possibilities and the utility of these devices, and more personal testimonials may help those of you who are on the fence to climb down off of it.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

My 3D printing journey, part 2: Printing upgrades and making mistakes


Where hyperscale hardware goes to retire: Ars visits a very big ITAD site

Inside the laptop/desktop examination bay at SK TES’s Fredericksburg, Va. site. Credit: SK TES

The details of each unit—CPU, memory, HDD size—are taken down and added to the asset tag, and the device is sent on to be physically examined. This step is important because “many a concealed drive finds its way into this line,” Kent Green, manager of this site, told me. Inside the machines coming from big firms, there are sometimes little USB, SD, SATA, or M.2 drives hiding out. Some were make-do solutions installed by IT and not documented, and others were put there by employees tired of waiting for more storage. “Some managers have been pretty surprised when they learn what we found,” Green said.

With everything wiped and with some sense of what they’re made of, each device gets a rating. It’s a three-character system, like “A-3-6,” based on function, cosmetic condition, and component value. Based on needs, trends, and other data, each device is then routed to wholesale, retail, component harvesting, or scrap.
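A rating like that is trivially machine-readable, which is presumably part of the point at this scale. Here’s a hypothetical Python sketch of how such a code might be split into its three fields—the field names follow the article’s description, but the actual value ranges and grading criteria are SK TES’s own and aren’t public here:

```python
from dataclasses import dataclass

@dataclass
class DeviceRating:
    function: str   # functional grade, e.g. "A" (scale assumed)
    cosmetic: int   # cosmetic-condition score (scale assumed)
    component: int  # component-value score (scale assumed)

def parse_rating(code: str) -> DeviceRating:
    """Split a three-character code like 'A-3-6' into its fields."""
    func, cosmetic, component = code.split("-")
    return DeviceRating(func, int(cosmetic), int(component))

rating = parse_rating("A-3-6")
print(rating)  # -> DeviceRating(function='A', cosmetic=3, component=6)
```

A structured record like this is the sort of thing a routing system could key on when deciding between wholesale, retail, harvesting, or scrap.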

Full-body laptop skins

Wiping down and prepping a laptop, potentially for a full-cover adhesive skin. Credit: SK TES

If a device has retail value, it heads into a section of this giant facility where workers do further checks. Automated software plays sounds on the speakers, checks that every keyboard key is sending signals, and checks that laptop batteries are at 80 percent capacity or better. At the end of the line is my favorite discovery: full-body laptop skins.

Some laptops—certain Lenovo, Dell, and HP models—are so ubiquitous in corporate fleets that it’s worth buying an adhesive laminating sticker in their exact shape. They’re an uncanny match for the matte black, silver, and slightly less silver finishes of the laptops, covering up any blemishes and scratches. Watching one of the workers apply this made me jealous of their ability to essentially reset a laptop’s condition (so one could apply whole new layers of swag stickers, of course). Once rated, tested, and stickered, laptops go into a clever “cradle” box, get the UN 3481 “battery inside” sticker, and can be sold through retail.



200 mph for 500 miles: How IndyCar drivers prepare for the big race


Andretti Global’s Kyle Kirkwood and Marcus Ericsson talk to us about the Indy 500.

#28, Marcus Ericsson, Andretti Global Honda prior to the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 15, 2025 in Indianapolis, Indiana. Credit: Brandon Badraoui/Lumen via Getty Images

This coming weekend is a special one for most motorsport fans. There are Formula 1 races in Monaco and NASCAR races in Charlotte. And arguably towering over them both is the Indianapolis 500, being held this year for the 109th time. America’s oldest race is also one of its toughest: The track may have just four turns, but the cars negotiate them going three times faster than you drive on the highway, inches from the wall. For hours. At least at Le Mans, you have more than one driver per car.

This year’s race promises to be an exciting one. The track is sold out for the first time since the centenary race in 2016. A rookie driver and a team new to the series took pole position. Two very fast cars are starting at the back thanks to another conflict-of-interest scandal involving Team Penske, the second in two years for a team whose owner also owns the track and the series. And the cars are trickier to drive than they have been for many years, thanks to a new supercapacitor-based hybrid system that has added more than 100 lbs to the rear of the car, shifting the weight distribution further back.

Ahead of Sunday’s race, I spoke with a couple of IndyCar drivers and some engineers to get a better sense of how they prepare and what to expect.

This year, the cars are harder to drive thanks to a hybrid system that has altered the weight balance. Credit: Geoff Miller/Lumen via Getty Images

Concentrate

It all comes “from months of preparation,” said Marcus Ericsson, winner of the race in 2022 and one of Andretti Global’s drivers in this year’s event. “When we get here to the month of May, it’s just such a busy month. So you’ve got to be prepared mentally—and basically before you get to the month of May because if you start doing it now, it’s too late,” he told me.

The drivers spend all month at the track, with a race on the road course earlier this month. Then there’s testing on the historic oval, followed by qualifying last weekend and the race this coming Sunday. “So all those hours you put in in the winter, really, and leading up here to the month of May—it’s what pays off now,” Ericsson said. That work involved multiple sessions of physical training each week, and Ericsson says he also does weekly mental coaching sessions.

“This is a mental challenge,” Ericsson told me. “Doing those speeds with our cars, you can’t really afford to have a split second of loss of concentration because then you might be in the wall and your day is over and you might hurt yourself.”

When drivers get tired or their focus slips, that’s when mistakes happen, and a mistake at Indy often has consequences.

Ericsson is sponsored by the antihistamine Allegra and its anti-drowsy-driving campaign. Fans can scan the QR codes on the back of his pit crew’s shirts for a “gamified experience.” Credit: Andretti Global/Allegra

Simulate

Being mentally and physically prepared is part of it. It also helps if you can roll the race car off the transporter and onto the track with a setup that works rather than spending the month chasing the right combination of dampers, springs, wing angles, and so on. And these days, that means a lot of simulation testing.

The multi-axis driver-in-the-loop simulators might look like just a very expensive video game, but these multimillion-dollar setups aren’t about having fun. “Everything that you are feeling or changing in the sim is ultimately going to reflect directly to what happens on track,” explained Kyle Kirkwood, teammate to Ericsson at Andretti Global and one of only two drivers to have won an IndyCar race in 2025.

Andretti, like the other teams using Honda engines, uses the new HRC simulator in Indiana. “And yes, it’s a very expensive asset, but it’s also likely cheaper than going to the track and doing the real thing,” Kirkwood said. “And it’s a much more controlled environment than being at the track because temperature changes or track conditions or wind direction play a huge factor with our car.”

A high degree of correlation between the simulation and the track is what makes it a powerful tool. “We run through a sim, and you only get so many opportunities, especially at a place like Indianapolis, where you go from one day to the next and the temperature swings, or the wind conditions, or whatever might change drastically,” Kirkwood said. “You have to be able to sim it and be confident with the sim that you’re running to go out there and have a similar balance or a similar performance.”

Andretti Global’s Kyle Kirkwood is the only driver other than Álex Palou to have won an IndyCar race in 2025. Credit: Alison Arena/Andretti Global

“So you have to make adjustments, whether it’s a spring rate, whether it’s keel ballast or just overall, maybe center of pressure, something like that,” Kirkwood said. “You have to be able to adjust to it. And that’s where the sim tool comes in play. You move the weight balance back, and you’re like, OK, now what happens with the balance? How do I tune that back in? And you run that all through the sim, and for us, it’s been mirror-perfect going to the track when we do that.”

More impressively, a lot of that work was done months ago. “I would say most of it, we got through it before the start of this season,” Kirkwood said. “Once we get into the season, we only get a select few days because every Honda team has to run on the same simulator. Of course, it’s different with the engineering sim; those are running nonstop.”

Sims are for engineers, too

An IndyCar team is more than just its drivers—“the spacer between the seat and the wheel,” according to Kirkwood—and the engineers rely heavily on sim work now that real-world testing is so highly restricted. And they use a lot more than just driver-in-the-loop (DiL).

“Digital simulation probably goes to a higher level,” explained Scott Graves, engineering manager at Andretti Global. “A lot of the models we develop work in the DiL as well as our other digital tools. We try to develop universal models, whether that’s tire models, engine models, or transmission models.”

“Once you get into a fully digital model, then I think your optimization process starts kicking in,” Graves said. “You’re not just changing the setting and running a pretend lap with a driver holding a wheel. You’re able to run through numerous settings and optimization routines and step through a massive number of permutations on a car. Obviously, you’re looking for better lap times, but you’re also looking for fuel efficiency and a lot of other parameters that go into crossing the finish line first.”

A screenshot of a finite element analysis tool

Parts like this anti-roll bar are simulated thousands of times. Credit: Siemens/Andretti Global

As an example, Graves points to the dampers. “The shock absorber is a perfect example where that’s a highly sophisticated piece of equipment on the car and it’s very open for team development. So our cars have fully customized designs there that are optimized for how we run the car, and they may not be good on another team’s car because we’re so honed in on what we’re doing with the car,” he said.

“The more accurate a digital twin is, the more we are able to use that digital twin to predict the performance of the car,” said David Taylor, VP of industry strategy at Siemens DISW, which has partnered with Andretti for some years now. “It will never be as complete and accurate as we want it to be. So it’s a continuous pursuit, and we keep adding technology to our portfolio and acquiring companies to try to provide more and more tools to people like Scott so they can more accurately predict that performance.”

What to expect on Sunday?

Kirkwood was bullish about his chances despite starting relatively deep in the field, qualifying in 23rd place. “We’ve been phenomenal in race trim and qualifying,” he said. “We had a bit of a head-scratcher if I’m being honest—I thought we would definitely be a top-six contender, if not a front row contender, and it just didn’t pan out that way on Saturday qualifying.”

“But we rolled back out on Monday—the car was phenomenal. Once again, we feel very, very racy in traffic, which is a completely different animal than running qualifying,” Kirkwood said. “So I’m happy with it. I think our chances are good. We’re starting deep in the field, but so are a lot of other drivers. So you can expect a handful of us to move forward.”

The more nervous hybrid IndyCars with their more rearward weight bias will probably result in more cautions, according to Ericsson, who will line up sixth for the start of the race on Sunday.

“Whereas in previous years you could have a bit of a moment and it would scare you, you usually get away with it,” he said. “This year, if you have a moment, it usually ends up with you being in the fence. I think that’s why we’ve seen so many crashes this year: because of a pendulum effect from the rear of the car that, when you start losing it, is very, very difficult or almost impossible to catch.”

“I think it’s going to mean that the race is going to have quite a few incidents, with people making mistakes,” Ericsson said. “In practice, if your car is not behaving well, you bring it to the pit lane, right? You can do adjustments, whereas in the race, you have to just tough it out until the next pit stop and then make some small adjustments. So if you have a bad car at the start of a race, it’s going to be a tough one. So I think it’s going to be a very dramatic and entertaining race.”

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

200 mph for 500 miles: How IndyCar drivers prepare for the big race

What I learned from my first few months with a Bambu Lab A1 3D printer, part 1


One neophyte’s first steps into the wide world of 3D printing.

The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham

For a couple of years now, I’ve been trying to find an excuse to buy a decent 3D printer.

Friends and fellow Ars staffers who had them would gush about them at every opportunity, talking about how useful they can be and how much can be printed once you get used to the idea of being able to create real, tangible objects with a little time and a few bucks’ worth of plastic filament.

But I could never quite imagine myself using one consistently enough to buy one. Then, this past Christmas, my wife forced the issue by getting me a Bambu Lab A1 as a present.

Since then, I’ve been tinkering with the thing nearly daily, learning more about what I’ve gotten myself into and continuing to find fun and useful things to print. I’ve gathered a bunch of thoughts about my learning process here, not because I think I’m breaking new ground but to serve as a blueprint for anyone who has been on the fence about Getting Into 3D Printing. “Hyperfixating on new hobbies” is one of my go-to coping mechanisms during times of stress and anxiety, and 3D printing has turned out to be the perfect combination of fun, practical, and time-consuming.

Getting to know my printer

My wife settled on the Bambu A1 because it’s a larger version of the A1 Mini, Wirecutter’s main 3D printer pick at the time (she also noted it was “hella on sale”). Other reviews she read noted that it’s beginner-friendly, easy to use, and fun to tinker with, and it has a pretty active community for answering questions, all assessments I agree with so far.

Note that this research was done some months before Bambu earned bad headlines because of firmware updates that some users believe will lead to a more locked-down ecosystem. This is a controversy I understand—3D printers are still primarily the realm of DIYers and tinkerers, people who are especially sensitive to the closing of open ecosystems. But as a beginner, I’m already leaning mostly on the first-party tools and built-in functionality to get everything going, so I’m not really experiencing the sense of having “lost” features I was relying on, and any concerns I did have are mostly addressed by Bambu’s update about its update.

I hadn’t really updated my preconceived notions of what home 3D printing was since its primordial days, something Ars has been around long enough to have covered in some depth. I was wary of getting into yet another hobby where, like building your own gaming PC, fiddling with and maintaining the equipment is part of the hobby. But Bambu’s printers (and those like them) are capable of turning out fairly high-quality prints with minimal fuss, and nothing will draw you into the hobby faster than a few successful prints.

Basic terminology

Extrusion-based 3D printers (also sometimes called “FDM,” for “fused deposition modeling”) work by depositing multiple thin layers of melted plastic filament on a heated bed. Credit: Andrew Cunningham

First things first: The A1 is what’s called an “extrusion” printer, meaning that it functions by melting a long, slim thread of plastic (filament) and then depositing this plastic onto a build plate seated on top of a heated bed in tens, hundreds, or even thousands of thin layers. In the manufacturing world, this is also called “fused deposition modeling,” or FDM. This layer-based extrusion gives 3D-printed objects their distinct ridged look and feel and is also why a 3D printed piece of plastic is less detailed-looking and weaker than an injection-molded piece of plastic like a Lego brick.

The other readily available home 3D printing technology takes liquid resin and uses UV light to harden it into a plastic structure, using a process called “stereolithography” (SLA). You can get inexpensive resin printers in the same price range as the best cheap extrusion printers, and the SLA process can create much more detailed, smooth-looking, and watertight 3D prints (it’s popular for making figurines for tabletop games). Some downsides are that the print beds in these printers are smaller, resin is a bit fussier than filament, and multi-color printing isn’t possible.

There are two main types of home extrusion printers. The Bambu A1 is a Cartesian printer, or in more evocative and colloquial terms, a “bed slinger.” In these, the head of the printer can move up and down on one or two rails and from side to side on another rail. But the print bed itself has to move forward and backward to “move” the print head on the Y axis.

More expensive home 3D printers, including higher-end Bambu models in the P- and X-series, are “CoreXY” printers, which include a third rail or set of rails (and more Z-axis rails) that allow the print head to travel in all three directions.

The A1 is also an “open-bed” printer, which means that it ships without an enclosure. Closed-bed printers are more expensive, but they can maintain a more consistent temperature inside and help contain the fumes from the melted plastic. They can also reduce the amount of noise coming from your printer.

Together, the downsides of a bed-slinger (introducing more wobble for tall prints, more opportunities for parts of your print to come loose from the plate) and an open-bed printer (worse temperature, fume, and dust control) mainly just mean that the A1 isn’t well-suited for printing certain types of plastic and has more potential points of failure for large or delicate prints. My experience with the A1 has been mostly positive now that I know about those limitations, but the printer you buy could easily change based on what kinds of things you want to print with it.

Setting up

Overall, the setup process was reasonably simple, at least for someone who has been building PCs and repairing small electronics for years now. It’s not quite the same as the “take it out of the box, remove all the plastic film, and plug it in” process of setting up a 2D printer, but the directions in the start guide are well-illustrated and clearly written; if you can put together prefab IKEA furniture, that’s roughly the level of complexity we’re talking about here. The fact that delicate electronics are involved might still make it more intimidating for the non-technical, but figuring out what goes where is fairly simple.

The only mistake I made while setting the printer up involved the surface I initially tried to put it on. I used a spare end table, but as I discovered during the printer’s calibration process, the herky-jerky movement of the bed and print head was way too much for a little table to handle. “Stable enough to put a lamp on” is not the same as “stable enough to put a constantly wobbling contraption on”—obvious in retrospect, but my being new to this is why this article exists.

After some office rearrangement, I was able to move the printer to my sturdy L-desk full of cables and other doodads to serve as ballast. This surface was more than sturdy enough to let the printer complete its calibration process—and sturdy enough not to transfer the printer’s every motion to our kid’s room below, a boon for when I’m trying to print something after he has gone to bed.

The first-party Bambu apps for sending files to the printer are Bambu Handy (for iOS/Android, with no native iPad version) and Bambu Studio (for Windows, macOS, and Linux). Handy works OK for sending ready-made models from MakerWorld (a mostly community-driven but Bambu-developed repository for 3D-printable files) and for monitoring prints once they’ve started. But I’ll mostly be relaying my experience with Bambu Studio, a much more fully featured app. Neither app requires sign-in, at least not yet, but the path of least resistance is to sign in to your printer and apps with the same account to enable easy communication and syncing.

Bambu Studio: A primer

Bambu Studio is what’s known in the hobby as a “slicer,” software that takes existing 3D models output by common CAD programs (Tinkercad, FreeCAD, SolidWorks, Autodesk Fusion, others) and converts them into a set of specific movement instructions that the printer can follow. Bambu Studio allows you to do some basic modification of existing models—cloning parts, resizing them, adding supports for overhanging bits that would otherwise droop down, and a few other functions—but it’s primarily there for opening files, choosing a few settings, and sending them off to the printer to become tangible objects.

Bambu Studio isn’t the most approachable application, but if you’ve made it this far, it shouldn’t be totally beyond your comprehension. For first-time setup, you’ll choose your model of printer (all Bambu models and a healthy selection of third-party printers are officially supported), leave the filament settings as they are, and sign in if you want to use Bambu’s cloud services. These sync printer settings and keep track of the models you save and download from MakerWorld, but a non-cloud LAN mode is available for the Bambu skeptics and privacy-conscious.

For any newbie, pretty much all you need to do is connect your printer, open a .3MF or .STL file you’ve downloaded from MakerWorld or elsewhere, select your filament from the drop-down menu, click “slice plate,” and then click “print.” Things like the default 0.4 mm nozzle size and Bambu’s included Textured PEI Build Plate are generally already factored in, though you may need to double-check these selections when you open a file for the first time.

When you slice your build plate for the first time, the app will spit a pile of numbers back at you. There are two important ones for 3D printing neophytes to track. One is the “total filament” figure, which tells you how many grams of filament the printer will use to make your model (filament typically comes in 1 kg spools, and the printer generally won’t track usage for you, so if you want to avoid running out in the middle of the job, you may want to keep track of what you’re using). The second is the “total time” figure, which tells you how long the entire print will take from the first calibration steps to the end of the job.
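Since the printer won’t do that spool bookkeeping for you, a scrap of code can. Here’s a minimal sketch of the arithmetic, assuming the grams-per-print figures come from the slicer’s “total filament” estimate; the function names and the 5 percent waste margin are my own illustrative choices, not anything built into Bambu’s tools.

```python
# Track how much filament is left on a spool by subtracting each
# print's slicer estimate (in grams), padded by a waste margin to
# cover purge lines, calibration, and failed starts.

def remaining_grams(spool_grams, print_grams, waste_margin=0.05):
    """Grams left on the spool after a list of completed prints."""
    used = sum(g * (1 + waste_margin) for g in print_grams)
    return spool_grams - used

def can_print(spool_grams, print_grams, next_print_grams, waste_margin=0.05):
    """Rough check that the next job won't run the spool dry mid-print."""
    left = remaining_grams(spool_grams, print_grams, waste_margin)
    return left >= next_print_grams * (1 + waste_margin)

if __name__ == "__main__":
    prints_so_far = [62.4, 18.9, 105.0]  # grams from past "slice plate" dialogs
    left = remaining_grams(1000, prints_so_far)  # standard 1 kg spool
    print(f"{left:.1f} g remaining")
    print(can_print(1000, prints_so_far, 120.0))
```

A spreadsheet works just as well; the point is simply to log each print’s estimate somewhere before you trust a half-used spool with a long job.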

Selecting your filament and/or temperature presets. If you have the Automatic Material System (AMS), this is also where you’ll manage multicolor printing. Credit: Andrew Cunningham

When selecting filament, people who stick to Bambu’s first-party spools will have the easiest time, since optimal settings are already programmed into the app. But I’ve had almost zero trouble with the “generic” presets and the spools of generic Inland-branded filament I’ve bought from our local Micro Center, at least when sticking to PLA (polylactic acid, the most common and generally the easiest-to-print of the different kinds of filament you can buy). But we’ll dive deeper into plastics in part 2 of this series.

I won’t pretend I’m skilled enough to do a deep dive on every single setting that Bambu Studio gives you access to, but here are a few of the odds and ends I’ve found most useful:

  • The “clone” function, accessed by right-clicking an object and clicking “clone.” Useful if you’d like to fit several copies of an object on the build plate at once, especially if you’re using a filament with a color gradient and you’d like to make the gradient effect more pronounced by spreading it out over a bunch of prints.
  • The “arrange all objects” function, the fourth button from the left under the “prepare” tab. Did you just clone a bunch of objects? Did you delete an individual object from a model because you didn’t need to print that part? Bambu Studio will arrange everything on your build plate to optimize the use of space.
  • Layer height, located in the sidebar directly beneath “Process” (which is directly underneath the area where you select your filament). For many functional parts, the standard 0.2 mm layer height is fine. Going with thinner layer heights adds to the printing time but can preserve more detail on prints that have a lot of it and slightly reduce the visible layer lines that give 3D-printed objects their distinct look (for better or worse). Thicker layer heights do the opposite, slightly reducing the amount of time a model takes to print but preserving less detail.
  • Infill percentage and wall loops, located in the Strength tab beneath the “Process” sidebar item. For most everyday prints, you don’t need to worry about messing with these settings much; the infill percentage determines the amount of your print’s interior that’s plastic and the part that’s empty space (15 percent is a good happy medium most of the time between maintaining rigidity and overusing plastic). The number of wall loops determines how many layers the printer uses for the outside surface of the print, with more walls using more plastic but also adding a bit of extra strength and rigidity to functional prints that need it (think hooks, hangers, shelves and brackets, and other things that will be asked to bear some weight).

My first prints

A humble start: My very first print was a wall bracket for the remote for my office’s ceiling fan. Credit: Andrew Cunningham

When given the opportunity to use a 3D printer, my mind went first to aggressively practical stuff—prints for organizing the odds and ends that eternally float around my office or desk.

When we moved into our current house, only one of the bedrooms had a ceiling fan installed. I put up remote-controlled ceiling fans in all the other bedrooms myself. And all those fans, except one, came with a wall-mounted caddy to hold the remote control. The first thing I decided to print was a wall-mounted holder for that remote control.

MakerWorld is just one of several resources for ready-made 3D-printable files, but the ease with which I found a Hampton Bay Ceiling Fan Remote Wall Mount is pretty representative of my experience so far. At this point in the life cycle of home 3D printing, if you can think about it and it’s not a terrible idea, you can usually find someone out there who has made something close to what you’re looking for.

I loaded up my black roll of PLA plastic—generally the cheapest, easiest-to-buy, easiest-to-work-with kind of 3D printer filament, though not always the best for prints that need more structural integrity—into the basic roll-holder that comes with the A1, downloaded that 3MF file, opened it in Bambu Studio, sliced the file, and hit print. It felt like there should have been extra steps in there somewhere. But that’s all it took to kick the printer into action.

After a few minutes of warmup—by default, the A1 has a thorough pre-print setup process where it checks the levelness of the bed and tests the flow rate of your filament for a few minutes before it begins printing anything—the nozzle started laying plastic down on my build plate, and inside of an hour or so, I had my first 3D-printed object.

Print No. 2 was another wall bracket, this time for my gaming PC’s gamepad and headset. Credit: Andrew Cunningham

It wears off a bit after you successfully execute a print, but I still haven’t quite lost the feeling of magic of printing out a fully 3D object that comes off the plate and then just exists in space along with me and all the store-bought objects in my office.

The remote holder was, as I’d learn, a fairly simple print made under near-ideal conditions. But it was an easy success to start off with, and that success can help embolden you and draw you in, inviting more printing and more experimentation. And the more you experiment, the more you inevitably learn.

This time, I talked about what I learned about basic terminology and the different kinds of plastics most commonly used by home 3D printers. Next time, I’ll talk about some of the pitfalls I ran into after my initial successes, what I learned about using Bambu Studio, what I’ve learned about fine-tuning settings to get good results, and a whole bunch of 3D-printable upgrades and mods available for the A1.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Meta hypes AI friends as social media’s future, but users want real connections


Two visions for social media’s future pit real connections against AI friends.

A rotting zombie thumb up buzzing with flies while the real zombies are the people in the background who can't put their phones down

Credit: Aurich Lawson | Getty Images

If you ask the man who has largely shaped how friends and family connect on social media over the past two decades about the future of social media, you may not get a straight answer.

At the Federal Trade Commission’s monopoly trial, Meta CEO Mark Zuckerberg attempted what seemed like an artful dodge to avoid criticism that his company allegedly bought out rivals Instagram and WhatsApp to lock users into Meta’s family of apps so they would never post about their personal lives anywhere else. He testified that people actually engage with social media less often these days to connect with loved ones, preferring instead to discover entertaining content on platforms to share in private messages with friends and family.

As Zuckerberg spins it, Meta no longer perceives much advantage in dominating the so-called personal social networking market where Facebook made its name and cemented what the FTC alleged is an illegal monopoly.

“Mark Zuckerberg says social media is over,” a New Yorker headline said about this testimony in a report noting a Meta chart that seemed to back up Zuckerberg’s words. That chart, shared at the trial, showed the “percent of time spent viewing content posted by ‘friends’” had declined over the past two years, from 22 to 17 percent on Facebook and from 11 to 7 percent on Instagram.

Supposedly because of this trend, Zuckerberg testified that “it doesn’t matter much” if someone’s friends are on their preferred platform. Every platform has its own value as a discovery engine, Zuckerberg suggested. And Meta platforms increasingly compete on this new playing field against rivals like TikTok, Meta argued, while insisting that it’s not so much focused on beating the FTC’s flagged rivals in the connecting-friends-and-family business, Snap and MeWe.

But while Zuckerberg claims that hosting that kind of content doesn’t move the needle much anymore, owning the biggest platforms that people use daily to connect with friends and family obviously still matters to Meta, MeWe founder Mark Weinstein told Ars. And Meta’s own press releases seem to back that up.

Weeks ahead of Zuckerberg’s testimony, Meta announced that it would bring back the “magic of friends,” introducing a “friends” tab to Facebook to make user experiences more like the original Facebook. The company intentionally diluted feeds with creator content and ads for the past two years, but it now appears intent on trying to spark more real conversations between friends and family, at least partly to fuel its newly launched AI chatbots.

Those chatbots mine personal information shared on Facebook and Instagram, and Meta wants to use that data to connect more personally with users—but “in a very creepy way,” The Washington Post wrote. In interviews, Zuckerberg has suggested these AI friends could “meaningfully” fill the void of real friendship online, as the average person has only three friends but “has demand” for up to 15. To critics seeking to undo Meta’s alleged monopoly, this latest move could signal a contradiction in Zuckerberg’s testimony, showing that the company is so invested in keeping users on its platforms that it’s now creating AI friends (who can never leave its platform) to bait the loneliest among us into more engagement.

“The average person wants more connectivity, connection, than they have,” Zuckerberg said, hyping AI friends. For the Facebook founder, it must be hard to envision a future where his platforms aren’t the answer to providing that basic social need. All this comes more than a decade after he sought $5 billion in Facebook’s 2012 initial public offering so that he could keep building tools that he told investors would expand “people’s capacity to build and maintain relationships.”

At the trial, Zuckerberg testified that AI and augmented reality will be key fixtures of Meta’s platforms in the future, predicting that “several years from now, you are going to be scrolling through your feed, and not only is it going to be sort of animated, but it will be interactive.”

Meta declined to comment further on the company’s vision for social media’s future. In a statement, a Meta spokesperson told Ars that “the FTC’s lawsuit against Meta defies reality,” claiming that it threatens US leadership in AI and insisting that evidence at trial would establish that platforms like TikTok, YouTube, and X are Meta’s true rivals.

“More than 10 years after the FTC reviewed and cleared our acquisitions, the Commission’s action in this case sends the message that no deal is ever truly final,” Meta’s spokesperson said. “Regulators should be supporting American innovation rather than seeking to break up a great American company and further advantaging China on critical issues like AI.”

Meta faces calls to open up its platforms

Weinstein, the MeWe founder, told Ars that back in the 1990s when the original social media founders were planning the first community portals, “it was so beautiful because we didn’t think about bots and trolls. We didn’t think about data mining and surveillance capitalism. We thought about making the world a more connected and holistic place.”

But those who became social media overlords found more money in walled gardens and increasingly cut off attempts by outside developers to improve the biggest platforms’ functionality or leverage their platforms to compete for their users’ attention. Born of this era, Weinstein expects that Zuckerberg, and therefore Meta, will always cling to its friends-and-family roots, no matter which way Zuckerberg says the wind is blowing.

Meta “is still entirely based on personal social networking,” Weinstein told Ars.

In a Newsweek op-ed, Weinstein explained that he left MeWe in 2021 after “competition became impossible” with Meta. It was a time when MeWe faced backlash over lax content moderation, drawing comparisons between its service and right-wing apps like Gab or Parler. Weinstein rejected those comparisons, seeing his platform as an ideal Facebook rival and remaining a board member through the app’s more recent shift to decentralization. Still defending MeWe’s failed efforts to beat Facebook, he submitted hundreds of documents and was deposed in the monopoly trial, alleging that Meta retaliated against MeWe as a privacy-focused rival that sought to woo users away by branding itself the “anti-Facebook.”

Among his complaints, Weinstein accused Meta of thwarting MeWe’s attempts to introduce interoperability between the two platforms, which he thinks stems from a fear that users might leave Facebook if they discover a more appealing platform. That’s why he’s urged the FTC—if it wins its monopoly case—to go beyond simply ordering a potential breakup of Facebook, Instagram, and WhatsApp to also require interoperability between Meta’s platforms and all rivals. That may be the only way to force Meta to release its clutch on personal data collection, Weinstein suggested, and allow for more competition broadly in the social media industry.

“The glue that holds it all together is Facebook’s monopoly over data,” Weinstein wrote in a Wall Street Journal op-ed, recalling the moment he realized that Meta seemed to have an unbeatable monopoly. “Its ownership and control of the personal information of Facebook users and non-users alike is unmatched.”

Cory Doctorow, a special advisor to the Electronic Frontier Foundation, told Ars that his vision of a better social media future goes even further than requiring interoperability between all platforms. Social networks like Meta’s should also be made to allow reverse engineering so that outside developers can modify their apps with third-party tools without risking legal attacks, he said.

Doctorow said that solution would create “an equilibrium where companies are more incentivized to behave themselves than they are to cheat” by, say, retaliating against, killing off, or buying out rivals. And “if they fail to respond to that incentive and they cheat anyways, then the rest of the world still has a remedy,” Doctorow said, by having the choice to modify or ditch any platform deemed toxic, invasive, manipulative, or otherwise offensive.

Doctorow summed up the frustration that some users have faced through the ongoing “enshittification” of platforms (a term he coined) ever since platforms took over the Internet.

“I’m 55 now, and I’ve gotten a lot less interested in how things work because I’ve had too many experiences with how things fail,” Doctorow told Ars. “And I just want to make sure that if I’m on a service and it goes horribly wrong, I can leave.”

Social media haters wish OG platforms were doomed

Weinstein pointed out that Meta’s alleged monopoly impacts a group often left out of social media debates: non-users. And if you ask someone who hates social media what the future of social media should look like, they will not mince words: They want a way to opt out of all of it.

As Meta’s monopoly trial got underway, a personal blog post titled “No Instagram, no privacy” rose to the front page of Hacker News, prompting a discussion about social media norms and reasonable expectations for privacy in 2025.

In the post, Wouter-Jan Leys, a privacy advocate, explained that he felt “blessed” to have “somehow escaped having an Instagram account,” feeling no pressure to “update the abstract audience of everyone I ever connected with online on where I am, what I am doing, or who I am hanging out with.”

But despite never having an account, he’s found that “you don’t have to be on Instagram to be on Instagram,” complaining that “it bugs me” when friends seem to know “more about my life than I tell them” because of various friends’ posts that mention or show images of him. In his blog, he defined privacy as “being in control of what other people know about you” and suggested that because of platforms like Instagram, he currently lacked this control. There should be some way to “fix or regulate this,” Leys suggested, or maybe some universal “etiquette where it’s frowned upon to post about social gatherings to any audience beyond who already was at that gathering.”

On Hacker News, his post spurred a debate over one of the longest-running privacy questions swirling on social media: Is it OK to post about someone who abstains from social media?

Some seeming social media fans scolded Leys for being so old-fashioned about social media, suggesting, “just live your life without being so bothered about offending other people” or saying that “the entire world doesn’t have to be sanitized to meet individual people’s preferences.” Others seemed to better understand Leys’ point of view, with one agreeing that “the problem is that our modern norms (and tech) lead to everyone sharing everything with a large social network.”

Surveying the lively thread, another social media hater joked, “I feel vindicated for my decision to entirely stay off of this drama machine.”

Leys told Ars that he would “absolutely” be in favor of personal social networks like Meta’s platforms dying off or losing steam, as Zuckerberg suggested they already are. He thinks that the decline in personal post engagement that Meta is seeing is likely due to a combination of factors, where some users may prefer more privacy now after years of broadcasting their lives, and others may be tired of the pressure of building a personal brand or experiencing other “odd social dynamics.”

Setting user sentiments aside, Meta is also responsible for people engaging with fewer of their friends’ posts. Meta announced that it would double the amount of force-fed filler in people’s feeds on Instagram and Facebook starting in 2023. That’s when the two-year span begins that Zuckerberg measured in testifying about the sudden drop-off in friends’ content engagement.

So while it’s easy to say the market changed, Meta may be obscuring how much it shaped that shift. Degrading the newsfeed and changing Instagram’s default post shape from square to rectangle appear to have significantly shifted Instagram’s social norms, for example, creating an environment where Gen Z users felt less comfortable posting as prolifically as millennials did when Instagram debuted, The New Yorker explained last year. Where millennials once painstakingly designed immaculate grids of individual eye-catching photos to seem cool online, Gen Z users told The New Yorker that posting a single photo now feels “humiliating” and like a “social risk.”

But rather than eliminate the impulse to post, this cultural shift has popularized a different form of personal posting: staggered photo dumps, where users wait to post a variety of photos together to sum up a month of events or curate a vibe, the trend piece explained. And Meta is clearly intent on fueling that momentum, doubling the maximum number of photos that users can feature in a single post to encourage even more social posting, The New Yorker noted.

Brendan Benedict, an attorney for Benedict Law Group PLLC who has helped litigate big tech antitrust cases, is monitoring the FTC monopoly trial on a Substack called Big Tech on Trial. He told Ars that the evidence at the trial has shown that “consumers want more friends and family content, and Meta is belatedly trying to address this” with features like the “friends” tab, while claiming there’s less interest in this content.

Leys doesn’t think social media—at least the way that Facebook defined it in the mid-2000s—will ever die, because people will never stop wanting social networks like Facebook or Instagram to stay connected with all their friends and family. But he could see a world where, if people ever started truly caring about privacy or “indeed [got] tired of the social dynamics and personal brand-building… the kind of social media like Facebook and Instagram will have been a generational phenomenon, and they may not immediately bounce back,” especially if it’s easy to switch to other platforms that respond better to user preferences.

He also agreed that requiring interoperability would likely lead to better social media products, but he maintained that “it would still not get me on Instagram.”

Interoperability shakes up social media

Meta thought it might have already beaten the FTC’s monopoly case, filing a motion for summary judgment after the FTC rested its case in a bid to end the trial early. That dream was quickly dashed when the judge denied the motion days later. But no matter the outcome of the trial, Meta’s influence over the social media world may be waning just as it faces more pressure than ever to open up its platforms.

The FTC has alleged that Meta weaponized platform access early on, only allowing certain companies to interoperate and denying access to anyone perceived as a threat to its alleged monopoly power. That includes limiting promotions of Instagram to keep users engaged with Facebook Blue. A primary concern for Meta (then Facebook), the FTC claimed, was avoiding “training users to check multiple feeds,” which might allow other apps to “cannibalize” its users.

“Facebook has used this power to deter and suppress competitive threats to its personal social networking monopoly. In order to protect its monopoly, Facebook adopted and required developers to agree to conditional dealing policies that limited third-party apps’ ability to engage with Facebook rivals or to develop into rivals themselves,” the FTC alleged.

By 2011, the FTC alleged, then-Facebook had begun terminating API access to any developers that made it easier to export user data into a competing social network without Facebook’s permission. That practice only ended when the UK parliament started calling out Facebook’s anticompetitive conduct toward app developers in 2018, the FTC alleged.

According to the FTC, Meta continues “to this day” to “screen developers and can weaponize API access in ways that cement its dominance,” and if scrutiny ever subsides, Meta is expected to return to such anticompetitive practices as the AI race heats up.

One potential hurdle for Meta could be that the push for interoperability is not just coming from the FTC or lawmakers who recently reintroduced bipartisan legislation to end walled gardens. Doctorow told Ars that “huge public groundswells of mistrust and anger about excessive corporate power” that “cross political lines” are prompting global antitrust probes into big tech companies and are perhaps finally forcing a reckoning after years of degrading popular products to chase higher and higher revenues.

For social media companies, mounting concerns about privacy and suspicions about content manipulation or censorship are driving public distrust, Doctorow said, along with fears of surveillance capitalism. Doctorow is skeptical of those theories, but Weinstein embraced them, warning that platforms seem to be profiting off data without consent while brainwashing users.

Allowing users to leave the platform without losing access to their friends, their social posts, and their messages might be the best way to incentivize Meta to either genuinely compete for billions of users or lose them forever as better options pop up that can plug into their networks.

In his Newsweek op-ed, Weinstein suggested that web inventor Tim Berners-Lee has already invented a working protocol “to enable people to own, upload, download, and relocate their social graphs,” which maps users’ connections across platforms. That could be used to mitigate “the network effect” that locks users into platforms like Meta’s “while interrupting unwanted data collection.”
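Weinstein is presumably referring to Berners-Lee’s Solid project, which stores personal data in user-controlled “pods.” The op-ed doesn’t spell out a data format, but the core idea of a portable social graph is simple enough to sketch. Everything below (field names, user addresses, the `export_social_graph` function) is a hypothetical illustration of graph portability in general, not Solid’s actual data model:

```python
import json

def export_social_graph(user_id, follows, platform):
    """Serialize a user's connections in a platform-neutral form.

    A toy illustration of social graph portability: a structure a user
    could download from one service and upload to another, so their
    connections survive the move. All field names are hypothetical.
    """
    return json.dumps({
        "owner": user_id,
        "exported_from": platform,
        "connections": [
            {"id": handle, "relation": "follows"}
            for handle in sorted(follows)
        ],
    }, indent=2)

# A user exports who they follow from one (fictional) service...
graph = export_social_graph(
    "alice@example.social",
    {"bob@example.social", "carol@other.network"},
    "example.social",
)
# ...and any other service that understands the format could import it.
print(graph)
```

The portability itself is trivial; the hard part, as the FTC’s complaint suggests, is that incumbent platforms have little incentive to let such an export exist.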

At the same time, Doctorow told Ars that increasingly popular decentralized platforms like Bluesky and Mastodon already provide interoperability and are next looking into “building interoperable gateways” between their services. Doctorow said that communicating with other users across platforms may feel “awkward” at first, but ultimately, it may be like “having to find the diesel pump at the gas station” instead of the unleaded gas pump. “You’ll still be going to the same gas station,” Doctorow suggested.

Opening up gateways into all platforms could be useful in the future, Doctorow suggested. Imagine if one platform goes down—it would no longer disrupt communications as drastically, as users could just pivot to communicate on another platform and reach the same audience. The same goes for platforms that users grow to distrust.

The EFF supports regulators’ attempts to pass well-crafted interoperability mandates, Doctorow said, noting that “if you have to worry about your users leaving, you generally have to treat them better.”

But would interoperability fix social media?

The FTC has alleged that “Facebook’s dominant position in the US personal social networking market is durable due to significant entry barriers, including direct network effects and high switching costs.”
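The “direct network effects” the FTC cites are often illustrated with toy value models such as Metcalfe’s law, under which a network’s value grows roughly with the square of its user count. The numbers below are purely illustrative, not from the FTC’s filing, but they show why an incumbent with a big user base is hard to displace even by an equally good product:

```python
def metcalfe_value(users):
    # Metcalfe's law: a network's value scales with the number of
    # possible pairwise connections, n * (n - 1) / 2.
    return users * (users - 1) // 2

# Illustrative only: an incumbent with 10x the users offers roughly
# 100x the connection value, so a user who switches to the smaller
# network gives up far more than the headline 10x user-count gap.
incumbent = metcalfe_value(1_000_000)
startup = metcalfe_value(100_000)
print(round(incumbent / startup))  # roughly 100
```

Interoperability mandates attack exactly this math: if a switcher keeps their connections, the value gap between incumbent and startup collapses.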

Meta disputes the FTC’s complaint as outdated, arguing that its platform could be substituted by pretty much any social network.

However, Guy Aridor, a co-author of a recent article called “The Economics of Social Media” in the Journal of Economic Literature, told Ars that dominant platforms are probably threatened by shifting social media trends and are likely to remain “resistant to interoperability” because “it’s in the interest of the platform to make switching and coordination costs high so that users are less likely to migrate away.” For Meta, research suggests its platforms’ network effects have weakened somewhat but “clearly still exist,” even as social media users increasingly seek out content on platforms rather than just socialization, Aridor said.

Interoperability advocates believe it will make it easier for startups to compete with giants like Meta, which fight hard and sometimes seemingly dirty to keep users on their apps. Reintroducing the ACCESS Act, which requires platform compatibility to enable service switching, Senator Mark R. Warner (D-Va.) said that “interoperability and portability are powerful tools to promote innovative new companies and limit anti-competitive behaviors.” He’s hoping that passing these “long-overdue requirements” will “boost competition and give consumers more power.”

Aridor told Ars it’s obvious that “interoperability would clearly increase competition,” but he still has questions about whether users would benefit from that competition “since one consistent theme is that these platforms are optimized to maximize engagement, and there’s numerous empirical evidence we have by now that engagement isn’t necessarily correlated with utility.”

Consider, Aridor suggested, how toxic content often leads to high engagement but lower user satisfaction, as MeWe experienced during its 2021 backlash.

Aridor said there is currently “very little empirical evidence on the effects of interoperability,” but theoretically, if it increased competition in the current climate, it would likely “push the market more toward supplying engaging entertainment-related content as opposed to friends and family type of content.”

Benedict told Ars that a remedy like interoperability would likely only be useful to combat Meta’s alleged monopoly following a breakup, which he views as the “natural remedy” following a potential win in the FTC’s lawsuit.

Without the breakup and other meaningful reforms, a Meta win could preserve the status quo and see the company never open up its platforms, perhaps perpetuating Meta’s influence over social media well into the future. And if Zuckerberg’s vision comes to pass, instead of seeing what your friends are posting on interoperating platforms across the Internet, you may have a dozen AI friends trained on your real friends’ behaviors sending you regular dopamine hits to keep you scrolling on Facebook or Instagram.

Aridor’s team’s article suggested that, regardless of user preferences, social media remains a permanent fixture of society. If that’s true, users could get stuck forever using whichever platforms connect them with the widest range of contacts.

“While social media has continued to evolve, one thing that has not changed is that social media remains a central part of people’s lives,” his team’s article concluded.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Meta hypes AI friends as social media’s future, but users want real connections Read More »

zero-click-searches:-google’s-ai-tools-are-the-culmination-of-its-hubris

Zero-click searches: Google’s AI tools are the culmination of its hubris


Google’s first year with AI search was a wild ride. It will get wilder.

Google is constantly making changes to its search rankings, but not all updates are equal. Every few months, the company bundles up changes into a larger “core update.” These updates make rapid and profound changes to search, so website operators watch them closely.

The March 2024 update was unique. It was one of Google’s largest core updates ever, and it took over a month to fully roll out. Nothing has felt quite the same since. Whether the update was good or bad depends on who you ask—and maybe who you are.

It’s common for websites to see traffic changes after a core update, but the impact of the March 2024 update marked a seismic shift. Google says the update aimed to address spam and AI-generated content in a meaningful way. Still, many publishers say they saw clicks on legitimate sites evaporate, while others have had to cope with unprecedented volatility in their traffic. Because Google owns almost the entire search market, changes in its algorithm can move the Internet itself.

In hindsight, the March 2024 update looks like the first major Google algorithm update for the AI era. Not only did it (supposedly) veer away from ranking AI-authored content online, but it also laid the groundwork for Google’s ambitious—and often annoying—desire to fuse AI with search.

A year ago, this ambition surfaced with AI Overviews, but now the company is taking an even more audacious route, layering in a new chat-based answer service called “AI Mode.” Both of these technologies do at least two things: They aim to keep you on Google properties longer, and they remix publisher content without always giving prominent citations.

Smaller publishers appear to have borne the brunt of the changes caused by these updates. “Google got all this flak for crushing the small publishers, and it’s true that when they make these changes, they do crush a lot of publishers,” says Jim Yu, CEO of enterprise SEO platform BrightEdge. Yu explains that Google is the only search engine likely to surface niche content in the first place, and there are bound to be changes to sites at the fringes during a major core update.

Google’s own view on the impact of the March 2024 update is unsurprisingly positive. The company said it was hoping to reduce the appearance of unhelpful content in its search engine results pages (SERPs) by 40 percent. After the update, the company claimed an actual reduction of closer to 45 percent. But does it feel like Google’s results have improved by that much? Most people don’t think so.

What causes this disconnect? According to Michael King, founder of SEO firm iPullRank, we’re not speaking the same language as Google. “Google’s internal success metrics differ from user perceptions,” he says. “Google measures user satisfaction through quantifiable metrics, while external observers rely on subjective experiences.”

Google evaluates algorithm changes with various tests, including human search quality testers and running A/B tests on live searches. But more than anything else, success is about the total number of searches (5 trillion of them per year). Google often makes this number a centerpiece of its business updates to show investors that it can still grow.

However, using search quantity to measure quality has obvious problems. For instance, more engagement with a search engine might mean that quality has decreased, so people try new queries (e.g., the old trick of adding “Reddit” to the end of your search string). In other words, people could be searching more because they don’t like the results.

Jim Yu suggests that Google is moving fast and breaking things, but it may not be as bad as we think. “I think they rolled things out faster because they had to move a lot faster than they’ve historically had to move, and it ends up that they do make some real mistakes,” says Yu. “[Google] is held to a higher standard, but by and large, I think their search quality is improving.”

According to King, Google’s current search behavior still favors big names, but other sites have started to see a rebound. “Larger brands are performing better in the top three positions, while lesser-known websites have gained ground in positions 4 through 10,” says King. “Although some websites have indeed lost traffic due to reduced organic visibility, the bigger issue seems tied to increased usage of AI Overviews”—and now the launch of AI Mode.

Yes, the specter of AI hangs over every SERP. The unhelpful vibe many people now get from Google searches, regardless of the internal metrics the company may use, may come from a fundamental shift in how Google surfaces information in the age of AI.

The AI Overview hangover

In 2025, you can’t talk about Google’s changes to search without acknowledging the AI-generated elephant in the room. As it wrapped up that hefty core update in March 2024, Google also announced a major expansion of AI in search, moving the “Search Generative Experience” out of labs and onto Google.com. The feature was dubbed “AI Overviews.”

The AI Overview box has been a fixture on Google’s search results page ever since its debut a year ago. The feature uses the same foundational AI model as Google’s Gemini chatbot to formulate answers to your search queries by ingesting the top 100 (!) search results. It sits at the top of the page, pushing so-called blue link content even farther down below the ads and knowledge graph content. It doesn’t launch on every query, and sometimes it answers questions you didn’t ask—or even hallucinates a totally wrong answer.

And it’s not without some irony that Google’s laudable decision to de-rank synthetic AI slop comes at the same time that Google heavily promotes its own AI-generated content right at the top of SERPs.

AI Overviews appear right at the top of many search results. Credit: Google

What is Google getting for all of this AI work? More eyeballs, it would seem. “AI is driving more engagement than ever before on Google,” says Yu. BrightEdge data shows that impressions on Google are up nearly 50 percent since AI Overviews launched. Many of the opinions you hear about AI Overviews online are strongly negative, but that doesn’t mean people aren’t paying attention to the feature. In its Q1 2025 earnings report, Google announced that AI Overviews is being “used” by 1.5 billion people every month. (Since you can’t easily opt in or opt out of AI Overviews, this “usage” claim should be taken with a grain of salt.)

Interestingly, the impact of AI Overviews has varied across the web. In October 2024, Google was so pleased with AI Overviews that it expanded them to appear in more queries. And as AI crept into more queries, publishers saw a corresponding traffic drop. Yu estimates this drop to be around 30 percent on average for those with high AI query coverage. For searches that are less supported in AI Overviews—things like restaurants and financial services—the traffic change has been negligible. And there are always exceptions. Yu suggests that some large businesses with high AI Overview query coverage have seen much smaller drops in traffic because they rank extremely well as both AI citations and organic results.

Lower traffic isn’t the end of the world for some businesses. Last May, AI Overviews were largely absent from B2B queries, but that turned around in a big way in recent months. BrightEdge estimates that 70 percent of B2B searches now have AI answers, which has reduced traffic for many companies. Yu doesn’t think it’s all bad, though. “People don’t click through as much—they engage a lot more on the AI—but when they do click, the conversion rate for the business goes up,” Yu says. In theory, serious buyers click and window shoppers don’t.

But the Internet is not a giant mall that exists only for shoppers. It is, first and foremost, a place to share and find information, and AI Overviews have hit some purveyors of information quite hard. At launch, AI Overviews were heavily focused on “What is” and “How to” queries. Such “service content” is a staple of bloggers and big media alike, and these types of publishers aren’t looking for sales conversions—it’s traffic that matters. And they’re getting less of it because AI Overviews “helpfully” repackages and remixes their content, eliminating the need to click through to the site. Some publishers are righteously indignant, asking how it’s fair for Google to remix content it doesn’t own, and to do so without compensation.

But Google’s intentions don’t end with AI Overviews. Last week, the company started an expanded public test of so-called “AI Mode,” right from the front page. AI Mode doesn’t even bother with those blue links. It’s a chatbot experience that, at present, tries to answer your query without clearly citing sources inline. (On some occasions, it will mention Reddit or Wikipedia.) On the right side of the screen, Google provides a little box with three sites linked, which you can expand to see more options. To the end user, it’s utterly unclear if those are “sources,” “recommendations,” or “partner deals.”

Perhaps more surprisingly, in our testing, not a single AI Mode “sites box” listed a site that ranked on the first page for the same query on a regular search. That is, the links in AI Mode for “best foods to eat for a cold” don’t overlap at all with the SERP for the same query in Google Search. In fairness, AI Mode is very new, and its behavior will undoubtedly change. But the direction the company is headed seems clear.

Google’s real goal is to keep you on Google or other Alphabet properties. In 2019, Rand Fishkin noticed that Google’s evolution from search engine to walled garden was at a tipping point. At that time—and for the first time—more than half of Google searches resulted in zero click-throughs to other sites. But data did show large numbers of clicks to Google’s own properties, like YouTube and Maps. If Google doesn’t intend to deliver a “zero-click” search experience, you wouldn’t know it from historical performance data or the new features the company develops.

You also wouldn’t know it from the way AI Overviews work. They do cite some of the sources used in building each output, and data suggests people click on those links. But are the citations accurate? Is every source used for constructing an AI Overview cited? We don’t really know, as Google is famously opaque about how its search works. We do know that Google uses a customized version of Gemini to support AI Overviews and that Gemini has been trained on billions and billions of webpages.

When AI Overviews do cite a source, it’s not clear how those sources came to be the ones cited. There’s good reason to be suspicious here: AI Overview’s output is not great, as witnessed by the numerous hallucinations we all know and love (telling people to eat rocks, for instance). The only thing we know for sure is that Google isn’t transparent about any of this.

No signs of slowing

Despite all of that, Google is not slowing down on AI in search. More recent core updates have only solidified this new arrangement with an ever-increasing number of AI-answered queries. The company appears OK with its current accuracy problems, or at the very least, it’s comfortable enough to push out AI updates anyway. Google appears to have been caught entirely off guard by the public launch of ChatGPT, and it’s now utilizing its search dominance to play catch-up.

To make matters even more dicey, Google isn’t even trying to address the biggest issue in all this: The company’s quest for zero-click search harms the very content creators upon whom the company has built its empire.

For its part, Google has been celebrating its AI developments, insisting that content producers don’t know what’s best for them and dismissing any concerns with comments about search volume increases and ever-more-complex search query strings. The changes must be working!

Google has been building toward this moment for years. The company started with a list of 10 blue links and nothing else, but little by little, it pushed the links down the page and added more content that keeps people in the Google ecosystem. Way back in 2007, Google added Universal Search, which allowed it to insert content from Google Maps, YouTube, and other services. In 2009, Rich Snippets began displaying more data from search results on SERPs. In 2012, the Knowledge Graph began extracting data from indexed sites to display answers directly on the results page. Each change kept people on Google longer and reduced click-throughs, all the while pushing the organic results down the page.

AI Overviews, and especially AI Mode, are the logical outcome of Google’s yearslong transformation from an indexer of information to an insular web portal built on scraping content from around the web. Earlier in Google’s evolution, the implicit agreement was that websites would allow Google to crawl their pages in exchange for sending them traffic. That relationship has become strained as the company has kept more traffic for itself, reducing click-throughs to websites even as search volume continues to increase. And locking Google out isn’t a realistic option when the company controls almost the entire search market.

Even when Google has taken a friendlier approach, business concerns could get in the way. During the search antitrust trial, documents showed that Google initially intended to let sites opt out of being used for AI training for its search-based AI features—but these sites would still be included in search results. The company ultimately canned that idea, leaving site operators with a no-win choice: participate in the AI “revolution” or become invisible on the web. Google now competes with, rather than supports, the open web.

When many of us look at Google’s search results today, the vibe feels off. Maybe it’s the AI, maybe it’s Google’s algorithm, or maybe the Internet just isn’t what it once was. Whatever the cause, the shift toward zero-click search that began more than a decade ago was made clear by the March 2024 core update, and it has only accelerated with the launch of AI Mode. Even businesses that have escaped major traffic drops from AI Overviews could soon find that Google’s AI-only search can get much more overbearing.

The AI slop will continue until morale improves.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Zero-click searches: Google’s AI tools are the culmination of its hubris Read More »