Features


How to draft a will to avoid becoming an AI ghost—it’s not easy


Why requests for “no AI resurrections” will probably go ignored.

All right! This AI is TOAST! Credit: Aurich Lawson

As artificial intelligence has advanced, AI tools have emerged that make it easy to create digital replicas of lost loved ones, replicas that can be generated without the knowledge or consent of the person who died.

Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. Chatting provides what some mourners feel is a close approximation to ongoing interactions with the people they love most. But the tech remains controversial, perhaps complicating the grieving process while threatening to infringe upon the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.

Because of these suspected harms, and perhaps a general revulsion at the idea, not everybody wants to become an AI ghost.

After a realistic video simulation was recently used to provide a murder victim’s impact statement in court, Futurism summed up social media backlash, noting that the use of AI was “just as unsettling as you think.” And it’s not the first time people have expressed discomfort with the growing trend. Last May, The Wall Street Journal conducted a reader survey seeking opinions on the ethics of so-called AI resurrections. Responding, a California woman, Dorothy McGarrah, suggested there should be a way to prevent AI resurrections in your will.

“Having photos or videos of lost loved ones is a comfort. But the idea of an algorithm, which is as prone to generate nonsense as anything lucid, representing a deceased person’s thoughts or behaviors seems terrifying. It would be like generating digital dementia after your loved ones’ passing,” McGarrah said. “I would very much hope people have the right to preclude their images being used in this fashion after death. Perhaps something else we need to consider in estate planning?”

For experts in estate planning, the question may start to arise as more AI ghosts pop up. But for now, writing “no AI resurrections” into a will remains a complicated process, experts suggest, and such requests may not be universally honored unless laws are changed to reinforce a culture of respecting the wishes of people who feel uncomfortable with the idea of haunting their favorite people through AI simulations.

Can you draft a will to prevent AI resurrection?

Ars contacted several law associations to find out if estate planners are seriously talking about AI ghosts. Only the National Association of Estate Planners and Councils responded; it connected Ars to Katie Sheehan, an expert in the estate planning field who serves as a managing director and wealth strategist for Crestwood Advisors.

Sheehan told Ars that very few estate planners are prepared to answer questions about AI ghosts. She said not only does the question never come up in her daily work, but it’s also “essentially uncharted territory for estate planners since AI is relatively new to the scene.”

“I have not seen any documents drafted to date taking this into consideration, and I review estate plans for clients every day, so that should be telling,” Sheehan told Ars.

Although Sheehan has yet to see a will attempting to prevent AI resurrection, she told Ars that there could be a path to make it harder for someone to create a digital replica without consent.

“You certainly could draft into a power of attorney (for use during lifetime) and a will (for use post death) preventing the fiduciary (attorney in fact or executor) from lending any of your texts, voice, image, writings, etc. to any AI tools and prevent their use for any purpose during life or after you pass away, and/or lay the ground rules for when they can and cannot be used after you pass away,” Sheehan told Ars.

“This could also invoke issues with contract, property and intellectual property rights, and right of publicity as well if AI replicas (image, voice, text, etc.) are being used without authorization,” Sheehan said.

And there are likely more protections for celebrities than for everyday people, Sheehan suggested.

“As far as I know, there is no law” preventing unauthorized non-commercial digital replicas, Sheehan said.

Widely adopted by states, the Revised Uniform Fiduciary Access to Digital Assets Act—which governs who gets access to online accounts of the deceased, like social media or email accounts—could be helpful but isn’t a perfect remedy.

That law doesn’t directly “cover someone’s AI ghost bot, though it may cover some of the digital material some may seek to use to create a ghost bot,” Sheehan said.

“Absent any law” blocking non-commercial digital replicas, Sheehan expects that people’s requests for “no AI resurrections” will likely “be dealt with in the courts and governed by the terms of one’s estate plan, if it is addressed within the estate plan.”

Those potential fights seemingly could get hairy, as “it may be some time before we get any kind of clarity or uniform law surrounding this,” Sheehan suggested.

In the future, Sheehan said, requests prohibiting digital replicas may eventually become “boilerplate language in almost every will, trust, and power of attorney,” just as instructions on digital assets are now.

As “all things AI become more and more a part of our lives,” Sheehan said, “some aspects of AI and its components may also be woven throughout the estate plan regularly.”

“But we definitely aren’t there yet,” she said. “I have had zero clients ask about this.”

Requests for “no AI resurrections” will likely be ignored

Whether loved ones would—or even should—respect requests blocking digital replicas appears to be debatable. But at least one person who built a grief bot wished he’d done more to get his dad’s permission before moving forward with his own creation.

A computer science professor at the University of Washington Bothell, Muhammad Aurangzeb Ahmad, was one of the earliest AI researchers to create a grief bot more than a decade ago after his father died. He built the bot to ensure that his future kids would be able to interact with his father after seeing how incredible his dad was as a grandfather.

When Ahmad started his project, there was no ChatGPT or other advanced AI model to serve as the foundation, so he had to train his own model based on his dad’s data. Putting immense thought into the effort, Ahmad decided to close off the system from the rest of the Internet so that only his dad’s memories would inform the model. To prevent unauthorized chats, he kept the bot on a laptop that only his family could access.

Ahmad was so intent on building a digital replica that felt just like his dad that it didn’t occur to him until after his family started using the bot that he never asked his dad if this was what he wanted. Over time, he realized that the bot was biased to his view of his dad, perhaps even feeling off to his siblings who had a slightly different relationship with their father. It’s unclear if his dad would similarly view the bot as preserving just one side of him.

Ultimately, Ahmad didn’t regret building the bot, and he told Ars he thinks his father “would have been fine with it.”

But he did regret not getting his father’s consent.

For people creating bots today, seeking consent may be appropriate if there’s any chance the bot may be publicly accessed, Ahmad suggested. He told Ars that he would never have been comfortable with the idea of his dad’s digital replica being publicly available because the question of an “accurate representation” would come even more into play, as malicious actors could potentially access it and sully his dad’s memory.

Today, anybody can use ChatGPT’s model to freely create a similar bot with their own loved one’s data. And a wide range of grief tech services have popped up online, including HereAfter AI, SeanceAI, and StoryFile, Axios noted in an October report detailing the latest ways “AI could be used to ‘resurrect’ loved ones.” As this trend continues “evolving very fast,” Ahmad told Ars that estate planning is probably the best way to communicate one’s AI ghost preferences.

But in a recently published article on “The Law of Digital Resurrection,” law professor Victoria Haneman warned that “there is no legal or regulatory landscape against which to estate plan to protect those who would avoid digital resurrection, and few privacy rights for the deceased. This is an intersection of death, technology, and privacy law that has remained relatively ignored until recently.”

Haneman agreed with Sheehan that “existing protections are likely sufficient to protect against unauthorized commercial resurrections”—like when actors or musicians are resurrected for posthumous performances. However, she thinks that for personal uses, digital resurrections may best be blocked not through estate planning but by passing a “right to deletion” that would focus on granting the living or next of kin the rights to delete the data that could be used to create the AI ghost rather than regulating the output.

A “right to deletion” could help people fight inappropriate uses of their loved ones’ data, whether AI is involved or not. After her article was published, a lawyer reached out to Haneman about a client’s deceased grandmother whose likeness was used to create a meme of her dancing in a church. The grandmother wasn’t a public figure, and the client had no idea “why or how somebody decided to resurrect her deceased grandmother,” Haneman told Ars.

Although Haneman sympathized with the client, “if it’s not being used for a commercial purpose, she really has no control over this use,” Haneman said. “And she’s deeply troubled by this.”

Haneman’s article offers a rare deep dive into the legal topic. It sensitively maps out the vague territory of digital rights of the dead and explains how those laws—or the lack thereof—interact with various laws dealing with death, from human remains to property rights.

In it, Haneman also points out that, on balance, the rights of the living typically outweigh the rights of the dead, and even specific instructions on how to handle human remains aren’t generally considered binding. Some requests, like organ donation that can benefit the living, are considered critical, Haneman noted. But there are mixed results on how courts enforce other interests of the dead—like a famous writer’s request to destroy all unpublished work or a pet lover’s insistence that their cat or dog be destroyed at their death.

She told Ars that right now, “a lot of people are like, ‘Why do I care if somebody resurrects me after I’m dead?’ You know, ‘They can do what they want.’ And they think that, until they find a family member who’s been resurrected by a creepy ex-boyfriend or their dead grandmother’s resurrected, and then it becomes a different story.”

Existing law may protect “the privacy interests of the loved ones of the deceased from outrageous or harmful digital resurrections of the deceased,” Haneman noted, but in the case of the dancing grandma, her meme may not be deemed harmful, no matter how much it troubles the grandchild to see her grandma’s memory warped.

Limited legal protections may not matter so much if, culturally, communities end up developing a distaste for digital replicas, particularly if it becomes widely viewed as disrespectful to the dead, Haneman suggested. Right now, however, society is more fixated on solving other problems with deepfakes rather than clarifying the digital rights of the dead. That could be because few people have been impacted so far, or it could also reflect a broader cultural tendency to ignore death, Haneman told Ars.

“We don’t want to think about our own death, so we really kind of brush aside whether or not we care about somebody else being digitally resurrected until it’s in our face,” Haneman said.

Over time, attitudes may change, especially if the so-called “digital afterlife industry” takes off. And there is some precedent that the law could be changed to reinforce any culture shift.

“The throughline revealed by the law of the dead is that a sacred trust exists between the living and the deceased, with an emphasis upon protecting common humanity, such that data afforded no legal status (or personal data of the deceased) may nonetheless be treated with dignity and receive some basic protections,” Haneman wrote.

An alternative path to prevent AI resurrection

Preventing yourself from becoming an AI ghost seemingly now falls in a legal gray zone that policymakers may need to address.

Haneman calls for a solution that doesn’t depend on estate planning, which she warned “is a structurally inequitable and anachronistic approach that maximizes social welfare only for those who do estate planning.” More than 60 percent of Americans die without a will, often including “those without wealth,” as well as women and racial minorities who “are less likely to die with a valid estate plan in effect,” Haneman reported.

“We can do better in a technology-based world,” Haneman wrote. “Any modern framework should recognize a lack of accessibility as an obstacle to fairness and protect the rights of the most vulnerable through approaches that do not depend upon hiring an attorney and executing an estate plan.”

Rather than twist the law to “recognize postmortem privacy rights,” Haneman advocates for a path for people resistant to digital replicas that focuses on a right to delete the data that would be used to create the AI ghost.

“Put simply, the deceased may exert control over digital legacy through the right to deletion of data but may not exert broader rights over non-commercial digital resurrection through estate planning,” Haneman recommended.

Sheehan told Ars that a right to deletion would likely involve estate planners, too.

“If this is not addressed in an estate planning document and not specifically addressed in the statute (or deemed under the authority of the executor via statute), then the only way to address this would be to go to court,” Sheehan said. “Even with a right of deletion, the deceased would need to delete said data before death or authorize his executor to do so post death, which would require an estate planning document, statutory authority, or court authority.”

Haneman agreed that for many people, estate planners would still be involved, recommending that “the right to deletion would ideally, from the perspective of estate administration, provide for a term of deletion within 12 months.” That “allows the living to manage grief and open administration of the estate before having to address data management issues,” Haneman wrote, and perhaps adequately balances “the interests of society against the rights of the deceased.”

To Haneman, it’s also the better solution for the people left behind because “creating a right beyond data deletion to curtail unauthorized non-commercial digital resurrection creates unnecessary complexity that overreaches, as well as placing the interests of the deceased over those of the living.”

Future generations may be raised with AI ghosts

If the dystopia that some experts paint comes true, Big Tech companies may one day profit by targeting grieving individuals to seize the data of the dead, which could be more easily abused since it’s granted fewer rights than data of the living.

Perhaps in that future, critics suggest, people will be tempted into free trials in moments when they’re missing their loved ones most, then forced to either pay a subscription to continue accessing the bot or else perhaps be subjected to ad-based models where their chats with AI ghosts may even feature ads in the voices of the deceased.

Today, even in a world where AI ghosts aren’t yet compelling ad clicks, some experts have warned that interacting with AI ghosts could cause mental health harms, New Scientist reported, especially if the digital afterlife industry isn’t carefully designed. Some people may end up getting stuck maintaining an AI ghost if it’s left behind as a gift, and ethicists suggested that the emotional weight of that could eventually take a negative toll. While saying goodbye is hard, letting go is considered a critical part of healing during the mourning process, and AI ghosts may make that harder.

But the bots can be a helpful tool to manage grief, some experts suggest, provided that their use is limited to allow for a typical mourning process or combined with therapy from a trained professional, Al Jazeera reported. Ahmad told Ars that working on his bot has not only kept his father close to him but also helped him think more deeply about relationships and memory.

Haneman noted that people have many ways of honoring the dead. Some erect statues, and others listen to saved voicemails or watch old home movies. For some, just “smelling an old sweater” is a comfort. And creating digital replicas, as creepy as some people might find them, is not that far off from these traditions, Haneman said.

“Feeding text messages and emails into existing AI platforms such as ChatGPT and asking the AI to respond in the voice of the deceased is simply a change in degree, not in kind,” Haneman said.

For Ahmad, the decision to create a digital replica of his dad was a learning experience, and perhaps his experience shows why any family or loved one weighing the option should carefully consider it before starting the process.

In particular, he warns families to be careful about introducing young kids to grief bots, as they may not be able to grasp that the bot is not a real person. When he initially saw his young kids growing confused about whether their grandfather was alive or not—the introduction of the bot was complicated by the early stages of the pandemic, a time when they met many relatives virtually—he decided to restrict access to the bot until they were older. For a time, the bot only came out for special events like birthdays.

He also realized that introducing the bot forced him to have conversations about life and death with his kids at ages younger than he remembered fully understanding those concepts in his own childhood.

Now, Ahmad’s kids are among the first to be raised among AI ghosts. Ahmad continually updates his father’s digital replica to enhance the family’s experience. He is currently most excited about recent audio advancements that make it easier to add a voice element. He hopes that within the next year, he might be able to use AI to finally nail down his South Asian father’s accent, which up to now has always sounded “just off.” For others working in this space, the next frontier is realistic video or even augmented reality tools, Ahmad told Ars.

To this day, the bot retains sentimental value for Ahmad, but, as Haneman suggested, the bot was not the only way he memorialized his dad. He also created a mosaic, and while his father never saw it, either, Ahmad thinks his dad would have approved.

“He would have been very happy,” Ahmad said.

There’s no way to predict how future generations may view grief tech. But while Ahmad said he’s not sure he’d be interested in an augmented reality interaction with his dad’s digital replica, kids raised seeing AI ghosts as a natural part of their lives may not be as hesitant to embrace or even build new features. Talking to Ars, Ahmad fondly remembered how his young daughter, seeing that he was feeling sad one day, came up with her own AI idea to help her dad feel better.

“It would be really nice if you can just take this program and we build a robot that looks like your dad, and then add it to the robot, and then you can go and hug the robot,” she said, according to her father’s memory.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


She was a Disney star with platinum records, but Bridgit Mendler gave it up to change the world


“The space industry has a ground bottleneck, and the problem is going to get worse.”

The Northwood Space team is all smiles after the first successful test of “Frankie.” Clockwise, from lower left: Shaurya Luthra, Marvin Shu, Josh Lehtonen, Thomas Row, Dan Meinzer, Griffin Cleverly, Bridgit Mendler. Credit: Shaurya Luthra

Bridgit Mendler was not in Hollywood anymore. Instead, she found herself in rural North Dakota, where the stars sparkled overhead rather than on the silver screen. And she was freezing.

When her team tumbled out of their rental cars after midnight, temperatures had already plummeted into the 40s. Howling winds carried their breath away before it could fog the air. So it was with no small sense of urgency that the group scrambled to assemble a jury-rigged antenna to talk to a spacecraft that would soon come whizzing over the horizon. A few hours later, the rosy light of dawn shone on the faces of a typically scrappy space startup: mostly male, mostly disheveled.

Then there was Mendler, the former Disney star and pop music sensation—and she was running the whole show.

Mendler followed an improbable path from the Disney Channel to North Dakota. She was among the brightest adolescent stars born in the early 1990s, along with Ariana Grande, Demi Lovato, and Selena Gomez, who gained fame as teenagers on the Disney Channel and Nickelodeon by enthralling Gen Z. During the first decade of the new millennium, before the rise of Musical.ly and then TikTok, television still dominated the attention of young children. And they were watching the Disney Channel in droves.

Like many of her fellow teenage stars, Mendler parlayed television fame into pop stardom, scoring a handful of platinum records. But in her mid-20s, Mendler left that world behind and threw herself into academia. She attended some of the country’s top universities and married an aerospace engineer. A couple of years ago, the two of them founded a company to address what they believed was a limiting factor in the space economy: transferring data from orbit.

Their company, Northwood Space, employed just six people when it deployed to North Dakota last October. But the team already had real hardware. On the windswept plain, they unpacked and assembled “Frankie,” their cobbled-together, phased-array satellite dish affectionately named after Mary Shelley’s masterpiece Frankenstein.

“We had the truck arrive at two o’clock in the morning,” Mendler said. “Six hours later, we were operational. We started running passes. We were able to transmit to a satellite on our first try.” The team had been up all night by then. “I guess that’s when my Celsius addiction kind of kicked in,” she said.

Guzzling energy drinks isn’t the healthiest activity, but it fits with the high-energy, frenetic rush of building a space startup. To survive without a billionaire’s backing, startups must stay lean and move quickly. And it’s not at all clear that Northwood will survive, as most space startups fail due to a lack of funding, long technology horizons, or regulatory hurdles. So within a year of seriously beginning operations, it’s notable that Northwood was already in the field, testing hardware and finding modest success.

From a technological standpoint, a space mission usually must perform three functions. A spacecraft must launch into orbit. It must deploy its solar panels, begin operations, and collect data. Finally, it must send its data back. If satellite data does not return to Earth in a timely manner, it’s worthless. This process is far more difficult than one might think—and not that many people think about it. “Ground stations,” Mendler acknowledges, are some of the most “unsexy and boring problems” in the space industry.

The 32-year-old Mendler now finds herself exactly where she wants to be. The life she has chosen—leading a startup in gritty El Segundo, California, delving into regulatory minutiae, and freezing in rural North Dakota to tackle “boring” problems—lies a world away from a seemingly glamorous life in the entertainment industry. That’s just fine with her.

“When I was growing up, I always said I wanted to be everything,” she said. “So in a certain sense, maybe I wouldn’t be surprised about where I ended up. But I would certainly be happy.”

Good Luck Charlie

Mendler may have wanted to be everything, but in her early years, what she most wanted to be was an actor. In 2001, when Mendler was eight, her parents moved across the country from Washington, DC, to the Bay Area. Her father designed fuel-efficient automobile engines, and her mother was an architect doing green design. Her mom, working from home, enrolled Mendler in an acting camp to help fill the days.

Mendler caught the bug. Although her parents were supportive of these dreams, they told her she would have to work to make it happen.

“We still had the Yellow Pages at the time, and so my little kid self was just flipping through the Yellow Pages trying to figure out how to get an agent,” she said. “And it was a long journey. Something that people outside of acting maybe don’t realize is that you encounter a shit ton of rejection. And so my introduction to acting was a ton of rejection in the entertainment industry. But I was like, ‘I’m gonna freaking figure this out.’”

After three years, Mendler began to get voice-acting roles in small films and video games. In November 2006, she appeared on television for the first time in an episode of the soap opera General Hospital. Another three years would pass before she had a real breakthrough, appearing as a recurring character on Wizards of Waverly Place, a Disney Channel show starring Selena Gomez. She played a vampire girlfriend.

Mendler starred as “Teddy” in the Disney Channel show Good Luck Charlie. Here, she’s sharing a moment with her sister, “Charlie.” Credit: Adam Taylor/Disney Channel via Getty Images

Mendler impressed enough in this role to be offered the lead in a new sitcom on Disney Channel, Good Luck Charlie, playing the older sister to a toddler named Charlie. In this role, Mendler made a video diary for Charlie, offering advice on how to be a successful teenager. The warm-hearted series ran for four years. Episodes regularly averaged more than 5 million viewers.

My two daughters were among them. They were a decade younger than Mendler, who was 18 when the first episodes aired in 2010. I would sometimes watch the show with my girls. Mendler’s character was endearing, and her advice to Charlie, I believe, helped my own younger daughters anticipate their teenage years. A decade and a half later, my kids still look up to her not just for being on television but for everything else she has accomplished.

As her star soared on the Disney Channel, Mendler moved into music. She recorded gold and platinum records, including her biggest hit, “Ready or Not,” in 2012.

Prominent childhood actors have always struggled with the transition to adulthood. Disney stars like Lindsay Lohan and Demi Lovato developed serious substance abuse problems, while others, such as Miley Cyrus and Selena Gomez, abruptly adopted new, much more mature images that contrasted sharply with their characters on children’s TV shows.

Mendler chose a different path.

Making an impact

As a pre-teen, Mendler would lie in bed at night listening to her mom working upstairs in the kitchen. They lived in a small house amid the redwoods north of Sausalito, California. When Mendler awoke some mornings, her mom would still be tapping away at her architectural designs. “That’s kind of how I viewed work,” Mendler said.

One of her favorite books as a kid was Miss Rumphius, about a woman who spread lupine seeds (also known as bluebonnets) along the coast of Maine to make the countryside more beautiful. The picture book offered an empowering message: Every person has a choice about how to make an impact on the world.

This environment shaped Mendler. She saw her mom work all night, saw experimental engines built by her dad scattered around the house, and had conversations around the dinner table about the future and how she could find her place in it. As she aged into adulthood, performing before thousands of people on stage and making TV shows and movies, Mendler felt like she was missing something. In her words, life in Los Angeles felt “anemic.” She had always liked to create things herself, and she wasn’t doing that.

“The niche that I had wedged myself into was not allowing me to have my own voice and perspective,” she said. “I wound up going down a path where I was more the vessel for other people’s creations, and I wondered what it would be like to be a little bit more in charge of my voice than I was in Hollywood.”

So Mendler channeled her inner nerd. She began to bring textbooks on game theory to the set of movies and TV shows. She took a few college courses. When a topic intrigued her, she would email an author or professor or reach out to them on Twitter.

Her interest was turbocharged when she neared her 25th birthday. Throughout the mid-2010s, Mendler continued to act and release music. One day, while filming a movie called Father of the Year in Massachusetts for Netflix, she had a day off. Her uncle took Mendler to visit the famed Media Lab at the Massachusetts Institute of Technology. This research lab brings together grad students, researchers, and entrepreneurs from various disciplines to develop technology—things like socially engaging robots and biologically inspired engineering. It was a vibrant meeting space for brilliant minds who wanted to build a better future.

“I knew right then I needed to go there,” she said. “I needed to find a way.”

But there was a problem. The Media Lab only offered graduate student programs. Mendler didn’t have an undergraduate degree. She’d only taken a handful of college courses. Officials at MIT told her that if she could build her own things, they would consider admitting her to the program. So she threw herself into learning how to code, working on starter projects in HTML, JavaScript, CSS, and Python. It worked.

In 2018, Mendler posted on Twitter that she was starting a graduate program at MIT to focus on better understanding social media. “As an entertainer, for years I struggled with social media because I felt like there was a more loving and human way to connect with fans. That is what I’m going to study,” she wrote. “Overall, I just hope that this time can be an adventure, and I have a thousand ideas I want to share with you so please stay tuned!”

That fall she did, in fact, start working on social media. Mendler was fascinated with it—Twitter in particular—and its role as the new public square. But at the Media Lab, there are all manner of interdisciplinary groups. The one right next to Mendler, for example, was focused on space.

Pop startup

In the months before she left Los Angeles for MIT, Mendler’s life changed in an important way. Through friends, she met an aerospace engineer named Griffin Cleverly. Southern California is swarming with aerospace engineers, but it’s perhaps indicative of the different circles between Hollywood and Hawthorne that Cleverly was the first rocket scientist Mendler had ever met.

“The conversations we had were totally different,” she said. “He has so many thoughts about so many things, both in aerospace and other topics.”

They hit it off. Not long after Mendler left for the MIT Media Lab, Cleverly followed her to Massachusetts, first applying himself to different projects at the lab before taking a job working on satellites for Lockheed Martin. The two married a year later, in 2019.

By the next spring, Mendler was finishing her master’s thesis at MIT on using technology to help resolve conflicts. Then the world shut down due to the COVID-19 pandemic. She and Cleverly suddenly had a lot of time on their hands.

They retreated to a lake house owned by Mendler’s family in rural New Hampshire. The house had been in the family since just after World War II, and the couple decided to experiment with antennas to see what they could do. They would periodically mask up and drive to a Home Depot in nearby Concord for supplies. They built different kinds of antennas, including parabolic and helical designs, to see how far away they could communicate.

Mendler gave up a successful career in music and acting to earn a master’s degree at MIT.

As they experimented, Mendler and Cleverly began to think about the changing nature of the space industry. At the time, SpaceX’s Starlink constellation was just coming online to deliver broadband around the world. The company’s Falcon 9 launches were ramping up. Satellites were becoming smaller and cheaper, constellations were proliferating, and companies like K2 were seeking to mass-produce satellites.

Mendler and Cleverly believed that the volume of data coming down from space was about to explode—and that existing commercial networks weren’t capable of handling it all.

“The space industry has been on even-keeled growth for a long time,” Cleverly said. “But what happens when you hit that hockey stick across the industry? Launch seemed like it was getting taken care of. Mass manufacturing of satellites appeared to be coming. We saw these trends and were trying to understand how the industry was going to react to them. When we looked at the ground side, it wasn’t clear that anyone really was thinking about the ramifications there.”

As the pandemic waned, the couple resumed more normal lives. Mendler continued her studies at MIT, but she was now thoroughly hooked on space. Her husband excelled at working with technology to communicate with satellites, so Mendler focused on the non-engineering side of the space industry. “With space, so many folks focus on how complicated it is from an engineering perspective, and for good reason, because there are massive engineering problems to solve,” she said. “But these are also really operationally complex problems.”

For example, ground systems that communicate with satellites as they travel around the world operate in different jurisdictions, necessitating contracts and transactions in many countries. Issues with liability, intellectual property, insurance, and regulations abound. So Mendler decided that the next logical step after MIT was to attend law school. Because she lacked an undergraduate degree, most schools wouldn’t admit her. But Harvard University has an exception for exceptional students.

“Harvard was one of the few schools that admitted me,” she said. “I ended up going to law school because I was curious about understanding the operational aspects of working in space.”

These were insanely busy years. In 2022, when she began law school, Mendler was still conducting research at MIT. She soon got an internship at the Federal Communications Commission that gave her a broader view of the space industry from a regulatory standpoint. And in August 2022, she and Cleverly, alongside a software expert from Capella Space named Shaurya Luthra, founded Northwood Space.

So Bridgit Mendler, while studying at MIT and Harvard simultaneously, added a new title to her CV: chief executive officer.

Wizards of Waverly Space

Initially, the founders of Northwood Space did little more than study the market and write a few research papers, assessing the demand for sending data down to Earth, whether there would be customers for a new commercial network to download this data, and if affordable technology solutions could be built for this purpose. After about a year, they were convinced.

“Here’s the vision we ended up with,” Mendler said. “The space industry has a ground bottleneck, and the problem is going to get worse. So let’s build a network that can address that bottleneck and accelerate space capabilities. The best way to go about that was building capacity.”

If you’re like most people, you don’t spend much time pondering how data gets to and from space. To the extent one thinks about Starlink, it’s probably the satellite trains and personal dishes that spring to mind. But SpaceX has also had to build large ground stations around the world, known as gateways, to pipe data into space from the terrestrial Internet. Most companies lack the resources to build global gateways, so they use a shared commercial network. This has drawbacks, though.

Getting data down in a timely manner is not a trivial problem. From the earliest days of NASA through commercial operations today, operators on Earth generally do not maintain continual contact with satellites in space. For spacecraft in a polar orbit, contact might be made several times a day, with a lag in data of perhaps 30 minutes or as high as 90 minutes in some cases.

This is not great. Let’s say you want to use satellite imagery to fight wildfires. Data on the spread of a wildfire can help operators on the ground deploy resources to fight it. But for this information to be useful in real time, it must be downlinked within minutes of its collection. The existing infrastructure incurs delays that make most currently collected data non-actionable for firefighters. So the first problem Northwood wants to solve is persistence, with a network of ground stations around the world that would allow operators to continually connect with their satellites.

After persistence, the next problem faced by satellite operators is constraints on bandwidth. Satellites collect reams of data in orbit and must either process it on board or throw a lot of it away.

Mendler said that within three years, Northwood aims to build a shared network capable of linking to 500 spacecraft at a time. This may not sound like a big deal, but it’s larger than every commercially available shared ground network and the US government’s Satellite Control Network combined. And these tracking centers took decades to build. Each of Northwood’s sites, spread across six continents, is intended to download far more data than can be brought down on commercial networks today, the equivalent of streaming tens of thousands of Blu-ray discs from space concurrently.

“Our job is to figure out how to most efficiently deliver those capabilities,” Mendler said. “We’re asking, how can we reliably deliver a new standard of connectivity to the industry, at a viable price point?”

With these aims in mind, Mendler and Cleverly got serious about their startup in the fall of 2023.

Frankie goes from Hollywood

Over the previous decade, SpaceX had revolutionized the rocket industry, and a second generation of private launch companies was maturing. Some, like Rocket Lab, were succeeding. Others, such as Virgin Orbit, had gone bankrupt. There were important lessons in these ashes for a space startup CEO.

Among the most critical for Mendler was keeping costs low. Virgin Orbit’s payroll had approached 700 people to support a rocket that generated little revenue. That kind of payroll growth was a ticket to insolvency. She also recognized SpaceX’s relentless push to build things in-house and rapidly prototype hardware through iterative design as key to the company’s success.

By the end of 2023, Mendler was raising the company’s initial funding, a seed round worth $6.3 million. Northwood emerged from “stealth mode” in February 2024 and set about hiring a small team. Early that summer, it began pulling together components to build Frankie, a prototype for the team’s first product—modular phased-array antennas. Northwood put Frankie together in four months.

“Our goal was to build things quickly,” Mendler said. “That’s why the first thing we did after raising our seed round was to build something and put it in the field. We wanted to show people it was real.”

Unlike a parabolic dish antenna—think a DirecTV satellite dish or the large ground-based antennas that Ellie Arroway uses in Contact—phased-array antennas are electronically steerable. Instead of needing to point directly at their target to collect a signal, phased-array antennas produce a beam of radio waves that can “point” in different directions without moving the antenna. The technology is decades old, but its use in commercial applications has been limited because it’s more difficult to work with than parabolic dishes. In theory, however, phased-array antennas should let Northwood build more capable ground stations, pulling down vastly more data within a smaller footprint. In business terms, the technology is “scalable.”
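For a sense of how that electronic steering works, here is a minimal, illustrative sketch in Python. It is not Northwood’s design; the frequency, element count, and spacing are assumed values. The idea is simply that applying a progressive phase shift across a row of antenna elements moves the beam’s peak without moving any hardware.

import numpy as np

# Illustrative sketch only -- not Northwood's design. Frequency, element
# count, and spacing are assumed values for demonstration.
c = 3.0e8                    # speed of light, m/s
freq = 8.2e9                 # an X-band downlink frequency (assumed)
wavelength = c / freq
n_elements = 16              # elements in a uniform linear array
spacing = wavelength / 2     # half-wavelength element spacing

def steering_phases(steer_deg):
    # Progressive phase shift per element that points the beam at steer_deg.
    k = 2 * np.pi / wavelength
    positions = np.arange(n_elements) * spacing
    return -k * positions * np.sin(np.radians(steer_deg))

def array_factor(steer_deg, look_deg):
    # Relative gain of the steered array in the look_deg direction.
    k = 2 * np.pi / wavelength
    positions = np.arange(n_elements) * spacing
    field = np.exp(1j * (k * positions * np.sin(np.radians(look_deg))
                         + steering_phases(steer_deg)))
    return abs(field.sum()) / n_elements

# Steer the beam 30 degrees off boresight without touching the hardware:
print(array_factor(30, 30))  # ~1.0: peak gain toward the satellite
print(array_factor(30, 0))   # near zero: little energy left at boresight

Change the steering angle and the same fixed panel points somewhere else, which is what lets one array track, or simultaneously serve, multiple satellites.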

But before a technology can scale, it must work.

In late September 2024, the company’s six engineers, a business development director, and Mendler packed Frankie into a truck and sent it rolling off to the Dakotas. They soon followed, flying commercial to Denver and then into Devils Lake Regional Airport. On the first day of October, the party checked into Spirit Lake Casino.

That night, they drove out to a rural site owned by Planet Labs, nearly an hour away, that has a small network station to communicate with its Earth-imaging satellites. This site consisted of two large antennas, a small operations shed for the networking equipment, and a temporary trailer. The truck hauling Frankie arrived at 2 am local time.

The company’s antenna, “Frankie,” arrives early on October 2 and the team begins to unload it. Credit: Bridgit Mendler

Before sunrise, as the team completed setup, Mendler went into the nearest town, Maddock. The village has one main establishment, Harriman’s Restaurant & Bobcat Bar. The protean facility also serves as an opera house, community library, and meeting place. When Mendler went to the restaurant’s counter and ordered eight breakfast burritos, she attracted notice. But the locals were polite.

When Mendler returned, the team gathered in the small Planet Labs trailer on the windswept site. There were no lights, so they carried their portable floodlights inside. The space lacked room for chairs, so they huddled around one another in what they affectionately began referring to as the “food closet.” At least it kept them out of the wind.

The team had some success on the first morning, as Frankie communicated with a SkySat flying overhead, a Planet satellite a little larger than a mini refrigerator. First contact came at 7:34 am, and they had some additional successes throughout the day. But communication remained one-way, from the ground to space. For satellite telemetry, tracking, and command—TT&C in industry parlance—they needed to close the loop. But Frankie could not receive a clear X-band signal from space; it was coming in too weak.

“While we could command the satellite, we could not receive the acknowledgments of the command,” Mendler said.

The best satellite passes were clumped during the overnight hours. So over the next few days, the team napped in their rental cars, waiting to see if Frankie could hear satellites calling home. But as the days ticked by, they had no luck. Time was running out.

Solving their RF problems

As the Northwood engineers troubleshot the problem with low signal power, they realized that with some minor changes, they could probably boost the signal. But this would require reconfiguring and calibrating Frankie.

The team scrambled to make these changes on the afternoon of October 4, before four passes in a row that night starting at 3 am. This was one of their last, best chances to make things work. After implementing the fix, the bedraggled Northwood team ate a muted dinner at their casino hotel before heading back out to the ground station. There, they waited in nervous silence for the first pass of the night.

When the initial satellite passed overhead, the space-to-ground power finally reached the requisite level. But Northwood could not decode the message due to a coaxial cable being plugged into the wrong port.

Then they missed the second pass because an inline amplifier was mistakenly switched off.

The third satellite pass failed due to a misrouted switch in Planet’s radio-frequency equipment.

So they were down to the final pass. But this time, there were no technical snafus. The peak of the signal came in clean and, to the team’s delight, with an even higher signal-to-noise ratio than anticipated. Frankie had done it. High fives and hugs all around. The small team crashed that morning before successfully repeating the process the next day.

After that, it was time to celebrate, Dakota style. The team decamped to Harriman’s, where Mendler’s new friend Jim Walter, the proprietor, served them shots. After a while, he disappeared into the basement and returned with Bobcat Bar T-shirts he wanted them to have as mementos. Later that night, the Northwood team played blackjack at the casino and lost their money at the slot machines.

Yet in the bigger picture, they had gambled and won. Mendler wanted to build fast, to show the world that her company had technical chops. They had thrown Frankie together and rushed headlong into the rough-and-tumble countryside, plugged in the antenna, and waited to see what happened. A lot of bad things could have happened, but instead, the team hit the jackpot.

“We were able to go from the design to actually build and deploy in that four-month time period,” Mendler said. “That resulted in a lot of different customers knocking down our door and helping to shape requirements for this next version of the system that we’re going to be able to start demoing soon. So in half a year, we radically revised our product, and we will begin actually putting them out in the field and operating this year. Time is very much at the forefront of our mind.”

Can ground stations fly high?

The fundamental premise behind Northwood is that a bottleneck constrains the ability to bring down data from space and that a lean, new-space approach can disrupt the existing industry. But is this the case?

“The demand for ground-based connectivity is rising,” said Caleb Henry, director of research at Quilty Space. “And your satellites are only as effective as your gateways.”

This trend is being driven not only by the rise of satellites in general but also by higher-resolution imaging satellites like Planet’s Pelican satellites or BlackSky’s Gen-3 satellites. There has also been a corresponding increase in the volume of data from synthetic aperture radar satellites, Henry said. Recent regulatory filings, such as one in the United Kingdom, underscore that data bottlenecks persist. However, Henry said it’s not clear whether this growth in data will be linear or exponential.

The idea of switching from large, single-dish antennas to phased arrays is not new, but it has yet to catch on. This is partly because there are questions about how expensive it would be to build large, capable phased-array antennas to talk to satellites hundreds of miles away—and how energy-intensive this would be.

Commercial satellite operators currently have a limited number of options for communicating with the ground. A Norwegian company, Kongsberg Satellite Services (or KSAT), has the largest network of ground stations. Other players include the Swedish Space Corporation, Leaf Space in Italy, Atlas Space Operations in Michigan, and more. Some of these companies have experimented with phased-array antennas, Henry said, but no one has made the technology the backbone of its network.

By far the largest data operator in low-Earth orbit, SpaceX, chose dish-based gateways for its ground stations around the world that talk to Starlink satellites. (The individual user terminals are phased-array antennas, however.)

Like reuse in the launch industry, a switch to phased-array antennas is potentially disruptive. Large dishes can only communicate with a single satellite at a time, whereas phased-array antennas can make multiple connections. This allows an operator to pack much more power into a smaller footprint on the ground. But as with SpaceX and reuse, the existing ground station operators seem to be waiting to see if anyone else can pull it off.

“The industry just has not trusted that the level of disruption phased-array antennas can bring is worth the cost,” Henry said. “Reusability wasn’t trusted, either, because no one could do it affordably and effectively.”

So can Northwood Space do it? One of the very first investors in SpaceX, the Founders Fund, believes so. It participated in the seed round for Northwood and again in a Series A round, valued at $30 million, which closed in April.

When Mendler first approached the fund about 18 months ago, it was an easy decision, said Delian Asparouhov, a partner at the fund.

“We probably only discussed it for about 15 minutes,” Asparouhov said. “Bridgit was perfect for this. I think we met on a Tuesday and had a term sheet signed on a Thursday night. It happened that fast.”

The Founders Fund had been studying the idea for a while. Rockets, satellites, and reentry vehicles get all of the attention, but Asparouhov said there is a huge need for ground systems and that phased-array technology has the ability to unlock a future of abundant data from space. His own company, Varda Space, is only able to communicate with its spacecraft for about 35 minutes every two hours. Varda vehicles conduct autonomous manufacturing in space, and the ability to have continuous data from its vehicles about their health and the work on board would be incredibly helpful.

“Infrastructure is not sexy,” Asparouhov said. “We needed someone who could turn that into a compelling story.”

Mendler, with her novel background, was the person. But she’s not just an eloquent spokesperson for the industry, he said. Building a company is hard, from finding facilities to navigating legal work to staffing up. Mendler appears to be acing these tasks. “Run through the LinkedIn of the team she’s recruited,” he said. “You’ll see that she’s knocked it out of the park.”

Ready or not

At Northwood, Mendler has entered a vastly different world from the entertainment industry or academia. She consults with fast-talking venture capitalists, foreign regulators, lawyers, rocket scientists, and occasionally the odd space journalist. It’s a challenging environment usually occupied by hotshot engineers—often arrogant, hard-charging men.

Mendler stands out in this setting. But her life has always been about thriving in tough environments.

Whatever happens, she has already achieved success in one important way. As an actor and singer, Mendler often felt as though she was dancing to someone else’s tune. No longer. At Northwood, she holds the microphone, but she is also a director and producer. If she fails—and let’s be honest, most new space companies do fail—it will be on her own terms.

Several weeks ago, Mendler was sitting at home, watching the movie Meet the Robinsons with her 6-year-old son. One of the main themes of the animated Disney film is that one should “keep moving forward” in life and that it’s possible to build a future that is optimistic for humanity—say, Star Trek rather than The Terminator or The Matrix.

“It shows you what the future could look like,” Mendler said of the movie. “And it gave me a little sad feeling, because it is so optimistic and beautiful. I think people can get discouraged by a dystopian outlook about what the future can look like. We need to remember we can build something positive.”

She will try to do just that.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.


A history of the Internet, part 2: The high-tech gold rush begins


The Web Era arrives, the browser wars flare, and a bubble bursts.

Welcome to the second article in our three-part series on the history of the Internet. If you haven’t already, read part one here.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. Later, it evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol.

By the late 1980s, investments from the National Science Foundation (NSF) had established an “Internet backbone” supporting hundreds of thousands of users worldwide. These users were mostly professors, researchers, and graduate students.

In the meantime, commercial online services like CompuServe were growing rapidly. These systems connected personal computer users, using dial-up modems, to a mainframe running proprietary software. Once online, people could read news articles and message other users. In 1989, CompuServe added the ability to send email to anyone on the Internet.

In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Coleridge.

A decade later, in his book “Dream Machines/Computer Lib,” he described Xanadu thusly: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.

As the Internet grew, it became more and more difficult to find things on it. There were lots of cool documents like the Hitchhiker’s Guide To The Internet, but to read them, you first had to know where they were.

The community of helpful programmers on the Internet leapt to the challenge. Alan Emtage at McGill University in Montreal wrote a tool called Archie. It searched a list of public file transfer protocol (FTP) servers. You still had to know the file name you were looking for, but Archie would let you download it no matter what server it was on.

An improved search engine was Gopher, written by a team headed by Mark McCahill at the University of Minnesota. It used a text-based menu system so that users didn’t have to remember file names or locations. Gopher servers could display a customized collection of links inside nested menus, and they integrated with other services like Archie and Veronica to help users search for more resources.

Gopher is a text-based Internet search and retrieval system. It’s still running in 2025! Jeremy Reimer
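Part of Gopher’s appeal was how lightweight the protocol is. Here is a rough Python sketch of a Gopher request; the public server named below is an assumption for illustration and may not always be reachable. A client connects to port 70, sends a selector line, and reads plain text until the server closes the connection.

import socket

# Rough sketch of the Gopher protocol (RFC 1436). The host is an assumed
# public Gopher server, used only for illustration.
def gopher_fetch(host, selector="", port=70):
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(selector.encode() + b"\r\n")  # empty selector asks for the root menu
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:                           # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# Each menu line encodes an item type, display text, selector, host, and port.
print(gopher_fetch("gopher.floodgap.com"))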

A Gopher server could provide many of the things we take for granted today: search engines, personal pages that could contain links, and downloadable files. But this wasn’t enough for a British computer scientist who was working at CERN, an intergovernmental institute that operated the world’s largest particle physics lab.

The World Wide Web

Hypertext had come a long way since Ted Nelson had coined the word in 1965. Bill Atkinson, a member of the original Macintosh development team, released HyperCard in 1987. It used the Mac’s graphical interface to let anyone develop “stacks,” collections of text, graphics, and sounds that could be connected together with clickable links. There was no networking, but stacks could be shared with other users by sending the files on a floppy disk.

The home screen of HyperCard 1.0 for Macintosh. Jeremy Reimer

Hypertext was so big that conferences were held just to discuss it in 1987 and 1988. Even Ted Nelson had finally found a sponsor for his personal dream: Autodesk founder John Walker had agreed to spin up a subsidiary to create a commercial version of Xanadu.

It was in this environment that CERN fellow Tim Berners-Lee drew up his own proposal in March 1989 for a new hypertext environment. His goal was to make it easier for researchers at CERN to collaborate and share information about new projects.

The proposal (which he called “Mesh”) had several objectives. It would provide a system for connecting information about people, projects, documents, and hardware being developed at CERN. It would be decentralized and distributed over many computers. Not all the computers at CERN were the same—there were Digital Equipment minis running VMS, some Macintoshes, and an increasing number of Unix workstations. Each of them should be able to view the information in the same way.

As Berners-Lee described it, “There are few products which take Ted Nelson’s idea of a wide ‘docuverse’ literally by allowing links between nodes in different databases. In order to do this, some standardization would be necessary.”

The original proposal document for the web, written in Microsoft Word for Macintosh 4.0, downloaded from Tim Berners-Lee’s website. Credit: Jeremy Reimer

The document ended by describing the project as “practical” and estimating that it might take two people six to 12 months to complete. Berners-Lee’s manager called it “vague, but exciting.” Robert Cailliau, who had independently proposed a hypertext system for CERN, joined Berners-Lee to start designing the project.

The computer Berners-Lee used was a NeXT cube, from the company Steve Jobs started after he was kicked out of Apple. NeXT workstations were expensive, but they came with a software development environment that was years ahead of its time. If you could afford one, it was like a coding accelerator. John Carmack would later write DOOM on a NeXT.

The NeXT workstation that Tim Berners-Lee used to create the World Wide Web. Please do not power down the World Wide Web. Credit: Coolcaesar (CC BY-SA 3.0)

Berners-Lee called his application “WorldWideWeb.” The software consisted of a server, which delivered pages of text over a new protocol called the “Hypertext Transfer Protocol,” or HTTP, and a browser that rendered the text. The browser translated markup code like “h1” to indicate a larger header font or “a” to indicate a link. There was also a graphical webpage editor, but it didn’t work very well and was abandoned.
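
To make the mechanics concrete, here is a rough Python sketch (not Berners-Lee’s actual code) of the kind of exchange that early software performed: the client sends a bare GET request, and the server answers with a page of marked-up text for a browser to render. The address, port, and page content here are invented for illustration.

import socket
import threading
import time

# An invented page using the kind of markup the first browser understood:
# "h1" for a large header and "a" for a link.
PAGE = (
    "<h1>World Wide Web</h1>\n"
    "<p>Documents about the project, linked together.</p>\n"
    '<a href="http://example.org/FAQ.html">Frequently asked questions</a>\n'
)

def serve_once(host="127.0.0.1", port=8080):
    # Answer a single request in the spirit of the earliest web servers:
    # read a bare "GET /path" line and send back the marked-up page.
    with socket.socket() as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode("ascii", "replace")
            if request.startswith("GET"):
                conn.sendall(PAGE.encode("ascii"))

def fetch(host="127.0.0.1", port=8080, path="/"):
    # Minimal client: send the request line, read back the whole document.
    with socket.socket() as cli:
        cli.connect((host, port))
        cli.sendall(f"GET {path}\r\n".encode("ascii"))
        return cli.recv(65536).decode("ascii", "replace")

if __name__ == "__main__":
    threading.Thread(target=serve_once, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening
    print(fetch())

Run as a script, it prints the raw markup that a browser of the era would then format on screen.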

The very first website was published, running on the development NeXT cube, on December 20, 1990. Anyone who had a NeXT machine and access to the Internet could view the site in all its glory.

The original WorldWideWeb browser running on NeXTstep 3, browsing the world’s first webpage. Credit: Jeremy Reimer

Because NeXT only sold 50,000 computers in total, that intersection did not represent a lot of people. Eight months later, Berners-Lee posted a reply to a question about interesting projects on the alt.hypertext Usenet newsgroup. He described the World Wide Web project and included links to all the software and documentation.

That one post changed the world forever.

Mosaic

On December 9, 1991, President George H.W. Bush signed into law the High Performance Computing Act, also known as the Gore Bill. The bill paid for an upgrade of the NSFNET backbone, as well as a separate funding initiative for the National Center for Supercomputing Applications (NCSA).

NCSA, based out of the University of Illinois, became a dream location for computing research. “NCSA was heaven,” recalled Alex Totic, who was a student there. “They had all the toys, from Thinking Machines to Crays to Macs to beautiful networks. It was awesome.” As is often the case in academia, the professors came up with research ideas but assigned most of the actual work to their grad students.

One of those students was Marc Andreessen, who joined NCSA as a part-time programmer for $6.85 an hour. Andreessen was fascinated by the World Wide Web, especially browsers. A new browser for Unix computers, ViolaWWW, was making the rounds at NCSA. No longer confined to the NeXT workstation, the web had caught the attention of the Unix community. But that community was still too small for Andreessen.

“To use the Net, you had to understand Unix,” he said in an interview with Forbes. “And the current users had no interest in making it easier. In fact, there was a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out.”

Andreessen enlisted the help of his colleague, programmer Eric Bina, and started developing a new web browser in December 1992. In a little over a month, they released version 0.5 of “NCSA X Mosaic”—so called because it was designed to work with Unix’s X Window System. Ports for the Macintosh and Windows followed shortly thereafter.

Being available on the most popular graphical computers changed the trajectory of the web. In just 18 months, millions of copies of Mosaic were downloaded, and the rate was accelerating. The riffraff was here to stay.

Netscape

The instant popularity of Mosaic caused the management at NCSA to take a deeper interest in the project. Jon Mittelhauser, who co-wrote the Windows version, recalled that the small team “suddenly found ourselves in meetings with forty people planning our next features, as opposed to the five of us making plans at 2 am over pizzas and Cokes.”

Andreessen was told to step aside and let more experienced managers take over. Instead, he left NCSA and moved to California, looking for his next opportunity. “I thought I had missed the whole thing,” Andreessen said. “The overwhelming mood in the Valley when I arrived was that the PC was done, and by the way, the Valley was probably done because there was nothing else to do.”

But his reputation had preceded him. Jim Clark, the founder of Silicon Graphics, was also looking to start something new. A friend had shown him a demo of Mosaic, and Clark reached out to meet with Andreessen.

At a meeting, Andreessen pitched the idea of building a “Mosaic killer.” He showed Clark a graph that showed web users doubling every five months. Excited by the possibilities, the two men founded Mosaic Communications Corporation on April 4, 1994. Andreessen quickly recruited programmers from his former team, and they got to work. They codenamed their new browser “Mozilla” since it was going to be a monster that would devour Mosaic. Beta versions were titled “Mosaic Netscape,” but the University of Illinois threatened to sue the new company. To avoid litigation, the name of the company and browser were changed to Netscape, and the programmers audited their code to ensure none of it had been copied from NCSA.

Netscape became the model for all Internet startups to follow. Programmers were given unlimited free sodas and encouraged to basically never leave the office. “Netscape Time” accelerated software development schedules, and because updates could be delivered over the Internet, old principles of quality assurance went out the window. And the business model? It was simply to “get big fast,” and profits could be figured out later.

Work proceeded quickly, and the 1.0 version of Netscape Navigator and the Netsite web server were released on December 15, 1994, for Windows, Macintosh, and Unix systems running X Windows. The browser was priced at $39 for commercial users, but there was no charge for “academic and non-profit use, as well as for free evaluation purposes.”

Version 0.9 was called “Mosaic Netscape,” and the logo and company were still Mosaic. Credit: Jeremy Reimer

Netscape quickly became the standard. Within six months, it captured over 70 percent of the market share for web browsers. On August 9, 1995, only 16 months after the founding of the company, Netscape held its initial public offering. A last-minute decision doubled the offering price to $28 per share, and on the first day of trading, the stock soared to $75 and closed at $58.25. The Web Era had officially arrived.

The web battles proprietary solutions

The excitement over a new way to transmit text and images to the public over phone lines wasn’t confined to the World Wide Web. Commercial online systems like CompuServe were also evolving to meet the graphical age. These companies released attractive new front-ends for their services that ran on DOS, Windows, and Macintosh computers. There were also newer services that were graphical from the start, like Prodigy, a joint venture between IBM and Sears, and an upstart that had sprung from the ashes of a Commodore 64 service called Quantum Link. This was America Online, or AOL.

Even Microsoft was getting into the act. Bill Gates believed that the “Information Superhighway” was the future of computing, and he wanted to make sure that all roads went through his company’s toll booth. The highly anticipated Windows 95 was scheduled to ship with a bundled dial-up online service called the Microsoft Network, or MSN.

At first, it wasn’t clear which of these online services would emerge as the winner. But people assumed that at least one of them would beat the complicated, nerdy Internet. CompuServe was the oldest, but AOL was nimbler and found success by sending out millions of free “starter” disks (and later, CDs) to potential customers. Microsoft was sure that bundling MSN with the upcoming Windows 95 would ensure victory.

Most of these services decided to hedge their bets by adding a sort of “side access” to the World Wide Web. After all, if they didn’t, their competitors would. At the same time, smaller companies (many of them former bulletin board services) started becoming Internet service providers. These smaller “ISPs” could charge less money than the big services because they didn’t have to create any content themselves. Thousands of new websites were appearing on the Internet every day, much faster than new sections could be added to AOL or CompuServe.

The tipping point happened very quickly. Before Windows 95 had even shipped, Bill Gates wrote his famous “Internet Tidal Wave” memo, where he assigned the Internet the “highest level of importance.” MSN was quickly changed to become more of a standard ISP and moved all of its content to the web. Microsoft rushed to release its own web browser, Internet Explorer, and bundled it with the Windows 95 Plus Pack.

The hype and momentum were entirely with the web now. It was the most exciting, most transformative technology of its time. The decade-long battle to control the Internet by forcing a shift to a new OSI standards model was forgotten. The web was all anyone cared about, and the web ran on TCP/IP.

The browser wars

Netscape had never expected to make a lot of money from its browser, as it was assumed that most people would continue to download new “evaluation” versions for free. Executives were pleasantly surprised when businesses started sending Netscape huge checks. The company went from $17 million in revenue in 1995 to $346 million the following year, and the press started calling Marc Andreessen “the new Bill Gates.”

The old Bill Gates wasn’t having any of that. Following his 1995 memo, Microsoft worked hard to improve Internet Explorer and made it available for free, including to business users. Netscape tried to fight back. It added groundbreaking new features like JavaScript, which was inspired by LISP but with a syntax similar to Java, the hot new programming language from Sun Microsystems. But it was hard to compete with free, and Netscape’s market share started to fall. By 1996, both browsers had reached version 3.0 and were roughly equal in terms of features. The battle continued, but as the free Apache web server spread, Netscape’s other source of revenue dried up as well. The writing was on the wall.

There was no better way to declare your allegiance to a web browser in 1996 than adding “Best Viewed In” above one of these icons. Credit: Jeremy Reimer

The dot-com boom

In 1989, the NSF lifted the restrictions on providing commercial access to the Internet, and by 1991, it had removed all barriers to commercial trade on the network. With the sudden ascent of the web, thanks to Mosaic, Netscape, and Internet Explorer, new companies jumped into this high-tech gold rush. But at first, it wasn’t clear what the best business strategy was. Users expected everything on the web to be free, so how could you make money?

Many early web companies started as hobby projects. In 1994, Jerry Yang and David Filo were electrical engineering PhD students at Stanford University. After Mosaic started popping off, they began collecting and trading links to new websites. Thus, “Jerry’s Guide to the World Wide Web” was born, running on Yang’s Sun workstation. Renamed Yahoo! (Yet Another Hierarchical, Officious Oracle), the site exploded in popularity. Netscape put multiple links to Yahoo on its main navigation bar, which further accelerated growth. “We weren’t really sure if you could make a business out of it, though,” Yang told Fortune. Nevertheless, venture capital companies came calling. Sequoia, which had made millions investing in Apple, put in $1 million for 25 percent of Yahoo.

Yahoo.com as it would have appeared in 1995. Credit: Jeremy Reimer

Another hobby site, AuctionWeb, was started in 1995 by Pierre Omidyar. Running on his own home server using the regular $30 per month service from his ISP, the site let people buy and sell items of almost any kind. When traffic started growing, his ISP told him it was increasing his Internet fees to $250 per month, as befitting a commercial enterprise. Omidyar decided he would try to make it a real business, even though he didn’t have a merchant account for credit cards or even a way to enforce the new 5 percent or 2.5 percent royalty charges. That didn’t matter, as the checks started rolling in. He found a business partner, changed the name to eBay, and the rest was history.

AuctionWeb (later eBay) as it would have appeared in 1995. Credit: Jeremy Reimer

In 1993, Jeff Bezos, a senior vice president at a hedge fund company, was tasked with investigating business opportunities on the Internet. He decided to create a proof of concept for what he described as an “everything store.” He chose books as an ideal commodity to sell online, since a book in one store was identical to one in another, and a website could offer access to obscure titles that might not get stocked in physical bookstores.

He left the hedge fund company, gathered investors and software development talent, and moved to Seattle. There, he started Amazon. At first, the site wasn’t much more than an online version of an existing bookseller catalog called Books In Print. But over time, Bezos added inventory data from the two major book distributors, Ingram and Baker & Taylor. The promise of access to every book in the world was exciting for people, and the company grew quickly.

Amazon.com as it would have appeared in 1995. Credit: Jeremy Reimer

The explosive growth of these startups fueled a self-perpetuating cycle. As publications like Wired experimented with online versions of their magazines, they invented and sold banner ads to fund their websites. The best customers for these ads were other web startups. These companies wanted more traffic, and they knew ads on sites like Yahoo were the best way to get it. Yahoo salespeople could then turn around and point to their exponential ad sales curves, which caused Yahoo stock to rise. This encouraged people to fund more web startups, which would all need to advertise on Yahoo. These new startups also needed to buy servers from companies like Sun Microsystems, causing those stocks to rise as well.

The crash

In the latter half of the 1990s, it looked like everything was going great. The economy was booming, thanks in part to the rise of the World Wide Web and the huge boost it gave to computer hardware and software companies. The NASDAQ index of tech-focused stocks painted a clear picture of the boom.

The NASDAQ composite index in the 1990s. Credit: Jeremy Reimer

Federal Reserve chairman Alan Greenspan called this phenomenon “irrational exuberance” but didn’t seem to be in a hurry to stop it. The fact that most new web startups didn’t have a realistic business model didn’t seem to bother investors. Sure, Webvan might have been paying more to deliver groceries than it earned from customers, but look at that growth curve!

The exuberance couldn’t last forever. The NASDAQ peaked at 5,048.62 in March 2000 and started to go down. In one month, it lost 34 percent of its value, and by late 2001, it was trading below 2,000. Web companies laid off employees or went out of business completely. The party was over.

Andreessen said that the tech crash scarred him. “The overwhelming message to our generation in the early nineties was ‘You’re dirty, you’re all about grunge—you guys are fucking losers!’ Then the tech boom hit, and it was ‘We are going to do amazing things!’ And then the roof caved in, and the wisdom was that the Internet was a mirage. I 100 percent believed that because the rejection was so personal—both what everybody thought of me and what I thought of myself.”

But while some companies quietly celebrated the end of the whole Internet thing, others would rise from the ashes of the dot-com collapse. That’s the subject of our third and final article.


I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.

A history of the Internet, part 2: The high-tech gold rush begins Read More »

ex-fcc-chair-ajit-pai-is-now-a-wireless-lobbyist—and-enemy-of-cable-companies

Ex-FCC Chair Ajit Pai is now a wireless lobbyist—and enemy of cable companies


Pai’s return as CTIA lobbyist fuels industry-wide battle over spectrum rights.

Ajit Pai, former chairman of the Federal Communications Commission, during a Senate Commerce Committee hearing on Wednesday, April 9, 2025. Credit: Getty Images | Bloomberg

Ajit Pai is back on the telecom policy scene as chief lobbyist for the mobile industry, and he has quickly managed to anger a coalition that includes both cable companies and consumer advocates.

Pai was the Federal Communications Commission chairman during President Trump’s first term and then spent several years at private equity firm Searchlight Capital. He changed jobs in April, becoming the president and CEO of wireless industry lobby group CTIA. Shortly after, he visited the White House to discuss wireless industry priorities and had a meeting with Brendan Carr, the current FCC chairman who was part of Pai’s Republican majority at the FCC from 2017 to 2021.

Pai’s new job isn’t surprising. He was once a lawyer for Verizon, and it’s not uncommon for FCC chairs and commissioners to be lobbyists before or after terms in government.

Pai’s move to CTIA means he is now battling a variety of industry players and advocacy groups over the allocation of spectrum. As always, wireless companies AT&T, Verizon, and T-Mobile want more spectrum and the exclusive rights to use it. The fight puts Pai at odds with the cable industry that cheered his many deregulatory actions when he led the FCC.

Pai wrote a May 4 op-ed in The Wall Street Journal arguing that China is surging ahead of the US in 5G deployment and that “the US doesn’t even have enough licensed spectrum available to keep up with expected consumer demand.” He said that Congress must restore the FCC’s lapsed authority to auction spectrum licenses, and auction off “at least 600 megahertz of midband spectrum for future 5G services.”

“During the first Trump administration, the US was determined to lead the world in wireless innovation—and by 2021 it did,” Pai wrote. “But that urgency and sense of purpose have diminished. With Mr. Trump’s leadership, we can rediscover both.”

Pai’s op-ed drew a quick rebuke from a group called Spectrum for the Future, which alleged that Pai mangled the facts.

“Mr. Pai’s arguments are wrong on the facts—and wrong on how to accelerate America’s global wireless leadership,” the vaguely named group said in a May 8 press release that accused Pai of “stunning hypocrisy.” Spectrum for the Future said Pai is wrong about the existence of a spectrum shortage, wrong about how much money a spectrum auction could raise, and wrong about the cost of reallocating spectrum from the military to mobile companies.

“Mr. Pai attributes the US losing its lead in 5G availability to the FCC’s lapsed spectrum auction authority. He’d be more accurate to blame his own members’ failure to build out their networks,” the group said.

Big Cable finds allies

Pai’s op-ed said that auctioning 600 MHz “could raise as much as $200 billion” to support other US government priorities. Spectrum for the Future called this an “absurd claim” that “presumes that this auction of 600 MHz could approach the combined total ($233 billion) that has been raised by every prior spectrum auction (totaling nearly 6 GHz of bandwidth) in US history combined.”

The group also said Pai “completely ignores the immense cost to taxpayers to relocate incumbent military and intelligence systems out of the bands CTIA covets for its own use.” Spectrum for the Future didn’t mention that one of the previous auctions, for the 3.7–3.98 GHz band, netted over $81 billion in winning bids.
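
Two rough extrapolations, using only the figures cited above, show why the two sides read the same auction history so differently. This is back-of-envelope Python, and it ignores that different bands fetch very different prices, so it is illustrative rather than predictive.

# Figures cited above: dollars in billions, bandwidth in MHz.
prior_auction_proceeds = 233      # all prior US spectrum auctions combined
prior_auction_bandwidth = 6000    # "nearly 6 GHz" auctioned in total
c_band_proceeds = 81              # the 3.7-3.98 GHz auction's winning bids
c_band_bandwidth = 280            # width of that band in MHz
proposed_bandwidth = 600          # midband MHz Pai wants auctioned

# Priced at the historical average, 600 MHz looks nothing like $200 billion...
print(proposed_bandwidth * prior_auction_proceeds / prior_auction_bandwidth)  # ~23.3

# ...but priced like the C-band auction alone, it gets much closer.
print(proposed_bandwidth * c_band_proceeds / c_band_bandwidth)  # ~173.6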

So who is behind Spectrum for the Future? The group’s website lists 18 members, including the biggest players in the cable industry. Comcast, Charter, Cox, and lobby group NCTA-The Internet & Television Association are all members of Spectrum for the Future. (Disclosure: The Advance/Newhouse Partnership, which owns 12 percent of Charter, is part of Advance Publications, which owns Ars Technica parent Condé Nast.)

When contacted by Ars, a CTIA spokesperson criticized cable companies for “fighting competition” and said the cable firms are being “disingenuous.” Charter and Cox declined to answer our questions about their involvement in Spectrum for the Future. Comcast and the NCTA didn’t respond to requests for comment.

The NCTA and big cable companies are no strangers to lobbying the FCC and Congress and could fight for CBRS entirely on their own. But as it happens, some consumer advocates who regularly oppose the cable industry on other issues are on cable’s side in this battle.

With Spectrum for the Future, the cable industry has allied not just with consumer advocates but also with small wireless ISPs and operators of private networks that use spectrum the big mobile companies want for themselves. Another group that is part of the coalition represents schools and libraries that use spectrum to provide local services.

For cable, joining with consumer groups, small ISPs, and others in a broad coalition has an obvious advantage from a public relations standpoint. “This is a lot of different folks who are in it for their own reasons. Sometimes that’s a big advantage because it makes it more authentic,” said Harold Feld, senior VP of consumer advocacy group Public Knowledge, which is part of Spectrum for the Future.

In some cases, a big company will round up nonprofits to which it has donated to make a show of broad public support for one of the company’s regulatory priorities—like a needed merger approval. That’s not what happened here, according to Feld. While cable companies probably provided most of the funding for Spectrum for the Future, the other members are keenly interested in fighting the wireless lobby over spectrum access.

“There’s a difference between cable being a tentpole member and this being cable with a couple of friends on the side,” Feld told Ars. Cable companies “have the most to lose, they have the most initial resources. But all of these other guys who are in here, I’ve been on these calls, they’re pretty active. There are a lot of diverse interests in this, which sometimes makes it easier to lobby, sometimes makes it harder to lobby because you all want to talk about what’s important to you.”

Feld didn’t help write the group’s press release criticizing Pai but said the points made are “all things I agree with.”

The “everybody but Big Mobile” coalition

Public Knowledge and New America’s Open Technology Institute (OTI), another Spectrum for the Future member, are both longtime proponents of shared spectrum. OTI’s Wireless Future Project director, Michael Calabrese, told Ars that Spectrum for the Future is basically the “everybody but Big Mobile” wireless coalition and “a very broad but ad hoc coalition.”

While Public Knowledge and OTI advocate for shared spectrum in many frequency bands, Spectrum for the Future is primarily focused on one: the Citizens Broadband Radio Service (CBRS), which spans from 3550 MHz to 3700 MHz. The CBRS spectrum is used by the Department of Defense and shared with non-federal users.

CBRS users in the cable industry and beyond want to ensure that CBRS remains available to them and free of high-power mobile signals that would crowd out lower-power operations. They were disturbed by AT&T’s October 2024 proposal to move CBRS to the lower part of the 3 GHz band, which is also used by the Department of Defense, and auction existing CBRS frequencies to 5G wireless companies “for licensed, full-power use.”

The NCTA told the FCC in December that “AT&T’s proposal to reallocate the entire 3 GHz band is unwarranted, impracticable, and unworkable and is based on the false assertion that the CBRS band is underutilized.”

Big mobile companies want the CBRS spectrum because it is adjacent to frequencies that are already licensed to them. The Department of Defense seems to support AT&T’s idea, even though it would require moving some military operations and sharing the spectrum with non-federal users.

Pentagon plan similar to AT&T’s

In a May research note provided to Ars, New Street Research Policy Advisor Blair Levin reported some details of a Department of Defense proposal for several bands of spectrum, including CBRS. The White House asked the Department of Defense “to come up with a plan to enable allocation of mid-band exclusive-use spectrum,” and the Pentagon recently started circulating its initial proposal.

The Pentagon plan is apparently similar to AT&T’s, as it would reportedly move current CBRS licensees and users to the lower 3 GHz band to clear spectrum for auctions.

“It represents the first time we can think of where the government would change the license terms of one set of users to benefit a competitor of that first set of users… While the exclusive-use spectrum providers would see this as government exercising its eminent domain rights as it has traditionally done, CBRS users, particularly cable, would see this as the equivalent of a government exercis[ing] its eminent domain rights to condemn and tear down a Costco to give the land to a Walmart,” Levin wrote.

If the proposal is implemented, cable companies would likely sue the government “on the grounds that it violates their property rights” under the priority licenses they purchased to use CBRS, Levin wrote. Levin’s note said he doesn’t think this proposal is likely to be adopted, but it shows that “the game is afoot.”

CBRS is important to cable companies because they have increasingly focused on selling mobile service as another revenue source on top of their traditional TV and broadband businesses. Cable firms got into the mobile business by reselling network access from the likes of Verizon. They’ve been increasing the use of CBRS, reducing their reliance on the major mobile companies, although a recent Light Reading article indicates that cable’s progress with CBRS deployment has been slow.

Then-FCC Chairman Ajit Pai with FCC Commissioner Brendan Carr before the start of a Senate Commerce Committee hearing on Thursday, Aug. 16, 2018. Credit: Getty Images | Bill Clark

In its statement to Ars, CTIA said the cable industry “opposes full-power 5G access in the US at every opportunity” in CBRS and other spectrum bands. Cable companies are “fighting competition” from wireless operators “every chance they can,” CTIA said. “With accelerating losses in the marketplace, their advocacy is now more aggressive and disingenuous.”

The DoD plan that reportedly mirrors AT&T’s proposal seems to represent a significant change from the Biden-era Department of Defense’s stance. In September 2023, the department issued a report saying that sharing the 3.1 GHz band with non-federal users would be challenging and potentially cause interference, even if rules were in place to protect DoD operations.

“DoD is concerned about the high possibility that non-Federal users will not adhere to the established coordination conditions at all times; the impacts related to airborne systems, due to their range and speed; and required upgrades to multiple classes of ships,” the 2023 report said. We contacted the Department of Defense and did not receive a response.

Levin quoted Calabrese as saying the new plan “would pull the rug out from under more than 1,000 CBRS operators that have deployed more than 400,000 base stations. While they could, in theory, share DoD spectrum lower in the band, that spectrum will now be so congested it’s unclear how or when that could be implemented.”

Small ISP slams “AT&T and its cabal of telecom giants”

AT&T argues that CBRS spectrum is underutilized and should be repurposed for commercial mobile use because it “resides between two crucial, high-power, licensed 5G bands”—specifically 3.45–3.55 GHz and 3.7–3.98 GHz. It said its proposal would expand the CBRS band’s total size from 150 MHz to 200 MHz by relocating it to 3.1–3.3 GHz.

Keefe John, CEO of a Wisconsin-based wireless home Internet provider called Ethoplex, argued that “AT&T and its cabal of telecom giants” are “scheming to rip this resource from the hands of small operators and hand it over to their 5G empire. This is nothing less than a brazen theft of America’s digital future, and we must fight back with unrelenting resolve.”

John is vice chairperson of the Wireless Internet Service Providers Association (WISPA), which represents small ISPs and is a member of Spectrum for the Future. He wrote that CBRS is a “vital spectrum band that has become the lifeblood of rural connectivity” because small ISPs use it to deliver fixed wireless Internet service to underserved areas.

John called the AT&T proposal “a deliberate scheme to kneecap WISPs, whose equipment, painstakingly deployed, would be rendered obsolete in the lower band.” Instead of moving CBRS from one band to another, John said CBRS should stay on its current spectrum and expand into additional spectrum “to ensure small providers have a fighting chance.”

An AT&T spokesperson told Ars that “CBRS can coexist with incumbents in the lower 3 GHz band, and with such high demand for spectrum, it should. Thinking creatively about how to most efficiently use scarce spectrum to meet crucial needs is simply good public policy.”

AT&T said that an auction “would provide reimbursement for costs associated with” moving CBRS users to other spectrum and that “the Department of Defense has already stated that incumbents in the lower 3 GHz could share with low-power commercial uses.”

“Having a low-power use sandwiched between two high-power use cases is an inefficient use of spectrum that doesn’t make sense. Our proposal would fix that inefficiency,” AT&T said.

AT&T has previously said that under its proposal, CBRS priority license holders “would have the choice of relocating to the new CBRS band, accepting vouchers they can use toward bidding on new high-power licenses, or receiving a cash payment in exchange for the relinquishment of their priority rights.”

Democrat warns of threat to naval operations

Reallocating spectrum could require the Navy to move from the current CBRS band to the lower part of 3 GHz. US Senator Maria Cantwell (D-Wash.) sent a letter urging the Department of Defense to avoid major changes, saying the current sharing arrangement “allows the Navy to continue using high-power surveillance and targeting radars to protect vessels and our coasts, while also enabling commercial use of the band when and where the Navy does not need access.”

Moving CBRS users would “disrupt critical naval operations and homeland defense” and “undermine an innovative ecosystem of commercial wireless technology that will be extremely valuable for robotic manufacturing, precision agriculture, ubiquitous connectivity in large indoor spaces, and private wireless networks,” Cantwell wrote.

Cantwell said she is also concerned that “a substantial number of military radar systems that operate in the lower 3 GHz band” will be endangered by moving CBRS. She pointed out that the DoD’s September 2023 report said the 3.1 GHz range has “unique spectrum characteristics” that “provide long detection ranges, tracking accuracy, and discrimination capability required for DoD radar systems.” The spectrum “is low enough in the frequency range to maintain a high-power aperture capability in a transportable system” and “high enough in the frequency range that a sufficient angular accuracy can be maintained for a radar track function for a fire control capability,” the DoD report said.

Spectrum for the Future members

In addition to joining the cable industry in Spectrum for the Future, public interest groups are fighting for CBRS on their own. Public Knowledge and OTI teamed up with the American Library Association, the Benton Institute for Broadband & Society, the Schools Health & Libraries Broadband (SHLB) Coalition, and others in a November 2024 FCC filing that praised the pro-consumer virtues of CBRS.

“CBRS has been the most successful innovation in wireless technology in the last decade,” the groups said. They accused the big three mobile carriers of “seeking to cripple CBRS as a band that promotes not only innovation, but also competition.”

These advocacy groups are interested in helping cable companies and small home Internet providers compete against the big three mobile carriers because that opens new options for consumers. But the groups also point to many other use cases for CBRS, writing:

CBRS has encouraged the deployment of “open networks” designed to host users needing greater flexibility and control than that offered by traditional CMRS [Commercial Mobile Radio Services] providers, at higher power and with greater interference protection than possible using unlicensed spectrum. Manufacturing campuses (such as John Deere and Dow Chemical), transit hubs (Miami International Airport, Port of Los Angeles), supply chain and logistic centers (US Marine Corps), sporting arenas (Philadelphia’s Wells Fargo Center), school districts and libraries (Fresno Unified School District, New York Public Library) are all examples of a growing trend toward local spectrum access fueling purpose-built private LTE/5G networks for a wide variety of use cases.

The SHLB told Ars that “CBRS spectrum plays a critical role in helping anchor institutions like schools and libraries connect their communities, especially in rural and underserved areas where traditional broadband options may be limited. A number of our members rely on access to shared and unlicensed spectrum to deliver remote learning and essential digital services, often at low or no cost to the user.”

Spectrum for the Future’s members also include companies that sell services to help customers deploy CBRS networks, as well as entities like Miami International Airport that deploy their own CBRS-based private cellular networks. The NCTA featured Miami International Airport’s private network in a recent press release, saying that CBRS helped the airport “deliver more reliable connectivity for visitors while also powering a robust Internet of Things network to keep the airport running smoothly.”

Spectrum for the Future doesn’t list any staff on its website. Media requests are routed to a third-party public relations firm. An employee of the public relations firm declined to answer our questions about how Spectrum for the Future is structured and operated but said it is “a member-driven coalition with a wide range of active supporters and contributors, including innovators, anchor institutions, and technology companies.”

Spectrum for the Future appears to be organized by Salt Point Strategies, a public affairs consulting firm. Salt Point Spectrum Policy Analyst David Wright is described as Spectrum for the Future’s policy director in an FCC filing. We reached out to Wright and didn’t receive a response.

One Big Beautiful Bill is a battleground

Senate Commerce Committee Chairman Ted Cruz (R-Texas) at a hearing on Tuesday, January 28, 2025. Credit: Getty Images | Tom Williams

The Trump-backed “One Big Beautiful Bill,” approved by the House, is one area of interest for both sides of the CBRS debate. The bill would restore the FCC’s expired authority to auction spectrum and require new auctions. One question is whether the bill will simply require the FCC to auction a minimum amount of spectrum or if it will require specific bands to be auctioned.

WISPA provided us with a statement about the version that passed the House, saying the group is glad it “excludes the 5.9 GHz and 6 GHz bands from its call to auction off 600 megahertz of spectrum” but worried because the bill “does not exclude the widely used and previously auctioned Citizens Broadband Radio Service (CBRS) band from competitive bidding, leaving it vulnerable to sale and/or major disruption.”

WISPA said that “spectrum auctions are typically designed to favor large players” and “cut out small and rural providers who operate on the front lines of the digital divide.” WISPA said that over 60 percent of its members “use CBRS to deliver high-quality broadband to hard-to-serve and previously unserved Americans.”

On June 5, Sen. Ted Cruz (R-Texas) released the text of the Senate Commerce Committee proposal, which also does not exclude the 3550–3700 MHz band from potential auctions. Pai and AT&T issued statements praising Cruz’s bill.

Pai said that Cruz’s “bold approach answers President Trump’s call to keep all options on the table and provides the President with full flexibility to identify the right bands to meet surging consumer demand, safeguard our economic competitiveness, and protect national security.” AT&T said that “by renewing the FCC’s auction authority and creating a pipeline of mid-band spectrum, the Senate is taking a strong step toward meeting consumers’ insatiable demand for mobile data.”

The NCTA said it welcomed the plan to restore the FCC’s auction authority but urged lawmakers to “reject the predictable calls from large mobile carriers that seek to cripple competition and new services being offered over existing Wi-Fi and CBRS bands.”

Licensed, unlicensed, and in-between

Spectrum is generally made available on a licensed or unlicensed basis. Wireless carriers pay big bucks for licenses that grant them exclusive use of spectrum bands on which they deploy nationwide cellular networks. Unlicensed spectrum—like the bands used in Wi-Fi—can be used by anyone without a license as long as they follow rules that prevent interference with other users and services.

The FCC issued rules for the CBRS band in 2015 during the Obama administration, using a somewhat different kind of system. The FCC rules allow “for dynamic spectrum sharing in the 3.5 GHz band between the Department of Defense (DoD) and commercial spectrum users,” the National Telecommunications and Information Administration notes. “DoD users have protected, prioritized use of the spectrum. When the government isn’t using the airwaves, companies and the public can gain access through a tiered framework.”

Instead of a binary licensed-versus-unlicensed system, the FCC implemented a three-tiered system of access. Tier 1 is for incumbent users of the band, including federal users and fixed satellite service. Tier 1 users receive protection against harmful interference from Tier 2 and Tier 3 users.

Tier 2 of CBRS consists of Priority Access Licenses (PALs) that are distributed on a county-by-county basis through competitive bidding. Tier 2 users get interference protection from users of Tier 3, which is made available in a manner similar to unlicensed spectrum.

Tier 3, known as General Authorized Access (GAA), “is licensed-by-rule to permit open, flexible access to the band for the widest possible group of potential users,” the FCC says. Tier 3 users can operate throughout the 3550–3700 MHz band but “must not cause harmful interference to Incumbent Access users or Priority Access Licensees and must accept interference from these users. GAA users also have no expectation of interference protection from other GAA users.”
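
The protection rules boil down to a strict priority ordering. Here is a minimal Python sketch of that hierarchy; the tier assignments mirror the FCC framework described above, while the class names and example checks are just illustrative.

from enum import IntEnum

class Tier(IntEnum):
    # CBRS access tiers, highest priority first (per the FCC framework).
    INCUMBENT = 1        # federal users, fixed satellite service
    PRIORITY_ACCESS = 2  # PALs, auctioned county by county
    GAA = 3              # General Authorized Access, licensed-by-rule

def must_protect(user_tier: Tier, other_tier: Tier) -> bool:
    # A user must avoid harmful interference to anyone in a higher tier.
    # Equal-tier GAA users get no protection from one another.
    return other_tier < user_tier

if __name__ == "__main__":
    # A GAA deployment must protect both incumbents and PAL holders...
    print(must_protect(Tier.GAA, Tier.INCUMBENT))        # True
    print(must_protect(Tier.GAA, Tier.PRIORITY_ACCESS))  # True
    # ...but has no claim to protection from other GAA users.
    print(must_protect(Tier.GAA, Tier.GAA))              # False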

The public interest groups’ November 2024 filing with the FCC said the unique approach to spectrum sharing “allow[s] all would-be users to operate where doing so does not threaten harmful interference” and provides a happy medium between high-powered operations in exclusively licensed spectrum bands and low-powered operations in unlicensed spectrum.

CTIA wants the ability to send higher-power signals in the band, arguing that full-power wireless transmissions would help the US match the efforts of other countries “where this spectrum has been identified as central to 5G.” The public interest groups urged the FCC to reject the mobile industry proposal to increase power levels, saying it “would disrupt and diminish the expanding diversity of GAA users and use cases that represent the central purpose of CBRS’s innovative three-tier, low-power and coordinated sharing framework.”

Pai helped carriers as FCC chair

The FCC’s original plan for PALs during the Obama administration was to auction them off for individual census tracts, small areas containing between 1,200 and 8,000 people each. During President Trump’s first term, the Pai FCC granted a CTIA request to boost the size of license areas from census tracts to counties, making it harder for small companies to win at auction.

The FCC auctioned PALs in 2020, getting bids of nearly $4.6 billion from 228 bidders. The biggest winners were Verizon, Dish Network, Charter, Comcast, and Cox.

Although Verizon uses CBRS for parts of its network, that doesn’t mean it’s on the same side as cable users in the policy debate. Verizon urged the FCC to increase the allowed power levels in the band. Dish owner EchoStar also asked for power increases. Cable companies oppose raising the power levels, with the NCTA saying that doing so would “jeopardize the continued availability of the 3.5 GHz band for lower-power operations” and harm both federal and non-federal users.

As head of CTIA, one of Pai’s main jobs is to obtain more licensed spectrum for the exclusive use of AT&T, Verizon, T-Mobile, and other mobile companies that his group represents. Pai’s Wall Street Journal op-ed said that “traffic on wireless networks is expected to triple by 2029,” driven by “AI, 5G home broadband and other emerging technologies.” Pai cited a study commissioned by CTIA to argue that “wireless networks will be unable to meet a quarter of peak demand in as little as two years.”

Spectrum for the Future countered that Pai “omits that the overwhelming share of this traffic will travel over Wi-Fi, not cellular networks.” CTIA told Ars that “the Ericsson studies we use for traffic growth projections only consider demand over commercial networks using licensed spectrum.”

Spectrum for the Future pointed to statements made by the CEOs of wireless carriers that seem to contradict Pai’s warnings of a spectrum shortage:

Mr. Pai cites a CTIA-funded study to claim “wireless networks will be unable to meet a quarter of peak demand in as little as two years.” If that’s true, then why are his biggest members’ CEOs telling Wall Street the exact opposite?

Verizon’s CEO insists he’s sitting on “a generation of spectrum”—”years and years and years” of spectrum capacity still to deploy. The CEO of Verizon’s consumer group goes even further, insisting they have “almost unlimited spectrum.” T-Mobile agrees, bragging that it has “only deployed 60 percent of our mid-band spectrum on 5G,” leaving “lots of spectrum we haven’t put into the fight yet.”

Battle could last for years

Spectrum for the Future also scoffed at Pai’s comparison of the US to China. Pai’s op-ed said that China “has accelerated its efforts to dominate in wireless and will soon boast more than four times the amount of commercial midband spectrum than the US.” Pai added that “China isn’t only deploying 5G domestically. It’s exporting its spectrum policies, its equipment vendors (such as Huawei and ZTE), and its Communist Party-centric vision of innovation to the rest of the world.”

Spectrum for the Future responded that “China’s spectrum policy goes all-in on exclusive-license frameworks, such as 5G, because they limit spectrum access to just a small handful of regime-aligned telecom companies complicit in Beijing’s censorship regime… America’s global wireless leadership, by contrast, is fueled by spectrum innovations like unlicensed Wi-Fi and CBRS spectrum sharing, whose hardware markets are dominated by American and allied companies.”

Spectrum for the Future also said that Pai and CTIA “blasting China for ‘exporting its spectrum policies’—while asking the US to adopt the same approach—is stunning hypocrisy.”

CTIA’s statement to Ars disputed Spectrum for the Future’s description. “The system of auctioning spectrum licenses was pioneered in America but is not used in China. China does, however, allocate unlicensed spectrum in a similar manner to the United States,” CTIA told Ars.

The lobbying battle and potential legal war that has Pai and CTIA lined up against the “everybody but Big Mobile” wireless coalition could last throughout Trump’s second term. Levin’s research note about the DoD proposal said, “the path from adoption to auction to making the spectrum available to the winners of an auction is likely to be at least three years.” The fight could go on a lot longer if “current licensees object and litigate,” Levin wrote.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Ex-FCC Chair Ajit Pai is now a wireless lobbyist—and enemy of cable companies Read More »

google’s-nightmare:-how-a-search-spinoff-could-remake-the-web

Google’s nightmare: How a search spinoff could remake the web


Google has shaped the Internet as we know it, and unleashing its index could change everything.

Google may be forced to license its search technology when the final antitrust ruling comes down. Credit: Aurich Lawson

Google wasn’t around for the advent of the World Wide Web, but it successfully remade the web on its own terms. Today, any website that wants to be findable has to play by Google’s rules, and after years of search dominance, the company has lost a major antitrust case that could reshape both it and the web.

The closing arguments in the case just wrapped up last week, and Google could be facing serious consequences when the ruling comes down in August. Losing Chrome would certainly change things for Google, but the Department of Justice is pursuing other remedies that could have even more lasting impacts. During his testimony, Google CEO Sundar Pichai seemed genuinely alarmed at the prospect of being forced to license Google’s search index and algorithm, the so-called data remedies in the case. He claimed this would be no better than a spinoff of Google Search. The company’s statements have sometimes derisively referred to this process as “white labeling” Google Search.

But does a white label Google Search sound so bad? Google has built an unrivaled index of the web, but the way it shows results has become increasingly frustrating. A handful of smaller players in search have tried to offer alternatives to Google’s search tools. They all have different approaches to retrieving information for you, but they agree that spinning off Google Search could change the web again. Whether or not those changes are positive depends on who you ask.

The Internet is big and noisy

As Google’s search results have changed over the years, more people have been open to other options. Some have simply moved to AI chatbots to answer their questions, hallucinations be damned. But for most people, it’s still about the 10 blue links (for now).

Because of the scale of the Internet, there are only three general web search indexes: Google, Bing, and Brave. Every search product (including AI tools) relies on one or more of these indexes to probe the web. But what does that mean?

“Generally, a search index is a service that, when given a query, is able to find relevant documents published on the Internet,” said Brave’s search head Josep Pujol.

A search index is essentially a big database, and that’s not the same as search results. According to JP Schmetz, Brave’s chief of ads, it’s entirely possible to have the best and most complete search index in the world and still show poor results for a given query. Sound like anyone you know?
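
As a rough illustration of that distinction, the core of any index is a mapping from words to the documents that contain them; ranking is a separate layer on top. This toy Python sketch uses invented documents and a naive match-count ranking that no real engine would stop at.

from collections import defaultdict

# A toy corpus standing in for crawled pages (invented examples).
DOCUMENTS = {
    "doc1": "brave search builds its own index of the web",
    "doc2": "a search index maps words to the pages that contain them",
    "doc3": "ranking decides which pages are shown first",
}

def build_index(docs):
    # Inverted index: word -> set of document IDs containing that word.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    # Score each document by how many query words it contains, then sort.
    scores = defaultdict(int)
    for word in query.lower().split():
        for doc_id in index.get(word, ()):
            scores[doc_id] += 1
    return sorted(scores, key=lambda d: (-scores[d], d))

if __name__ == "__main__":
    idx = build_index(DOCUMENTS)
    print(search(idx, "search index"))  # ['doc1', 'doc2'] -- both match both words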

Google’s technological lead has allowed it to crawl more websites than anyone else. It has all the important parts of the web, plus niche sites, abandoned blogs, sketchy copies of legitimate websites, copies of those copies, and AI-rephrased copies of the copied copies—basically everything. And the result of this Herculean digital inventory is a search experience that feels increasingly discombobulated.

“Google is running large-scale experiments in ways that no rival can because we’re effectively blinded,” said Kamyl Bazbaz, head of public affairs at DuckDuckGo, which uses the Bing index. “Google’s scale advantage fuels a powerful feedback loop of different network effects that ensure a perpetual scale and quality deficit for rivals that locks in Google’s advantage.”

The size of the index may not be the only factor that matters, though. Brave, which is perhaps best known for its browser, also has a search engine. Brave Search is the default in its browser, but you can also just go to the URL in your current browser. Unlike most other search engines, Brave doesn’t need to go to anyone else for results. Pujol suggested that Brave doesn’t need the scale of Google’s index to find what you need. And admittedly, Brave’s search results don’t feel meaningfully worse than Google’s—they may even be better when you consider the way that Google tries to keep you from clicking.

Brave’s index spans around 25 billion pages, but it leaves plenty of the web uncrawled. “We could be indexing five to 10 times more pages, but we choose not to because not all the web has signal. Most web pages are basically noise,” said Pujol.

The freemium search engine Kagi isn’t worried about having the most comprehensive index. Kagi is a meta search engine. It pulls in data from multiple indexes, like Bing and Brave, but it has a custom index of what founder and CEO Vladimir Prelovac calls the “non-commercial web.”

When you search with Kagi, some of the results (it tells you the proportion) come from its custom index of personal blogs, hobbyist sites, and other content that is poorly represented on other search engines. It’s reminiscent of the days when huge brands weren’t always clustered at the top of Google—but even these results are being pushed out of reach in favor of AI, ads, Knowledge Graph content, and other Google widgets. That’s a big part of why Kagi exists, according to Prelovac.

A Google spinoff could change everything

We’ve all noticed the changes in Google’s approach to search, and most would agree that they have made finding reliable and accurate information harder. Regardless, Google’s incredibly deep and broad index of the Internet is in demand.

Even with Bing and Brave available, companies are going to extremes to syndicate Google Search results. A cottage industry has emerged to scrape Google searches as a stand-in for an official index. These companies are violating Google’s terms, yet they appear in Google Search results themselves. Google could surely do something about this if it wanted to.

The DOJ calls Google’s mountain of data the “essential raw material” for building a general search engine, and it believes forcing the firm to license that material is key to breaking its monopoly. The sketchy syndication firms will evaporate if the DOJ’s data remedies are implemented, which would give competitors an official way to utilize Google’s index. And utilize it they will.

Google CEO Sundar Pichai decried the court’s efforts to force a “de facto divestiture” of Google’s search tech. Credit: Ryan Whitwam

According to Prelovac, this could lead to an explosion in search choices. “The whole purpose of the Sherman Act is to proliferate a healthy, competitive marketplace. Once you have access to a search index, then you can have thousands of search startups,” said Prelovac.

The Kagi founder suggested that licensing Google Search could allow entities of all sizes to have genuinely useful custom search tools. Cities could use the data to create deep, hyper-local search, and people who love cats could make a cat-specific search engine, in both cases pulling what they want from the most complete database of online content. And, of course, general search products like Kagi would be able to license Google’s tech for a “nominal fee,” as the DOJ puts it.
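
If a data remedy ever produced a licensed index API, a niche product like the cat-specific engine Prelovac imagines could be little more than a thin filter on top of it. Everything in this Python sketch is hypothetical: no such endpoint exists today, and the URL, parameters, and response shape are invented.

import requests  # third-party HTTP library, assumed installed

# Hypothetical endpoint for a licensed general-purpose index; invented.
LICENSED_INDEX_URL = "https://licensed-index.example.com/v1/search"

# Invented allowlist of cat-focused sites a niche engine might favor.
CAT_SITES = ("catcare.example.org", "felinehealth.example.net")

def cat_search(query, api_key):
    # Ask the licensed index for general results, then keep only hits
    # from the curated cat-related sites.
    response = requests.get(
        LICENSED_INDEX_URL,
        params={"q": query, "key": api_key},
        timeout=10,
    )
    response.raise_for_status()
    results = response.json().get("results", [])
    return [r for r in results if r.get("host", "").endswith(CAT_SITES)]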

Prelovac didn’t hesitate when asked if Kagi, which offers a limited number of free searches before asking users to subscribe, would integrate Google’s index. “Yes, that is something we would do,” he said. “And that’s what I believe should happen.”

There may be some drawbacks to unleashing Google’s search services. Judge Amit Mehta has expressed concern that blocking Google’s search placement deals could reduce browser choice, and there is a similar issue with the data remedies. If Google is forced to license search as an API, its few competitors in web indexing could struggle to remain afloat. In a roundabout way, giving away Google’s search tech could actually increase its influence.

The Brave team worries about how open access to Google’s search technology could impact diversity on the web. “If implemented naively, it’s a big problem,” said Brave’s ad chief JP Schmetz. “If the court forces Google to provide search at a marginal cost, it will not be possible for Bing or Brave to survive until the remedy ends.”

The landscape of AI-based search could also change. We know from testimony given during the remedy trial by OpenAI’s Nick Turley that the ChatGPT maker tried and failed to get access to Google Search to ground its AI models—it currently uses Bing. If Google were suddenly an option, you can be sure OpenAI and others would rush to connect Google’s web data to their large language models (LLMs).

The attempt to reduce Google’s power could actually grant it new monopolies in AI, according to Brave Chief Business Officer Brian Brown. “All of a sudden, you would have a single monolithic voice of truth across all the LLMs, across all the web,” Brown said.

What if you weren’t the product?

If white labeling Google does expand choice, even at the expense of other indexes, it will give more kinds of search products a chance in the market—maybe even some that shun Google’s focus on advertising. You don’t see much of that right now.

For most people, web search is and always has been a free service supported by ads. Google, Brave, DuckDuckGo, and Bing offer all the search queries you want for free because they want eyeballs. It’s been said often, but it’s true: If you’re not paying for it, you’re the product. This is an arrangement that bothers Kagi’s founder.

“For something as important as information consumption, there should not be an intermediary between me and the information, especially one that is trying to sell me something,” said Prelovac.

Kagi search results acknowledge the negative impact of today’s advertising regime. Kagi users see a warning next to results with a high number of ads and trackers. According to Prelovac, that is by far the strongest indication that a result is of low quality. That icon also lets you adjust the prevalence of such sites in your personal results. You can demote a site or completely hide it, which is a valuable option in the age of clickbait.

Kagi search gives you a lot of control. Credit: Ryan Whitwam

Kagi’s paid approach to search changes its relationship with your data. “We literally don’t need user data,” Prelovac said. “But it’s not only that we don’t need it. It’s a liability.”

Prelovac admitted that getting people to pay for search is “really hard.” Nevertheless, he believes ad-supported search is a dead end. So Kagi is planning for a future in five or 10 years when more people have realized they’re still “paying” for ad-based search with lost productivity time and personal data, he said.

We know how Google handles user data (it collects a lot of it), but what does that mean for smaller search engines like Brave and DuckDuckGo that rely on ads?

“I’m sure they mean well,” said Prelovac.

Brave said that it shields user data from advertisers, relying on first-party tracking to attribute clicks to Brave without touching the user. “They cannot retarget people later; none of that is happening,” said Brave’s JP Schmetz.

DuckDuckGo is a bit of an odd duck—it relies on Bing’s general search index, but it adds a layer of privacy tools on top. It’s free and ad-supported like Google and Brave, but the company says it takes user privacy seriously.

“Viewing ads is privacy protected by DuckDuckGo, and most ad clicks are managed by Microsoft’s ad network,” DuckDuckGo’s Kamyl Bazbaz said. He explained that DuckDuckGo has worked with Microsoft to ensure its network does not track users or create any profiles based on clicks. He added that the company has a similar privacy arrangement with TripAdvisor for travel-related ads.

It’s AI all the way down

We can’t talk about the future of search without acknowledging the artificially intelligent elephant in the room. As Google continues its shift to AI-based search, it’s tempting to think of the potential search spin-off as a way to escape that trend. However, you may find few refuges in the coming years. There’s a real possibility that search is evolving beyond the 10 blue links and toward an AI assistant model.

All non-Google search engines have AI integrations, with the most prominent being Microsoft Bing, which has a partnership with OpenAI. But smaller players have AI search features, too. The folks working on these products agree with Microsoft and Google on one important point: They see AI as inevitable.

Today’s Google alternatives all have their own take on AI Overviews, which generates responses to queries based on search results. They’re generally not as in-your-face as Google AI, though. While Google and Microsoft are intensely focused on increasing the usage of AI search, other search operators aren’t pushing for that future. They are along for the ride, though.

AI Overviews are integrated with Google’s search results, and most other players have their own version. Credit: Google

“We’re finding that some people prefer to start in chat mode and then jump into more traditional search results when needed, while others prefer the opposite,” Bazbaz said. “So we thought the best thing to do was offer both. We made it easy to move between them, and we included an off switch for those who’d like to avoid AI altogether.”

The team at Brave views AI as a core means of accessing search and one that will continue to grow. Brave generates AI answers for many searches and prominently cites sources. You can also disable Brave’s AI if you prefer. But according to search chief Josep Pujol, the move to AI search is inevitable for a pretty simple reason: It’s convenient, and people will always choose convenience. So AI is changing the web as we know it, for better or worse, because LLMs can save a smidge of time, especially for more detailed “long-tail” queries. These AI features may give you false information while they do it, but that’s not always apparent.

This is very similar to the language Google uses when discussing agentic search, although it expresses it in a more nuanced way. By understanding the task behind a query, Google hopes to provide AI answers that save people time, even if the model needs a few ticks to fan out and run multiple searches to generate a more comprehensive report on a topic. That’s probably still faster than running multiple searches and manually reviewing the results, and it could leave traditional search as an increasingly niche service, even in a world with more choices.

“Will the 10 blue links continue to exist in 10 years?” Pujol asked. “Actually, one question would be, does it even exist now? In 10 years, [search] will have evolved into more of an AI conversation behavior or even agentic. That is probably the case. What, for sure, will continue to exist is the need to search. Search is a verb, an action that you do, and whether you will do it directly or whether it will be done through an agent, it’s a search engine.”

Kagi’s Prelovac sees AI becoming the default way we access information in the long term, but his search engine doesn’t force you to use it. On Kagi, you can expand the AI box for your searches and ask follow-ups, and the AI will open automatically if you use a question mark in your search. But that’s just the start.

“You watch Star Trek, nobody’s clicking on links there—I do believe in that vision in science fiction movies,” Prelovac said. “I don’t think my daughter will be clicking links in 10 years. The only question is if the current technology will be the one that gets us there. LLMs have inherent flaws. I would even tend to say it’s likely not going to get us to Star Trek.”

If we think of AI mainly as a way to search for information, the future becomes murky. With generative AI in the driver’s seat, questions of authority and accuracy may be left to language models that often behave in unpredictable and difficult-to-understand ways. Whether we’re headed for an AI boom or bust—for continued Google dominance or a new era of choice—we’re facing fundamental changes to how we access information.

Maybe if we get those thousands of search startups, there will be a few that specialize in 10 blue links. We can only hope.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google’s nightmare: How a search spinoff could remake the web Read More »

what-solar?-what-wind?-texas-data-centers-build-their-own-gas-power-plants

What solar? What wind? Texas data centers build their own gas power plants


Data center operators are turning away from the grid to build their own power plants.

Sisters Abigail and Jennifer Lindsey stand on their rural property on May 27 outside New Braunfels, Texas, where they posted a sign in opposition to a large data center and power plant planned across the street. Credit: Dylan Baddour/Inside Climate News

NEW BRAUNFELS, Texas—Abigail Lindsey worries the days of peace and quiet might be nearing an end at the rural, wooded property where she lives with her son. On the old ranch across the street, developers want to build an expansive complex of supercomputers for artificial intelligence, plus a large, private power plant to run it.

The plant would be big enough to power a major city, with 1,200 megawatts of planned generation capacity fueled by West Texas shale gas. It will only supply the new data center, and possibly other large data centers recently proposed, down the road.

“It just sucks,” Lindsey said, sitting on her deck in the shade of tall oak trees, outside the city of New Braunfels. “They’ve come in and will completely destroy our way of life: dark skies, quiet and peaceful.”

The project is one of many like it proposed in Texas, where a frantic race to boot up energy-hungry data centers has led many developers to plan their own gas-fired power plants rather than wait for connection to the state’s public grid. Egged on by supportive government policies, this buildout promises to lock in strong gas demand for a generation to come.

The data center and power plant planned across from Lindsey’s home is a partnership between an AI startup called CloudBurst and the natural gas pipeline giant Energy Transfer. It was Energy Transfer’s first-ever contract to supply gas for a data center, but it is unlikely to be its last. In a press release, the company said it was “in discussions with a number of data center developers and expects this to be the first of many agreements.”

Previously, conventional wisdom assumed that this new generation of digital infrastructure would be powered by emissions-free energy sources like wind, solar and battery power, which have lately seen explosive growth. So far, that vision isn’t panning out, as desires to build quickly overcome concerns about sustainability.

“There is such a shortage of data center capacity and power,” said Kent Draper, chief commercial officer at Australian data center developer IREN, which has projects in West Texas. “Even the large hyperscalers are willing to turn a blind eye to their renewable goals for some period of time in order to get access.”

The Hays Energy Project is a 990 MW gas-fired power plant near San Marcos, Texas. Credit: Dylan Baddour/Inside Climate News

IREN prioritizes renewable energy for its data centers—giant warehouses full of advanced computers and high-powered cooling systems that can be configured to produce crypto currency or generate artificial intelligence. In Texas, that’s only possible because the company began work here years ago, early enough to secure a timely connection to the state’s grid, Draper said.

There were more than 2,000 active generation interconnection requests as of April 30, totaling 411,600 MW of capacity, according to grid operator ERCOT. A bill awaiting signature on Gov. Greg Abbott’s desk, S.B. 6, looks to filter out unserious large-load projects bloating the queue by imposing a $100,000 fee for interconnection studies.

Wind and solar farms require vast acreage and generate energy intermittently, so they work best as part of a diversified electrical grid that collectively provides power day and night. But as the AI gold rush gathered momentum, a surge of new project proposals has created years-long wait times to connect to the grid, prompting many developers to bypass it and build their own power supply.

Operating alone, a wind or solar farm can’t run a data center. Battery technologies still can’t store such large amounts of energy for the length of time required to provide steady, uninterrupted power for 24 hours per day, as data centers require. Small nuclear reactors have been touted as a means to meet data center demand, but the first new units remain a decade from commercial deployment, while the AI boom is here today.

Now, Draper said, gas companies approach IREN all the time, offering to quickly provide additional power generation.

Gas provides almost half of all power generation capacity in Texas, far more than any other source. But the amount of gas power in Texas has remained flat for 20 years, while wind and solar have grown sharply, according to records from the US Energy Information Administration. Facing a tidal wave of proposed AI projects, state lawmakers have taken steps to try to slow the expansion of renewable energy and position gas as the predominant supply for a new era of demand.

This buildout promises strong demand and high gas prices for a generation to come, a boon to Texas’ fossil fuel industry, the largest in the nation. It also means more air pollution and emissions of planet-warming greenhouse gases, even as the world continues to barrel past temperature records.

Texas, with 9 percent of the US population, accounted for about 15 percent of current gas-powered generation capacity in the country but 26 percent of planned future generation at the end of 2024, according to data from Global Energy Monitor. Both the current and planned shares are far more than any other state.

GEM identified 42 new gas turbine projects under construction, in development, or announced in Texas before the start of this year. None of those projects are sited at data centers. However, other projects announced since then, like CloudBurst and Energy Transfer outside New Braunfels, will include dedicated gas power plants on site at data centers.

For gas companies, the boom in artificial intelligence has quickly become an unexpected gold mine. US gas production has risen steadily over 20 years since the fracking boom began, but gas prices have tumbled since 2024, dragged down by surging supply and weak demand.

“The sudden emergence of data center demand further brightens the outlook for the renaissance in gas pricing,” said a 2025 oil and gas outlook report by East Daley Analytics, a Colorado-based energy intelligence firm. “The obvious benefit to producers is increased drilling opportunities.”

It forecast up to a 20 percent increase in US gas production by 2030, driven primarily by a growing gas export sector on the Gulf Coast. Several large export projects will finish construction in the coming years, with demand for up to 12 billion cubic feet of gas per day, the report said, while new power generation for data centers would account for 7 billion cubic feet per day of additional demand. That means profits for power providers, but also higher costs for consumers.

Natural gas, a mixture primarily composed of methane, burns much cleaner than coal but still creates air pollution, including soot, some hazardous chemicals, and greenhouse gases. Unburned methane released into the atmosphere has more than 80 times the near-term warming effect of carbon dioxide, leading some studies to conclude that ubiquitous leaks in gas supply infrastructure make it as impactful as coal to the global climate.

Credit: Dylan Baddour/Inside Climate News

It’s a power source that’s heralded for its ability to get online fast, said Ed Hirs, an energy economics lecturer at the University of Houston. But the years-long wait times for turbines have quickly become the industry’s largest constraint in an otherwise positive outlook.

“If you’re looking at a five-year lead time, that’s not going to help Alexa or Siri today,” Hirs said.

The reliance on gas power for data centers is a departure from previous thought, said Larry Fink, founder of global investment firm BlackRock, speaking to a crowd of industry executives at an oil and gas conference in Houston in March.

About four years ago, if someone said they were building a data center, they said it must be powered by renewables, he recounted. Two years ago, it was a preference.

“Today?” Fink said. “They care about power.”

Gas plants for data centers

Since the start of this year, developers have announced a flurry of gas power deals for data centers. In the small city of Abilene, the builders of Stargate, one of the world’s largest data center projects, applied for permits in January to build 360 MW of gas power generation, authorized to emit 1.6 million tons of greenhouse gases and 14 tons of hazardous air pollutants per year. Later, the company announced the acquisition of an additional 4,500 MW of gas power generation capacity.

Also in January, a startup called Sailfish announced ambitious plans for a 2,600-acre, 5,000 MW cluster of data centers in the tiny North Texas town of Tolar, population 940.

“Traditional grid interconnections simply can’t keep pace with hyperscalers’ power demands, especially as AI accelerates energy requirements,” Sailfish founder Ryan Hughes told the website Data Center Dynamics at the time. “Our on-site natural gas power islands will let customers scale quickly.”

CloudBurst and Energy Transfer announced their data center and power plant outside New Braunfels in February, and another company partnership also announced plans for a 250 MW gas plant and data center near Odessa in West Texas. In May, a developer called Tract announced a 1,500-acre, 2,000 MW data center campus with some on-site generation and some purchased gas power near the small Central Texas town of Lockhart.

Not all new data centers need gas plants. A 120 MW South Texas data center project announced in April would use entirely wind power, while an enormous, 5,000 MW megaproject outside Laredo announced in March hopes to eventually run entirely on private wind, solar, and hydrogen power (though it will use gas at first). Another collection of six data centers planned in North Texas hopes to draw 1,400 MW from the grid.

Altogether, Texas’ grid operator predicts statewide power demand will nearly double within five years, driven largely by data centers for artificial intelligence. It mirrors a similar situation unfolding across the country, according to analysis by S&P Global.

“There is huge concern about the carbon footprint of this stuff,” said Dan Stanzione, executive director of the Texas Advanced Computing Center at the University of Texas at Austin. “If we could decarbonize the power grid, then there is no carbon footprint for this.”

However, despite massive recent expansions of renewable power generation, the boom in artificial intelligence appears to be moving the country farther from, not closer to, its decarbonization goals.

Restrictions on renewable energy

Looking forward to a buildout of power supply, state lawmakers have proposed or passed new rules to support the deployment of more gas generation and slow the surging expansion of wind and solar power projects. Supporters of these bills say they aim to utilize Texas’ position as the nation’s top gas producer.

Some energy experts say the rules proposed throughout the legislative session could dismantle the state’s leadership in renewables as well as the state’s ability to provide cheap and reliable power.

“It absolutely would [slow] if not completely stop renewable energy,” said Doug Lewin, a Texas energy consultant, about one of the proposed rules in March. “That would really be extremely harmful to the Texas economy.”

While the bills deemed as “industry killers” for renewables missed key deadlines, failing to reach Abbott’s desk, they illustrate some lawmakers’ aspirations for the state’s energy industry.

One failed bill, S.B. 388, would have required every watt of new solar brought online to be accompanied by a watt of new gas. Another set of twin bills, H.B. 3356 and S.B. 715, would have forced existing wind and solar companies to buy fossil-fuel based power or connect to a battery storage resource to cover the hours the energy plants are not operating.

When the Legislature last met in 2023, it created a $5 billion public “energy fund” to finance new gas plants but not wind or solar farms. It also created a new tax abatement program that excluded wind and solar. This year’s budget added another $5 billion to double the fund.

Bluebonnet Electric Cooperative is currently completing construction on a 190 MW gas-fired peaker plant near the town of Maxwell in Caldwell County. Credit: Dylan Baddour/Inside Climate News

Among the lawmakers leading the effort to scale back the state’s deployment of renewables is state Sen. Lois Kolkhorst, a Republican from Brenham. One bill she co-sponsored, S.B. 819, aimed to create new siting rules for utility-scale renewable projects and would have required them to get permits from the Public Utility Commission that no other energy source—coal, gas or nuclear—needs. “It’s just something that is clearly meant to kneecap an industry,” Lewin said about the bill, which failed to pass.

Kolkhorst said the bill sought to balance the state’s need for power while respecting landowners across the state.

Former state Rep. John Davis, now a board member at Conservative Texans for Energy Innovation, said the session shows how renewables have become a red meat issue.

More than 20 years ago, Davis and Kolkhorst worked together in the Capitol as Texas deregulated its energy market, which encouraged renewables to enter the grid’s mix, he said. Now Davis herds sheep and goats on his family’s West Texas ranch, where seven wind turbines provide roughly 40 percent of their income.

He never could have dreamed how significant renewable energy would become for the state grid, he said. That’s why he’s disappointed with the direction the legislature is headed with renewables.

“I can’t think of anything more conservative, as a conservative, than wind and solar,” Davis said. “These are things God gave us—use them and harness them.”

A report published in April finds that targeted limitations on solar and wind development in Texas could increase electricity costs for consumers and businesses. The report, done by Aurora Energy Research for the Texas Association of Business, said restricting the further deployment of renewables would drive power prices up 14 percent by 2035.

“Texas is at a crossroads in its energy future,” said Olivier Beaufils, a top executive at Aurora Energy Research. “We need policies that support an all-of-the-above approach to meet the expected surge in power demand.”

Likewise, the commercial intelligence firm Wood Mackenzie expects the power demand from data centers to drive up prices of gas and wholesale consumer electricity.

Pollution from gas plants

Even when new power plants aren’t built on the site of data centers, they might still be developed because of demand from the server farms.

For example, in 2023, developer Marathon Digital started up a Bitcoin mine in the small town of Granbury on the site of the 1,100 MW Wolf Hollow II gas power plant. It held contracts to purchase 300 MW from the plant.

One year later, the power plant operator sought permits to install eight additional “peaker” gas turbines able to produce up to 352 MW of electricity. These small units, designed to turn on intermittently during hours of peak demand, release more pollution than typical gas turbines.

Those additional units would be approved to release 796,000 tons per year of greenhouse gases, 251 tons per year of nitrogen oxides and 56 tons per year of soot, according to permitting documents. That application is currently facing challenges from neighboring residents in state administrative courts.

About 150 miles away, neighbors are challenging another gas plant permit application in the tiny town of Blue. At 1,200 MW, the $1.2 billion plant proposed by Sandow Lakes Energy Co. would be among the largest in the state and would almost entirely serve private customers, likely including the large data centers that operate about 20 miles away.

Travis Brown and Hugh Brown, no relation, stand by a sign marking the site of a proposed 1,200 MW gas-fired power plant in their town of Blue on May 7. Credit: Dylan Baddour/Inside Climate News

This plan bothers Hugh Brown, who moved out to these green, rolling hills of rural Lee County in 1975, searching for solitude. Now he lives on 153 wooded acres that he’s turned into a sanctuary for wildlife.

“What I’ve had here is a quiet, thoughtful life,” said Brown, skinny with a long grey beard. “I like not hearing what anyone else is doing.”

He worries about the constant roar of giant cooling fans, the bright lights overnight and the air pollution. According to permitting documents, the power plant would be authorized to emit 462 tons per year of ammonia gas, 254 tons per year of nitrogen oxides, 153 tons per year of particulate matter, or soot, and almost 18 tons per year of “hazardous air pollutants,” a collection of chemicals that are known to cause cancer or other serious health impacts.

It would also be authorized to emit 3.9 million tons of greenhouse gases per year, about as much as 72,000 standard passenger vehicles.

“It would be horrendous,” Brown said. “There will be a constant roaring of gigantic fans.”

In a statement, Sandow Lakes Energy denied that the power plant will be loud. “The sound level at the nearest property line will be similar to a quiet library,” the statement said.

Sandow Lakes Energy said the plant will support the local tax base and provide hundreds of temporary construction jobs and dozens of permanent jobs. Sandow also provided several letters signed by area residents who support the plant.

“We recognize the critical need for reliable, efficient, and environmentally responsible energy production to support our region’s growth and economic development,” wrote Nathan Bland, president of the municipal development district in Rockdale, about 20 miles from the project site.

Brown stands next to a pond on his property ringed with cypress trees he planted 30 years ago. Credit: Dylan Baddour/Inside Climate News

Sandow says the plant will be connected to Texas’ public grid, and many supporting letters for the project cited a need for grid reliability. But according to permitting documents, the 1,200 MW plant will supply only 80 MW to the grid and only temporarily, with the rest going to private customers.

“Electricity will continue to be sold to the public until all of the private customers have completed projects slated to accept the power being generated,” said a permit review by the Texas Commission on Environmental Quality.

Sandow has declined to name those customers. However, the plant is part of Sandow’s massive, master-planned mixed-use development in rural Lee and Milam counties, where several energy-hungry tenants are already operating, including Riot Platforms, the largest cryptocurrency mine on the continent. The seven-building complex in Rockdale is built to use up to 700 MW, and in April, it announced the acquisition of a neighboring, 125 MW cryptocurrency mine, previously operated by Rhodium. Another mine by Bitmain, also one of the world’s largest Bitcoin companies, has 560 MW of operating capacity with plans to add 180 more in 2026.

In April, residents of Blue gathered at the volunteer fire department building for a public meeting with Texas regulators and Sandow to discuss questions and concerns over the project. Brown, owner of the wildlife sanctuary, spoke into a microphone and noted that the power plant was placed at the far edge of Sandow’s 33,000-acre development, 20 miles from the industrial complex in Rockdale but near many homes in Blue.

“You don’t want to put it up into the middle of your property where you could deal with the negative consequences,” Brown said, speaking to the developers. “So it looks to me like you are wanting to make money, in the process of which you want to strew grief in your path and make us bear the environmental costs of your profit.”

Inside Climate News’ Peter Aldhous contributed to this report.

This story originally appeared on Inside Climate News.


What solar? What wind? Texas data centers build their own gas power plants Read More »

meta-and-yandex-are-de-anonymizing-android-users’-web-browsing-identifiers

Meta and Yandex are de-anonymizing Android users’ web browsing identifiers


Abuse allows Meta and Yandex to attach persistent identifiers to detailed browsing histories.

Credit: Aurich Lawson | Getty Images

Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.

The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.

A blatant violation

“One of the fundamental security principles that exists in the web, as well as the mobile system, is called sandboxing,” Narseo Vallina-Rodriguez, one of the researchers behind the discovery, said in an interview. “You run everything in a sandbox, and there is no interaction within different elements running on it. What this attack vector allows is to break the sandbox that exists between the mobile context and the web context. The channel that exists allowed the Android system to communicate what happens in the browser with the identity running in the mobile app.”

The bypass—which Yandex began in 2017 and Meta started last September—allows the companies to pass cookies or other identifiers from Firefox and Chromium-based browsers to native Android apps for Facebook, Instagram, and various Yandex apps. The companies can then tie that vast browsing history to the account holder logged into the app.

This abuse has been observed only in Android, and evidence suggests that the Meta Pixel and Yandex Metrica target only Android users. The researchers say it may be technically feasible to target iOS because browsers on that platform allow developers to programmatically establish localhost connections that apps can monitor on local ports.

In contrast to iOS, however, Android imposes fewer controls on localhost communications and on the background execution of mobile apps, the researchers said, while iOS also implements stricter app store vetting processes to limit such abuses. This overly permissive design allows Meta Pixel and Yandex Metrica to send web requests carrying web tracking identifiers to specific local ports that are continuously monitored by the Facebook, Instagram, and Yandex apps. These apps can then link pseudonymous web identities with actual user identities, even in private browsing modes, effectively de-anonymizing users’ browsing habits on sites containing these trackers.

Meta Pixel and Yandex Metrica are analytics scripts designed to help advertisers measure the effectiveness of their campaigns; they are estimated to be installed on 5.8 million and 3 million sites, respectively.

Meta and Yandex achieve the bypass by abusing basic functionality built into modern mobile browsers that allows browser-to-native app communications. The functionality lets browsers send web requests to local Android ports to establish various services, including media connections through the RTC protocol, file sharing, and developer debugging.

A conceptual diagram representing the exchange of identifiers between the web trackers running on the browser context and native Facebook, Instagram, and Yandex apps for Android.

While the technical underpinnings differ, both Meta Pixel and Yandex Metrica are performing a “weird protocol misuse” to gain unvetted access that Android provides to localhost ports on the 127.0.0.1 IP address. Browsers access these ports without user notification. Facebook, Instagram, and Yandex native apps silently listen on those ports, copy identifiers in real time, and link them to the user logged into the app.
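To make the mechanism concrete, here is a deliberately simplified TypeScript sketch of the pattern the researchers describe, written for a browser context. It is not the actual Meta Pixel or Yandex Metrica code; the port number comes from the researchers’ report, while the helper name and payload format are assumptions.

```typescript
// Illustrative sketch only: not the real tracker code. Page JavaScript fires a
// request at a loopback port (12387 is one of the ports named in the report);
// any native app listening there can read the identifier and tie it to the
// logged-in account. The helper and payload shape are assumptions.

function readCookie(name: string): string | undefined {
  return document.cookie
    .split("; ")
    .find((c) => c.startsWith(`${name}=`))
    ?.split("=")[1];
}

async function leakIdToLocalhost(): Promise<void> {
  const fbp = readCookie("_fbp"); // the ephemeral web identifier set by the tracker
  if (!fbp) return;

  // The request never leaves the device, so no network observer sees it,
  // and the browser gives the user no indication that it happened.
  await fetch(`http://127.0.0.1:12387/?id=${encodeURIComponent(fbp)}`, {
    mode: "no-cors", // the response is irrelevant; the request itself is the leak
  }).catch(() => {
    /* no app listening on the port: fail silently, as a tracker would */
  });
}

leakIdToLocalhost();
```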

A representative for Google said the behavior violates the terms of service for its Play marketplace and the privacy expectations of Android users.

“The developers in this report are using capabilities present in many browsers across iOS and Android in unintended ways that blatantly violate our security and privacy principles,” the representative said, referring to the people who write the Meta Pixel and Yandex Metrica JavaScript. “We’ve already implemented changes to mitigate these invasive techniques and have opened our own investigation and are directly in touch with the parties.”

Meta didn’t answer emailed questions for this article, but provided the following statement: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”

Yandex representatives didn’t answer an email seeking comment.

How Meta and Yandex de-anonymize Android users

Meta Pixel developers have abused various protocols to implement the covert listening since the practice began last September. They started by having the tracking script send HTTP requests to port 12387. A month later, Meta Pixel stopped sending this data, even though the Facebook and Instagram apps continued to monitor the port.

In November, Meta Pixel switched to a new method that invoked WebSocket, a protocol for two-way communications, over port 12387.

That same month, Meta Pixel also deployed a new method that used WebRTC, a real-time peer-to-peer communication protocol commonly used for making audio or video calls in the browser. This method used a complicated process known as SDP munging, a technique for JavaScript code to modify Session Description Protocol data before it’s sent. Still in use today, the SDP munging by Meta Pixel inserts key _fbp cookie content into fields meant for connection information. This causes the browser to send that data as part of a STUN request to the Android local host, where the Facebook or Instagram app can read it and link it to the user.
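For readers wondering what “SDP munging” looks like in practice, the sketch below shows the general pattern: JavaScript edits the SDP string between createOffer and setLocalDescription so that cookie content rides inside a connection field and later surfaces in STUN traffic a local listener can observe. This is a minimal, hypothetical illustration, not Meta’s code; the choice of the ice-ufrag attribute is an assumed example, and the steps that steer ICE traffic toward the loopback port are omitted.

```typescript
// Minimal, hypothetical SDP munging sketch: not the actual Meta Pixel code.
// Real-world details (which SDP field is abused, how ICE is steered toward the
// loopback address) are omitted or assumed.
async function mungeAndLeak(fbpValue: string): Promise<void> {
  const pc = new RTCPeerConnection();
  pc.createDataChannel("x"); // guarantees the offer carries ICE credentials

  const offer = await pc.createOffer();

  // "Munging": rewrite the SDP string before handing it back to the browser,
  // smuggling tracker data into a field meant for connection information.
  const munged = (offer.sdp ?? "").replace(
    /^a=ice-ufrag:.*$/m,
    `a=ice-ufrag:${fbpValue}`
  );

  await pc.setLocalDescription({ type: "offer", sdp: munged });
  // ICE gathering now emits STUN binding requests that carry the munged value;
  // a process bound to the targeted local port can read it off the wire.
}
```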

In May, a beta version of Chrome introduced a mitigation that blocked the type of SDP munging that Meta Pixel used. Within days, Meta Pixel circumvented the mitigation by adding a new method that swapped the STUN requests for TURN requests.

In a post, the researchers provided a detailed description of how the _fbp cookie travels from a website to the native app and, from there, to the Meta server:

1. The user opens the native Facebook or Instagram app, which eventually is sent to the background and creates a background service to listen for incoming traffic on a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580–12585). Users must be logged-in with their credentials on the apps.

2. The user opens their browser and visits a website integrating the Meta Pixel.

3. At this stage, some websites wait for users’ consent before embedding Meta Pixel. In our measurements of the top 100K website homepages, we found websites that require consent to be a minority (more than 75% of affected sites does not require user consent)…

4. The Meta Pixel script is loaded and the _fbp cookie is sent to the native Instagram or Facebook app via WebRTC (STUN) SDP Munging.

5. The Meta Pixel script also sends the _fbp value in a request to https://www.facebook.com/tr along with other parameters such as page URL (dl), website and browser metadata, and the event type (ev) (e.g., PageView, AddToCart, Donate, Purchase).

6. The Facebook or Instagram apps receive the _fbp cookie from the Meta JavaScripts running on the browser and transmits it to the GraphQL endpoint (https://graph[.]facebook[.]com/graphql) along with other persistent user identifiers, linking users’ fbp ID (web visit) with their Facebook or Instagram account.
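The app side of the channel (step 1 above) amounts to little more than a socket server bound to a loopback port. The Node/TypeScript stand-in below is meant only to show how small that piece is; the real listeners live inside the Facebook, Instagram, and Yandex Android apps, not in Node.

```typescript
// Conceptual stand-in for the listener described in step 1; not Android code.
import { createServer } from "node:net";

const PORT = 12387; // one of the TCP ports named in the researchers' write-up

createServer((socket) => {
  socket.on("data", (chunk) => {
    // In the abuse described above, this payload carries the _fbp identifier,
    // which the app can then attach to the logged-in account.
    console.log("received from a local web page:", chunk.toString());
  });
}).listen(PORT, "127.0.0.1", () => {
  console.log(`listening on 127.0.0.1:${PORT}`);
});
```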

Detailed flow of the way the Meta Pixel leaks the _fbp cookie from Android browsers to its Facebook and Instagram apps.

The first known instance of Yandex Metrica linking websites visited in Android browsers to app identities was in May 2017, when the tracker started sending HTTP requests to local ports 29009 and 30102. In May 2018, Yandex Metrica also began sending the data through HTTPS to ports 29010 and 30103. Both methods remained in place as of publication time.

An overview of Yandex identifier sharing

A timeline of web history tracking by Meta and Yandex

Some browsers for Android have blocked the abusive JavaScript in trackers. DuckDuckGo, for instance, was already blocking domains and IP addresses associated with the trackers, preventing the browser from sending any identifiers to Meta. The browser also blocked most of the domains associated with Yandex Metrica. After the researchers notified DuckDuckGo of the incomplete blacklist, developers added the missing addresses.

The Brave browser, meanwhile, also blocked the sharing of identifiers, thanks to its extensive blocklists and an existing mitigation that blocks requests to localhost without explicit user consent. Vivaldi, another Chromium-based browser, forwards the identifiers to local Android ports when its default privacy setting is in place. Changing the setting to block trackers appears to thwart browsing history leakage, the researchers said.
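The gist of those mitigations can be expressed in a few lines. The sketch below is hypothetical and not drawn from any browser’s source; the two hostnames are simply examples of tracker domains, and real blocklists are far larger and updated constantly.

```typescript
// Hypothetical request filter in the spirit of the mitigations described above.
// Not DuckDuckGo's, Brave's, or Vivaldi's actual code; example hostnames only.
const TRACKER_HOSTS = new Set(["connect.facebook.net", "mc.yandex.ru"]);

function shouldBlockRequest(url: string, userAllowsLocalhost = false): boolean {
  const { hostname } = new URL(url);
  const isLoopback = hostname === "127.0.0.1" || hostname === "localhost";
  // Block known tracker hosts outright, and block loopback-bound requests
  // unless the user has explicitly opted in to allowing them.
  return TRACKER_HOSTS.has(hostname) || (isLoopback && !userAllowsLocalhost);
}

// Example: a page script trying to reach a local port gets stopped.
console.log(shouldBlockRequest("http://127.0.0.1:12387/?id=fb.1.123")); // true
```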

Tracking blocker settings in Vivaldi for Android.

There’s got to be a better way

The various remedies DuckDuckGo, Brave, Vivaldi, and Chrome have put in place are working as intended, but the researchers caution they could become ineffective at any time.

“Any browser doing blocklisting will likely enter into a constant arms race, and it’s just a partial solution,” Vallina Rodriguez said of the current mitigations. “Creating effective blocklists is hard, and browser makers will need to constantly monitor the use of this type of capability to detect other hostnames potentially abusing localhost channels and then updating their blocklists accordingly.”

He continued:

While this solution works once you know the hostnames doing that, it’s not the right way of mitigating this issue, as trackers may find ways of accessing this capability (e.g., through more ephemeral hostnames). A long-term solution should go through the design and development of privacy and security controls for localhost channels, so that users can be aware of this type of communication and potentially enforce some control or limit this use (e.g., a permission or some similar user notifications).

Chrome and most other Chromium-based browsers executed the JavaScript as Meta and Yandex intended. Firefox did as well, although for reasons that aren’t clear, the browser was not able to successfully perform the SDP munging specified in later versions of the code. After blocking the STUN variant of SDP munging in the early May beta release, a production version of Chrome released two weeks ago began blocking both the STUN and TURN variants. Other Chromium-based browsers are likely to implement the same blocks in the coming weeks. A representative for Firefox-maker Mozilla said the organization prioritizes user privacy and is taking the report seriously.

“We are actively investigating the reported behavior, and working to fully understand its technical details and implications,” Mozilla said in an email. “Based on what we’ve seen so far, we consider these to be severe violations of our anti-tracking policies, and are assessing solutions to protect against these new tracking techniques.”

The researchers warn that the current fixes are so specific to the code in the Meta and Yandex trackers that it would be easy to bypass them with a simple update.

“They know that if someone else comes in and tries a different port number, they may bypass this protection,” said Gunes Acar, the researcher behind the initial discovery, referring to the Chrome developer team at Google. “But our understanding is they want to send this message that they will not tolerate this form of abuse.”

Fellow researcher Vallina-Rodriguez said the more comprehensive way to prevent the abuse is for Android to overhaul the way it handles access to local ports.

“The fundamental issue is that the access to the local host sockets is completely uncontrolled on Android,” he explained. “There’s no way for users to prevent this kind of communication on their devices. Because of the dynamic nature of JavaScript code and the difficulty to keep blocklists up to date, the right way of blocking this persistently is by limiting this type of access at the mobile platform and browser level, including stricter platform policies to limit abuse.”

Got consent?

The researchers who made this discovery are:

  • Aniketh Girish, PhD student at IMDEA Networks
  • Gunes Acar, assistant professor in Radboud University’s Digital Security Group & iHub
  • Narseo Vallina-Rodriguez, associate professor at IMDEA Networks
  • Nipuna Weerasekara, PhD student at IMDEA Networks
  • Tim Vlummens, PhD student at COSIC, KU Leuven

Acar said he first noticed Meta Pixel accessing local ports while visiting his own university’s website.

There’s no indication that Meta or Yandex has disclosed the tracking to either websites hosting the trackers or end users who visit those sites. Developer forums show that many websites using Meta Pixel were caught off guard when the scripts began connecting to local ports.

“Since 5th September, our internal JS error tracking has been flagging failed fetch requests to localhost: 12387,” one developer wrote. “No changes have been made on our side, and the existing Facebook tracking pixel we use loads via Google Tag Manager.”

“Is there some way I can disable this?” another developer encountering the unexplained local port access asked.

It’s unclear whether browser-to-native-app tracking violates any privacy laws in various countries. Both Meta and companies hosting its Meta Pixel, however, have faced a raft of lawsuits in recent years alleging that the data collected violates privacy statutes. A research paper from 2023 found that Meta Pixel, then called the Facebook Pixel, “tracks a wide range of user activities on websites with alarming detail, especially on websites classified as sensitive categories under GDPR,” the abbreviation for the European Union’s General Data Protection Regulation.

So far, Google has provided no indication that it plans to redesign the way Android handles local port access. For now, the most comprehensive protection against Meta Pixel and Yandex Metrica tracking is to refrain from installing the Facebook, Instagram, or Yandex apps on Android devices.


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

Meta and Yandex are de-anonymizing Android users’ web browsing identifiers Read More »

breaking-down-why-apple-tvs-are-privacy-advocates’-go-to-streaming-device

Breaking down why Apple TVs are privacy advocates’ go-to streaming device


Using the Apple TV app or an Apple account means giving Apple more data, though.

Credit: Aurich Lawson | Getty Images

Every time I write an article about the escalating advertising and tracking on today’s TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven.

“Just disconnect your TV from the Internet and use an Apple TV box.”

That’s the common guidance you’ll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we’ve consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers.

But how private are Apple TV boxes, really? Apple TVs don’t use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple?

In this article, we’ll delve into what makes the Apple TV’s privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever.

Apple TV boxes limit tracking out of the box

One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple’s data and privacy policies. Also off by default is the boxes’ ability to send voice input data to Apple.

Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be.

Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users.

“If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier (IDFA), which is often used to track,” Apple says. “The app is also not permitted to track your activity using other information that identifies you or your device, like your email address.”

Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default.

The Apple TV also lets users control which apps can access the set-top box’s Bluetooth functionality, photos, music, and HomeKit data (if applicable), and the remote’s microphone.

“Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data,” said RJ Cross, director of the consumer privacy program at the Public Interest Research Group (PIRG). “I personally trust them more with my data than other tech companies.”

What if you share analytics data?

If you allow your Apple TV to share analytics data with Apple or app developers, that data won’t be personally identifiable, Apple says. Any collected personal data is “not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy,” Apple says.

Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation (PDF), Apple details its use of differential privacy:

The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don’t receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees.
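As a rough illustration of the local differential privacy idea Apple describes (noise is added on the device, and only aggregates are useful to the server), here is a toy randomized-response sketch. It is not Apple’s mechanism, which uses more sophisticated algorithms, but it shows why a single noisy report reveals little while many reports still yield accurate statistics.

```typescript
// Toy local differential privacy via randomized response; not Apple's algorithm.
// Each device perturbs its own answer before sending it anywhere.
function randomizedResponse(truth: boolean, epsilon: number): boolean {
  const q = Math.exp(epsilon) / (Math.exp(epsilon) + 1); // probability of answering truthfully
  return Math.random() < q ? truth : !truth;
}

// The server never sees raw answers, but it can still estimate the population rate
// by inverting the known noise model: observed = q*t + (1 - q)*(1 - t).
function estimateTrueRate(noisyReports: boolean[], epsilon: number): number {
  const q = Math.exp(epsilon) / (Math.exp(epsilon) + 1);
  const observed = noisyReports.filter(Boolean).length / noisyReports.length;
  return (observed - (1 - q)) / (2 * q - 1);
}

// Example: 100,000 simulated devices, 30% of which truly have some property.
const reports = Array.from({ length: 100_000 }, () =>
  randomizedResponse(Math.random() < 0.3, 1.0)
);
console.log(estimateTrueRate(reports, 1.0).toFixed(3)); // prints a value close to 0.300
```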

What if you use an Apple account with your Apple TV?

Another factor to consider is Apple’s privacy policy regarding Apple accounts, formerly Apple IDs.

Apple support documentation says you “need” an Apple account to use an Apple TV, but you can use the hardware without one. Still, it’s common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes.

So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as “data about your activity on and use of” Apple offerings, including “app launches within our services…; browsing history; search history; [and] product interaction.”

Other types of data Apple may collect from Apple accounts include transaction information (Apple says this is “data about purchases of Apple products and services or transactions facilitated by Apple, including purchases on Apple platforms”), account information (“including email address, devices registered, account status, and age”), device information (including serial number and browser type), contact information (including physical address and phone number), and payment information (including bank details). None of that is surprising considering the type of data needed to make an Apple account work.

Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. And if you use the same Apple account across multiple devices, the data Apple has collected from, say, your iPhone activity also gets tied to you as an Apple TV user.

A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won’t be able to use the Apple TV app, one of the device’s key features.

Data collection via the Apple TV app

You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of many (but not all) popular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is.

As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app’s privacy policy, “information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices.” That all makes sense for ensuring that the app remembers things like which episode of Severance you’re on across devices.

Apple collects other data, though, that isn’t necessary for functionality. It says it gathers data on things like the “features you use (for example, Continue Watching or Library),” content pages you view, how you interact with notifications, and approximate location information (that Apple says doesn’t identify users) to help improve the app.

Additionally, Apple tracks the terms you search for within the app, per its policy:

We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model.

This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate.

Data collected from the Apple TV app used for ads

By default, the Apple TV app also tracks “what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app” to make personalized content recommendations. Content recommendations aren’t ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you.

You can disable the Apple TV app’s personalized recommendations, but it’s a little harder than you might expect since you can’t do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off.

The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project (STOP), noted to Ars that even though Apple TV users can opt out of personalized content recommendations, “many will not realize they can.”

Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can “be used to serve geographically relevant ads,” according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes.

Apple’s tvOS doesn’t have integrated ads. For comparison, some TV OSes, like Roku OS and LG’s webOS, show ads on the OS’s home screen and/or when showing screensavers.

But data gathered from the Apple TV app can still help Apple’s advertising efforts. This can happen if you allow personalized ads in other Apple apps that serve targeted ads, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, “including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you,” the Apple TV app privacy policy says.

Apple also provides third-party advertisers and strategic partners with “non-personal data” gathered from the Apple TV app:

We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple’s behalf to display on the App Store and Apple News and Stocks.

Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, “and improve their associated products and services,” Apple says.

Apple’s policy notes:

For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographics[,] such as age group and gender (which may be inferred from information such as your name and salutation in your Apple Account), to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative work [and] meet royalty and accounting requirements.

When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app.

All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies “in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements.”

Warner Bros. Discovery says it discloses information about Max viewers “with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests.” And Disney+ users have Nielsen tracking on by default.

What if you use Siri?

You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit.

According to the privacy policy accessible in Apple TV boxes’ settings, Apple boxes automatically send all Siri requests to Apple’s servers. If you opt into using Siri data to “Improve Siri and Dictation,” Apple will store your audio data. If you opt out, audio data won’t be stored, but per the policy:

In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple.

Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn’t store the audio but may store transcriptions of the audio.

If you opt to “Improve Siri and Dictation,” Apple says your history of voice requests isn’t tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option.

The policy states:

Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. After six months, your request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products …

Apple may also review a subset of the transcripts of your interactions and this … may be kept beyond two years for the ongoing improvements of products and services.

Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn’t always adhered to that commitment. In January, Apple agreed to pay $95 million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. In 2019, contractors reviewing Siri-gathered audio reported hearing private conversations and recordings of people having sex.

Outside of Apple, we’ve seen voice request data used questionably, including in criminal trials and by corporate employees. Siri and dictation data also represent additional ways a person’s Apple TV usage might be unexpectedly analyzed to fuel Apple’s business.

Automatic content recognition

Apple TVs aren’t preloaded with automatic content recognition (ACR), an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point.

Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it’s technically possible for Apple to add ACR to already-purchased Apple boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs. (Enswers was acquired by Gracenote, which Nielsen now owns.)

In general, though, there are challenges to adding ACR to hardware that people already own, Li explained:

Everyone believes, in theory, you can add ACR anywhere you want at any time because it’s software, but because of the way [hardware is] architected… the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations.

Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, “including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation.”

Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route.

If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV’s biggest draws.

However, Apple reportedly has a growing interest in advertising to streaming subscribers. The Apple TV+ streaming service doesn’t currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom’s ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph.

Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. Further, Apple TV+ is one of the few streaming services to remain ad-free, and it has reportedly been losing Apple $1 billion per year since its launch.

One day soon, Apple may have much more reason to care about advertising in streaming and about tracking the activities of people who use its streaming offerings. That has implications for Apple TV box users.

“The more Apple creeps into the targeted ads space, the less I’ll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads,” PIRG’s Cross said.

Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers.

“Apple’s commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike,” Apple’s advertising guide for buying ads on Apple News and Stocks reads.

The most private streaming gadget

It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs (which are incredibly hard to find these days). And if Apple follows its own policies, much of the data it gathers should be kept in-house.

However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple’s streaming hardware and software settings prioritize privacy by default, some advocates believe there’s room for improvement.

For example, STOP’s Maestro said:

Unlike in the [European Union], where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple’s servers. Users are left with little way to verify those privacy promises.

Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. “Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple’s public commitment to privacy,” Maestro said.

There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money.

As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren’t totally incapable of tracking you. But they’re still the best recommendation for streaming users seeking hardware with more privacy and fewer ads.

Photo of Scharon Harding

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Breaking down why Apple TVs are privacy advocates’ go-to streaming device Read More »

my-3d-printing-journey,-part-2:-printing-upgrades-and-making-mistakes

My 3D printing journey, part 2: Printing upgrades and making mistakes


3D-printing new parts for the A1 taught me a lot about plastic, and other things.

Different plastic filament is good for different things (and some kinds don’t work well with the A1 and other open-bed printers). Credit: Andrew Cunningham

Different plastic filament is good for different things (and some kinds don’t work well with the A1 and other open-bed printers). Credit: Andrew Cunningham

For the last three months or so, I’ve been learning to use (and love) a Bambu Lab A1 3D printer, a big, loud machine that sits on my desk and turns pictures on my computer screen into real-world objects.

In the first part of my series about diving into the wild world of 3D printers, I covered what I’d learned about the different types of 3D printers, some useful settings in the Bambu Studio app (which should also be broadly useful to know about no matter what printer you use), and some initial, magical-feeling successes in downloading files that I turned into useful physical items using a few feet of plastic filament and a couple hours of time.

For this second part, I’m focusing on what I learned when I embarked on my first major project—printing upgrade parts for the A1 with the A1. It was here that I made some of my first big 3D printing mistakes, mistakes that prompted me to read up on the different kinds of 3D printer filament, what each type of filament is good for, and which types the A1 is (and is not) good at handling as an un-enclosed, bed-slinging printer.

As with the information in part one, I share this with you not because it is groundbreaking but because there’s a lot of information out there, and it can be an intimidating hobby to break into. By sharing what I learned and what I found useful early in my journey, I hope I can help other people who have been debating whether to take the plunge.

Adventures in recursion: 3D-printing 3D printer parts

A display cover for the A1’s screen will protect it from wear and tear and allow you to easily hide it when you want to. Credit: Andrew Cunningham

My very first project was a holder for my office’s ceiling fan remote. My second, similarly, was a wall-mounted holder for the Xbox gamepad and wired headset I use with my gaming PC, which normally just had to float around loose on my desk when I wasn’t using them.

These were both relatively quick, simple prints that showed the printer was working like it was supposed to—all of the built-in temperature settings, the textured PEI plate, and the printer’s calibration and auto-bed-leveling routines added up to make simple prints as dead-easy as Bambu promised they would be. It made me eager to seek out other prints, including stuff on the MakerWorld site I hadn’t thought to try yet.

The first problem I had? Well, as part of its pre-print warmup routine, the A1 spits a couple of grams of filament out and tosses it to the side. This is totally normal—it’s called “purging,” and it gets rid of filament that’s gone brittle from being heated too long. If you’re changing colors, it also clears any last bits of the previous color that are still in the nozzle. But it didn’t seem particularly elegant to have the printer eternally launching little knots of plastic onto my desk.

The A1’s default design just ejects little molten wads of plastic all over your desk when it’s changing or purging filament. This is one of many waste bin (or “poop bucket”) designs made to catch and store these bits and pieces. Credit: Andrew Cunningham

The solution to this was to 3D-print a purging bucket for the A1 (also referred to, of course, as a “poop bucket” or “poop chute”). In fact, there are tons of purging buckets designed specifically for the A1 because it’s a fairly popular budget model and there’s nothing stopping people from making parts that fit it like a glove.

I printed this bucket, as well as an additional little bracket that would “catch” the purged filament and make sure it fell into the bucket. And this opened the door to my first major printing project: printing additional parts for the printer itself.

I took to YouTube and watched a couple of videos on the topic because I’m apparently far from the first person who has had this reaction to the A1. After much watching and reading, here are the parts I ended up printing:

  • Bambu Lab AMS Lite Top Mount and Z-Axis Stiffener: The Lite version of Bambu’s Automatic Material System (AMS) is the optional accessory that enables multi-color printing for the A1. And like the A1 itself, it’s a lower-cost, open-air version of the AMS that works with Bambu’s more expensive printers.
    • The AMS Lite comes with a stand that you can use to set it next to the A1, but that’s more horizontal space than I had to spare. This top mount is Bambu’s official solution for putting the AMS Lite on top of the A1 instead, saving you some space.
    • The top mount actually has two important components: the top mount itself and a “Z-Axis Stiffener,” a pair of legs that extend behind the A1 to make the whole thing more stable on a desk or table. Bambu already recommends 195 mm (or 7.7 inches) of “safety margin” behind the A1 to give the bed room to sling, so if you’ve left that much space behind the printer, you probably have enough space for these legs.
    • After installing the stiffener legs, the top mount, and a fully loaded AMS Lite, it’s probably a good idea to run the printer’s calibration cycle again to account for the difference in balance.
    • You may want to print the top mount itself with PETG, which is a bit stronger and more impact-resistant than PLA plastic.
  • A1 Purge Waste Bin and Deflector, by jimbobble. There are approximately 1 million different A1 purge bucket designs, each with its own appeal. But this one is large and simple and includes a version that is compatible with the Z-Axis Stiffener legs.
  • A1 rectangular fan cover, by Arzhang Lotfi. There are a bunch of options for this, including fun ones, but you can find dozens of simple grille designs that snap in place and protect the fan on the A1’s print head.
  • Bambu A1 Adjustable Camera Holder, by mlodybuk: This one’s a little more complicated because it does require some potentially warranty-voiding disassembly of components. The A1’s camera is also pretty awful no matter how you position it, with sub-1 FPS video that’s just barely suitable for checking on whether a print has been ruined or not.
    • But if you want to use it, I’d highly recommend moving it from the default location, which is low down and at an odd angle that doesn’t give you a good view of your print.
    • This print includes a redesigned cover for the camera area, a filler piece that plugs the hole where the camera used to be (keeping dust and other things from getting inside the printer), and a small camera receptacle that snaps in place onto the new cover and can be turned up and down.
    • If you’re not comfortable modding your machine like this, the camera is livable as-is, but this got me a much better vantage point on my prints.

With a little effort, this print allows you to reposition the A1’s camera, giving you a better angle on your prints and making it adjustable. Credit: Andrew Cunningham

  • A1 Screen Protector New Release, by Rox3D: Not strictly necessary, but an unobtrusive way to protect (and to “turn off”) the A1’s built-in LCD screen when it’s not in use. The hinge mechanism of this print is stiff enough that the screen cover can be lifted partway without flopping back down.
  • A1 X-Axis Cover, by Moria3DPStudio: Another only-if-you-want-it print, this foldable cover slides over the A1’s exposed rail when you’re not using it. Just make sure you take it back off before you try to print anything—it won’t break anything, but the printer won’t be happy with you. Not that I’m speaking from experience.
  • Ultimate Filament Spool Enclosure for the AMS Lite, by Supergrapher: Here’s the big one, and it’s a true learning experience for all kinds of things. The regular Bambu AMS system for the P- and X-series printers is enclosed, which is useful not just for keeping dust from settling on your filament spools but for controlling humidity and keeping spools you’ve dried from re-absorbing moisture. There’s no first-party enclosure for the AMS Lite, but this user-created enclosure is flexible and popular, and it can be used to enclose the AMS Lite whether you have it mounted on top of or to the side of the A1. The small plastic clips that keep the lids on are mildly irritating to pop on and off, relative to a lid that you can just lift up and put back down, but the benefits are worth it.
  • 3D Disc for A1 – “Pokéball,” by BS 3D Print: One of the few purely cosmetic parts I’ve printed. The little spinning bit on the front of the A1’s print head shows you when the filament is being extruded, but it’s not a functional part. This is just one of dozens and dozens of cosmetic replacements for it if you choose to pop it off.
  • Sturdy Modular Filament Spool Rack, by Antiphrasis: Not technically an upgrade for the A1, but an easy recommendation for any new 3D printer owner who suddenly finds themselves with a rainbow of a dozen-plus different filaments to try. Each shelf here holds three spools of filament, and you can print additional shelves to spread them out horizontally, vertically, or both, so you can make something that exactly meets your needs and fits your space. A two-by-three shelf gave me room for 18 spools, and I can print more if I need them.

There are some things that others recommend for the A1 that I haven’t printed yet—mainly guides for cables, vibration dampeners for the base, and things to reinforce areas of possible stress for the print head and the A1’s loose, dangly wire.

Part of the fun is figuring out what your problems are, identifying prints that could help solve the problem, and then trying them out to see if they do solve your problem. (The parts have also given my A1 its purple accents, since a bright purple roll of filament was one of the first ones my 5-year-old wanted to get.)

Early mistakes

The “Z-Axis stiffener,” an extra set of legs for the A1 that Bambu recommends if you top-mount your AMS Lite. This took me three tries to print, mainly because of my own inexperience. Credit: Andrew Cunningham

Printing each of these parts gave me a solid crash course into common pitfalls and rookie mistakes.

For example, did you know that ABS plastic doesn’t print well on an open-bed printer? Well, it doesn’t! But I didn’t know that when I bought a spool of ABS to print some parts that I wanted to be sturdier and more resistant to wear and tear. I’d open the window and leave the room to deal with the fumes and be fine, I figured.

I tried printing the Z-Axis Stiffener supports for the A1 in ABS, but they went wonky. Lower bed and (especially) ambient temperatures tend to make ABS warp and curl upward, and extrusion-based printers rely on precision to do their thing. Once a layer—any layer!—gets screwed up during a print, that will reverberate throughout the entire rest of the object, which is why my first attempt at supports ended up being totally unusable.

Large ABS plastic prints are tough to do on an open-bed printer. You can see here how that lower-left corner peeled upward slightly from the print bed, and any unevenness in the foundation of your print is going to reverberate in the layers that are higher up. Credit: Andrew Cunningham

I then tried printing another set of supports with PLA plastic, ones that claimed to maintain their sturdiness while using less infill (that is, how much plastic is actually used inside the print to give it rigidity—around 15 percent is typically a good balance between rigidity and wasting plastic that you’ll never see, though there may be times when you want more or less). I’m still not sure what I did, but the prints I got were squishy and crunchy to the touch, a clear sign that the amount and/or type of infill wasn’t sufficient. It wasn’t until my third try—the original Bambu-made supports, in PLA instead of ABS—that I made supports I could actually use.

An attempt at printing the same part with PLA, but with insufficient infill plastic that left my surfaces rough and the interiors fragile and crunchy. I canceled this one about halfway through when it became clear that something wasn’t right. Credit: Andrew Cunningham

After much reading and research, I learned that for most things, PETG plastic is what you use if you want to make sturdier (and outdoor-friendly) prints on an open bed. Great! I decided I’d print most of the AMS Lite enclosure with clear PETG filament to make something durable that I could also see through when I wanted to see how much filament was left on a given spool.

This ended up being a tricky first experiment with PETG plastic for three different reasons. For one, printing “clear” PETG that actually looks clear is best done with a larger nozzle (Bambu offers 0.2 mm, 0.6 mm, and 0.8 mm nozzles for the A1, in addition to the default 0.4 mm) because you can get the same work done in fewer layers, and the more layers you have, the less “clear” that clear plastic will be. Fine!

The Inland-brand clear PETG+ I bought from our local Micro Center also didn’t love the default temperature settings for generic PETG that the A1 uses, both for the heatbed and the filament itself; plastic flowed unevenly from the nozzle and was prone to coming detached from the bed. If this is happening to you (or if you want to experiment with lowering your temperatures to save a bit of energy), try going into Bambu Studio, nudging temperatures by 5 degrees in either direction, and running a quick test print (I like this one); that’s how I dialed in my settings when using unfamiliar filament.
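If you’d rather be methodical about that nudging, here’s a minimal sketch of the idea in Python. It’s my own convenience script, not anything Bambu ships, and the starting values are assumptions standing in for whatever generic PETG preset your slicer shows; check Bambu Studio for the real numbers before trying any of them.

```python
# A minimal sketch of the "nudge by 5 degrees" routine described above.
# The preset values below are assumptions; read the real generic-PETG preset
# out of Bambu Studio and substitute it before running any test prints.

def temperature_candidates(nozzle_c=255, bed_c=70, step=5, nudges=2):
    """Yield (nozzle, bed) pairs to try, working outward from the preset."""
    for offset in range(-nudges * step, nudges * step + 1, step):
        yield nozzle_c + offset, bed_c

if __name__ == "__main__":
    for nozzle, bed in temperature_candidates():
        print(f"Test print at nozzle {nozzle} °C / bed {bed} °C")
```

Running each candidate against a small calibration model rather than a full part keeps the experiment cheap; once one setting prints cleanly, save it as a custom filament profile so you don’t have to rediscover it later.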

This homebrewed enclosure for the AMS Lite multi-color filament switcher (and the top mount that sticks it on the top of the printer) has been my biggest and most complex print to date. A 0.8 mm nozzle and some settings changes are recommended to maximize the transparency of transparent PETG filament. Credit: Andrew Cunningham

Finally, PETG is especially prone to absorbing ambient moisture. When that moisture hits a 260°C nozzle, it quickly evaporates, and that can interfere with the evenness of the flow rate and the cleanliness of your print (this usually manifests as “stringing”: fine, almost cotton-y strands that hang off your finished prints).

You can buy dedicated filament drying boxes or stick spools in an oven at a low temperature for a few hours if this really bothers you or if it’s significant enough to affect the quality of your prints. One of the reasons to have an enclosure is to create a humidity-controlled environment to keep your spools from absorbing too much moisture in the first place.

The temperature and nozzle-size adjustments made me happy enough with my PETG prints that I was fine to pick off the little fuzzy stringers that were on my prints afterward, but your mileage may vary.

These are just a few examples of the kinds of things you learn if you jump in with both feet and experiment with different prints and plastics in rapid succession. Hopefully, this advice helps you avoid my specific mistakes. But the main takeaway is that experience is the best teacher.

The wide world of plastics

I used filament to print a modular filament shelf for my filaments. Credit: Andrew Cunningham

My wife had gotten me two spools of filament, a white and a black spool of Bambu’s own PLA Basic. What does all of that mean?

No matter what you’re buying, filament is most commonly sold in 1-kilogram spools (the weight of the plastic, not the plastic and the spool together). Slicing each model will give you an estimate of how much filament, in grams, you’ll need to print it.

There are quite a few different types of plastics out there, on Bambu’s site and in other stores. But here are the big ones I found out about almost immediately:

Polylactic acid, or PLA

By far the most commonly used plastic, PLA is inexpensive, available in a huge rainbow of colors and textures, and has a relatively low melting point, making it an easy material for most 3D printers to work with. It’s made of renewable material rather than petroleum, which makes it marginally more environmentally friendly than some other kinds of plastic. And it’s easy to “finish” PLA-printed parts if you’re trying to make props, toys, or other objects that you don’t want to have that 3D printed look about them, whether you’re sanding those parts or using a chemical to smooth the finish.

The downside is that it’s not particularly resilient—sitting in a hot car or in direct sunlight for very long is enough to melt or warp it, which makes it a bad choice for anything that needs to survive outdoors or anything load-bearing. Its environmental bona fides are also a bit oversold—it is biodegradable, but it doesn’t break down quickly outside of specialized composting facilities. If you throw it in the trash and it goes to a landfill, it will still take its time returning to nature.

You’ll find a ton of different kinds of PLA out there. Some have additives that give them a matte or silky texture. Some have little particles of wood or metal or even coffee or spent beer grains embedded in them, meant to endow 3D printed objects with the look, feel, or smell of those materials.

Some PLA just has… some other kind of unspecified additive in it. You’ll see “PLA+” all over the place, but as far as I can tell, there is no industry-wide agreed-upon standard for what the plus is supposed to mean. Manufacturers sometimes claim it’s stronger than regular PLA; other terms like “PLA Pro” and “PLA Max” are similarly non-standardized and vague.

Polyethylene terephthalate glycol, or PETG

PET is a common household plastic, and you’ll find it in everything from clothing fibers to soda bottles. PETG is the same material modified with glycol (that’s the “G”), which lowers the melting point and makes it less prone to crystallizing and warping. The modification also makes it more transparent, though trying to print anything truly “transparent” with an extrusion printer is difficult.

PETG has a higher melting point than PLA, but it’s still lower than other kinds of plastics. This makes PETG a good middle ground for some types of printing. It’s better than PLA for functional load-bearing parts and outdoor use because it’s stronger and able to bend a bit without warping, but it’s still malleable enough to print well on all kinds of home 3D printers.

PETG can still be fussier to work with than PLA. I more frequently had issues with the edges of my PETG prints coming unstuck from the bed of the printer before the print was done.

PETG filament is also especially susceptible to absorbing moisture from the air, which can make extrusion messier. My PETG prints have usually had lots of little wispy strings of plastic hanging off them by the end—not enough to affect the strength or utility of the thing I’ve printed but enough that I needed to pull the strings off to clean up the print once it was done. Drying the filament properly could help with that if I ever need the prints to be cleaner in the first place.

It’s also worth noting that PETG is the strongest kind of filament that an open-bed printer like the A1 can handle reliably. You can succeed with other plastics, but Reddit anecdotes, my own personal experience, and Bambu’s filament guide all point to a higher level of difficulty.

Acrylonitrile butadiene styrene, or ABS

“Going to look at the filament wall at Micro Center” is a legit father-son activity at this point. Credit: Andrew Cunningham

You probably have a lot of ABS plastic in your life. Game consoles and controllers, the plastic keys on most keyboards, Lego bricks, appliances, plastic board game pieces—it’s mostly ABS.

Thin 3D-printed layers of ABS stuck together aren’t as strong as commercially manufactured injection-molded ABS, but the material is still more heat-resistant and durable than 3D-printed PLA or PETG.

There are two big issues specific to ABS, which are also outlined in Bambu’s FAQ for the A1. The first is that it doesn’t print well on an open-bed printer, especially for larger prints. The corners are more prone to pulling up off the print bed, and as with a house, any problems in your foundation will reverberate throughout the rest of your print.

The second is fumes. All 3D-printed plastics emit fumes when they’ve been melted, and a good rule of thumb is to at least print things in a room where you can open the window (and not in a room where anyone or anything sleeps). But ABS and ASA plastics in particular can emit fumes that cause eye and respiratory irritation, headaches, and nausea if you’re printing them indoors with insufficient ventilation.

As for what quantity of printing counts as “dangerous,” there’s no real consensus, and the studies that have been done mostly land in inconclusive “further study is needed” territory. At a bare minimum, it’s considered a best practice to at least be able to open a window if you’re printing with ABS or to use a closed-bed printer in an unoccupied part of your home, like a garage, shed, or workshop space (if you have one).

Acrylonitrile styrene acrylate, or ASA

Described to me by Ars colleague Lee Hutchinson as “ABS but with more UV resistance,” this material is even better suited for outdoor applications than the other plastics on this list.

But also like ABS, you’ll have a hard time getting good results with an open-bed printer, and the fumes are more harmful to inhale. You’ll want a closed-bed printer and decent ventilation for good results.

Thermoplastic polyurethane, or TPU

TPU is best known for its flexibility relative to the other kinds of plastics on this list. It doesn’t get as brittle when it’s cold, it’s more impact-resistant, and it can print reasonably well on an open-bed printer.

One downside of TPU is that you need to print slowly to get reliably good results—a pain, when even relatively simple fidget toys can take an hour or two to print at full speed using PLA. Longer prints mean more power use and more opportunities for your print to peel off the print bed. A roll of TPU filament will also usually run you a few dollars more than a roll of PLA, PETG, or ABS.

First- or third-party filament?

The first-party Bambu spools have RFID chips in them that Bambu printers can scan to automatically identify the type and color of the filament and to keep track of how much of it you have remaining. Bambu also has temperature and speed presets for all of its first-party filaments built into the printer and the Bambu Studio software. There are presets for a few other filament brands in the printer, but I usually ended up using the “generic” presets, which may need some tuning to ensure the best possible adhesion to the print bed and extrusion from the nozzle.

I mostly ended up using Inland-branded filament I picked up from my local Micro Center—both because it’s cheaper than Bambu’s first-party stuff and because it’s faster and easier for me to get to. If you don’t have a brick-and-mortar hobby store with filaments in stock, the A1 and other printers sometimes come with some sample filament swatches so you can see the texture and color of the stuff you’re buying online.

What’s next?

Part of the fun of 3D printing is that it can be used for a wide array of projects—organizing your desk or your kitchen, printing out little fidget-toy favors for your kid’s birthday party, printing out replacement parts for little plastic bits and bobs that have broken, or just printing out decorations and other objects you’ll enjoy looking at.

Once you’re armed with all of the basic information in this guide, the next step is really up to you. What would you find fun or useful? What do you need? How can 3D printing help you with other household tasks or hobbies that you might be trying to break into? For the last part of this series, the Ars staffers with 3D printers at home will share some of their favorite prints—hearing people talk about what they’d done themselves really opened my eyes to the possibilities and the utility of these devices, and more personal testimonials may help those of you who are on the fence to climb down off of it.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

My 3D printing journey, part 2: Printing upgrades and making mistakes Read More »

where-hyperscale-hardware-goes-to-retire:-ars-visits-a-very-big-itad-site

Where hyperscale hardware goes to retire: Ars visits a very big ITAD site

Inside the laptop/desktop examination bay at SK TES’s Fredericksburg, Va. site.

Credit: SK tes

Inside the laptop/desktop examination bay at SK TES’s Fredericksburg, Va. site. Credit: SK tes

The details of each unit—CPU, memory, HDD size—are taken down and added to the asset tag, and the device is sent on to be physically examined. This step is important because “many a concealed drive finds its way into this line,” Kent Green, manager of this site, told me. Inside the machines coming from big firms, there are sometimes little USB, SD, SATA, or M.2 drives hiding out. Some were make-do solutions installed by IT and not documented, and others were put there by employees tired of waiting for more storage. “Some managers have been pretty surprised when they learn what we found,” Green said.

With everything wiped and with some sense of what each device contains, it gets a rating. It’s a three-character system, like “A-3-6,” based on function, cosmetic condition, and component value. Based on needs, trends, and other data, each device is then routed to wholesale, retail, component harvesting, or scrap.

Full-body laptop skins

Wiping down and prepping a laptop, potentially for a full-cover adhesive skin.

Credit: SK TES

Wiping down and prepping a laptop, potentially for a full-cover adhesive skin. Credit: SK TES

If a device has retail value, it heads into a section of this giant facility where workers do further checks. Automated software plays sounds on the speakers, checks that every keyboard key is sending signals, and checks that laptop batteries are at 80 percent capacity or better. At the end of the line is my favorite discovery: full-body laptop skins.

Some laptops—certain Lenovo, Dell, and HP models—are so ubiquitous in corporate fleets that it’s worth buying an adhesive laminating sticker in their exact shape. They’re an uncanny match for the matte black, silver, and slightly less silver finishes of the laptops, covering up any blemishes and scratches. Watching one of the workers apply this made me jealous of their ability to essentially reset a laptop’s condition (so one could apply whole new layers of swag stickers, of course). Once rated, tested, and stickered, laptops go into a clever “cradle” box, get the UN 3481 “battery inside” sticker, and can be sold through retail.

Where hyperscale hardware goes to retire: Ars visits a very big ITAD site Read More »

200-mph-for-500-miles:-how-indycar-drivers-prepare-for-the-big-race

200 mph for 500 miles: How IndyCar drivers prepare for the big race


Andretti Global’s Kyle Kirkwood and Marcus Ericsson talk to us about the Indy 500.

INDIANAPOLIS, INDIANA - MAY 15: #28, Marcus Ericsson, Andretti Global Honda prior to the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 15, 2025 in Indianapolis, Indiana.

#28, Marcus Ericsson, Andretti Global Honda prior to the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 15, 2025 in Indianapolis, Indiana. Credit: Brandon Badraoui/Lumen via Getty Images

#28, Marcus Ericsson, Andretti Global Honda prior to the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 15, 2025 in Indianapolis, Indiana. Credit: Brandon Badraoui/Lumen via Getty Images

This coming weekend is a special one for most motorsport fans. There are Formula 1 races in Monaco and NASCAR races in Charlotte. And arguably towering over them both is the Indianapolis 500, being held this year for the 109th time. America’s oldest race is also one of its toughest: The track may have just four turns, but the cars negotiate them going three times faster than you drive on the highway, inches from the wall. For hours. At least at Le Mans, you have more than one driver per car.

This year’s race promises to be an exciting one. The track is sold out for the first time since the centenary race in 2016. A rookie driver and a team new to the series took pole position. Two very fast cars are starting at the back thanks to another conflict-of-interest scandal involving Team Penske, the second in two years for a team whose owner also owns the track and the series. And the cars are trickier to drive than they have been for many years, thanks to a new supercapacitor-based hybrid system that has added more than 100 lbs to the rear of the car, shifting the weight distribution further back.

Ahead of Sunday’s race, I spoke with a couple of IndyCar drivers and some engineers to get a better sense of how they prepare and what to expect.

INDIANAPOLIS, INDIANA - MAY 17: #28, Marcus Ericsson, Andretti Global Honda during qualifying for the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 17, 2025 in Indianapolis, Indiana.

This year, the cars are harder to drive thanks to a hybrid system that has altered the weight balance. Credit: Geoff Miller/Lumen via Getty Images

Concentrate

It all comes “from months of preparation,” said Marcus Ericsson, winner of the race in 2022 and one of Andretti Global’s drivers in this year’s event. “When we get here to the month of May, it’s just such a busy month. So you’ve got to be prepared mentally—and basically before you get to the month of May because if you start doing it now, it’s too late,” he told me.

The drivers spend all month at the track, with a race on the road course earlier this month. Then there’s testing on the historic oval, followed by qualifying last weekend and the race this coming Sunday. “So all those hours you put in in the winter, really, and leading up here to the month of May—it’s what pays off now,” Ericsson said. That work involved multiple sessions of physical training each week, and Ericsson says he also does weekly mental coaching sessions.

“This is a mental challenge,” Ericsson told me. “Doing those speeds with our cars, you can’t really afford to have a split second of loss of concentration because then you might be in the wall and your day is over and you might hurt yourself.”

When drivers get tired or their focus slips, that’s when mistakes happen, and a mistake at Indy often has consequences.

A racing driver stands in front of four mechanics, who are facing away from him. The mechanics have QR codes on the back of their shirts.

Ericsson is sponsored by the antihistamine Allegra and its anti-drowsy-driving campaign. Fans can scan the QR codes on the back of his pit crew’s shirts for a “gamified experience.” Credit: Andretti Global/Allegra

Simulate

Being mentally and physically prepared is part of it. It also helps if you can roll the race car off the transporter and onto the track with a setup that works rather than spending the month chasing the right combination of dampers, springs, wing angles, and so on. And these days, that means a lot of simulation testing.

The multi-axis driver-in-the-loop simulators might look like just a very expensive video game, but these multimillion-dollar setups aren’t about having fun. “Everything that you are feeling or changing in the sim is ultimately going to reflect directly to what happens on track,” explained Kyle Kirkwood, teammate to Ericsson at Andretti Global and one of only two drivers to have won an IndyCar race in 2025.

Andretti, like the other teams using Honda engines, uses the new HRC simulator in Indiana. “And yes, it’s a very expensive asset, but it’s also likely cheaper than going to the track and doing the real thing,” Kirkwood said. “And it’s a much more controlled environment than being at the track because temperature changes or track conditions or wind direction play a huge factor with our car.”

A high degree of correlation between the simulation and the track is what makes it a powerful tool. “We run through a sim, and you only get so many opportunities, especially at a place like Indianapolis, where you go from one day to the next and the temperature swings, or the wind conditions, or whatever might change drastically,” Kirkwood said. “You have to be able to sim it and be confident with the sim that you’re running to go out there and have a similar balance or a similar performance.”

Kyle Kirkwood's indycar drives past the IMS logo on one of the track walls.

Andretti Global’s Kyle Kirkwood is the only driver other than Álex Palou to have won an IndyCar race in 2025. Credit: Alison Arena/Andretti Global

“So you have to make adjustments, whether it’s a spring rate, whether it’s keel ballast or just overall, maybe center of pressure, something like that,” Kirkwood said. “You have to be able to adjust to it. And that’s where the sim tool comes in play. You move the weight balance back, and you’re like, OK, now what happens with the balance? How do I tune that back in? And you run that all through the sim, and for us, it’s been mirror-perfect going to the track when we do that.”

More impressively, a lot of that work was done months ago. “I would say most of it, we got through it before the start of this season,” Kirkwood said. “Once we get into the season, we only get a select few days because every Honda team has to run on the same simulator. Of course, it’s different with the engineering sim; those are running nonstop.”

Sims are for engineers, too

An IndyCar team is more than just its drivers—”the spacer between the seat and the wheel,” according to Kirkwood—and the engineers rely heavily on sim work now that real-world testing is so highly restricted. And they use a lot more than just driver-in-the-loop (DiL).

“Digital simulation probably goes to a higher level,” explained Scott Graves, engineering manager at Andretti Global. “A lot of the models we develop work in the DiL as well as our other digital tools. We try to develop universal models, whether that’s tire models, engine models, or transmission models.”

“Once you get into a fully digital model, then I think your optimization process starts kicking in,” Graves said. “You’re not just changing the setting and running a pretend lap with a driver holding a wheel. You’re able to run through numerous settings and optimization routines and step through a massive number of permutations on a car. Obviously, you’re looking for better lap times, but you’re also looking for fuel efficiency and a lot of other parameters that go into crossing the finish line first.”

A screenshot of a finite element analysis tool

Parts like this anti-roll bar are simulated thousands of times. Credit: Siemens/Andretti Global

As an example, Graves points to the dampers. “The shock absorber is a perfect example where that’s a highly sophisticated piece of equipment on the car and it’s very open for team development. So our cars have fully customized designs there that are optimized for how we run the car, and they may not be good on another team’s car because we’re so honed in on what we’re doing with the car,” he said.

“The more accurate a digital twin is, the more we are able to use that digital twin to predict the performance of the car,” said David Taylor, VP of industry strategy at Siemens DISW, which has partnered with Andretti for some years now. “It will never be as complete and accurate as we want it to be. So it’s a continuous pursuit, and we keep adding technology to our portfolio and acquiring companies to try to provide more and more tools to people like Scott so they can more accurately predict that performance.”

What to expect on Sunday?

Kirkwood was bullish about his chances despite starting relatively deep in the field, qualifying in 23rd place. “We’ve been phenomenal in race trim and qualifying,” he said. “We had a bit of a head-scratcher if I’m being honest—I thought we would definitely be a top-six contender, if not a front row contender, and it just didn’t pan out that way on Saturday qualifying.”

“But we rolled back out on Monday—the car was phenomenal. Once again, we feel very, very racy in traffic, which is a completely different animal than running qualifying,” Kirkwood said. “So I’m happy with it. I think our chances are good. We’re starting deep in the field, but so are a lot of other drivers. So you can expect a handful of us to move forward.”

The more nervous hybrid IndyCars with their more rearward weight bias will probably result in more cautions, according to Ericsson, who will line up sixth for the start of the race on Sunday.

“Whereas in previous years you could have a bit of a moment and it would scare you, you usually get away with it,” he said. “This year, if you have a moment, it usually ends up with you being in the fence. I think that’s why we’ve seen so many crashes this year—because a pendulum effect from the rear of the car that when you start losing it, this is very, very difficult or almost impossible to catch.”

“I think it’s going to mean that the race is going to be quite a few incidents with people making mistakes,” Ericsson said. “In practice, if your car is not behaving well, you bring it to the pit lane, right? You can do adjustments, whereas in the race, you have to just tough it out until the next pit stop and then make some small adjustments. So if you have a bad car at the start of a race, it’s going to be a tough one. So I think it’s going to be a very dramatic and entertaining race.”

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

200 mph for 500 miles: How IndyCar drivers prepare for the big race Read More »

what-i-learned-from-my-first-few-months-with-a-bambu-lab-a1-3d-printer,-part-1

What I learned from my first few months with a Bambu Lab A1 3D printer, part 1


One neophyte’s first steps into the wide world of 3D printing.

The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham

The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham

For a couple of years now, I’ve been trying to find an excuse to buy a decent 3D printer.

Friends and fellow Ars staffers who had them would gush about them at every opportunity, talking about how useful they can be and how much can be printed once you get used to the idea of being able to create real, tangible objects with a little time and a few bucks’ worth of plastic filament.

But I could never quite imagine myself using one consistently enough to buy one. Then, this past Christmas, my wife forced the issue by getting me a Bambu Lab A1 as a present.

Since then, I’ve been tinkering with the thing nearly daily, learning more about what I’ve gotten myself into and continuing to find fun and useful things to print. I’ve gathered a bunch of thoughts about my learning process here, not because I think I’m breaking new ground but to serve as a blueprint for anyone who has been on the fence about Getting Into 3D Printing. “Hyperfixating on new hobbies” is one of my go-to coping mechanisms during times of stress and anxiety, and 3D printing has turned out to be the perfect combination of fun, practical, and time-consuming.

Getting to know my printer

My wife settled on the Bambu A1 because it’s a larger version of the A1 Mini, Wirecutter’s main 3D printer pick at the time (she also noted it was “hella on sale”). Other reviews she read noted that it’s beginner-friendly, easy to use, and fun to tinker with, and it has a pretty active community for answering questions, all assessments I agree with so far.

Note that this research was done some months before Bambu earned bad headlines because of firmware updates that some users believe will lead to a more locked-down ecosystem. This is a controversy I understand—3D printers are still primarily the realm of DIYers and tinkerers, people who are especially sensitive to the closing of open ecosystems. But as a beginner, I’m already leaning mostly on the first-party tools and built-in functionality to get everything going, so I’m not really experiencing the sense of having “lost” features I was relying on, and any concerns I did have are mostly addressed by Bambu’s update about its update.

I hadn’t really updated my preconceived notions of what home 3D printing was since its primordial days, something Ars has been around long enough to have covered in some depth. I was wary of getting into yet another hobby where, like building your own gaming PC, fiddling with and maintaining the equipment is part of the hobby. But Bambu’s printers (and those like them) are capable of turning out fairly high-quality prints with minimal fuss, and nothing will draw you into the hobby faster than a few successful prints.

Basic terminology

Extrusion-based 3D printers (also sometimes called “FDM,” for “fused deposition modeling”) work by depositing multiple thin layers of melted plastic filament on a heated bed. Credit: Andrew Cunningham

First things first: The A1 is what’s called an “extrusion” printer, meaning that it functions by melting a long, slim thread of plastic (filament) and then depositing this plastic onto a build plate seated on top of a heated bed in tens, hundreds, or even thousands of thin layers. In the manufacturing world, this is also called “fused deposition modeling,” or FDM. This layer-based extrusion gives 3D-printed objects their distinct ridged look and feel and is also why a 3D printed piece of plastic is less detailed-looking and weaker than an injection-molded piece of plastic like a Lego brick.

The other readily available home 3D printing technology takes liquid resin and uses UV light to harden it into a plastic structure, using a process called “stereolithography” (SLA). You can get inexpensive resin printers in the same price range as the best cheap extrusion printers, and the SLA process can create much more detailed, smooth-looking, and watertight 3D prints (it’s popular for making figurines for tabletop games). Some downsides are that the print beds in these printers are smaller, resin is a bit fussier than filament, and multi-color printing isn’t possible.

There are two main types of home extrusion printers. The Bambu A1 is a Cartesian printer, or in more evocative and colloquial terms, a “bed slinger.” In these, the head of the printer can move up and down on one or two rails and from side to side on another rail. But the print bed itself has to move forward and backward to “move” the print head on the Y axis.

More expensive home 3D printers, including higher-end Bambu models in the P- and X-series, are “CoreXY” printers, which include a third rail or set of rails (and more Z-axis rails) that allow the print head to travel in all three directions.

The A1 is also an “open-bed” printer, which means that it ships without an enclosure. Closed-bed printers are more expensive, but they can maintain a more consistent temperature inside and help contain the fumes from the melted plastic. They can also reduce the amount of noise coming from your printer.

Together, the downsides of a bed-slinger (introducing more wobble for tall prints, more opportunities for parts of your print to come loose from the plate) and an open-bed printer (worse temperature, fume, and dust control) mainly just mean that the A1 isn’t well-suited for printing certain types of plastic and has more potential points of failure for large or delicate prints. My experience with the A1 has been mostly positive now that I know about those limitations, but the printer you buy could easily change based on what kinds of things you want to print with it.

Setting up

Overall, the setup process was reasonably simple, at least for someone who has been building PCs and repairing small electronics for years now. It’s not quite the same as the “take it out of the box, remove all the plastic film, and plug it in” process of setting up a 2D printer, but the directions in the start guide are well-illustrated and clearly written; if you can put together prefab IKEA furniture, that’s roughly the level of complexity we’re talking about here. The fact that delicate electronics are involved might still make it more intimidating for the non-technical, but figuring out what goes where is fairly simple.

The only mistake I made while setting the printer up involved the surface I initially tried to put it on. I used a spare end table, but as I discovered during the printer’s calibration process, the herky-jerky movement of the bed and print head was way too much for a little table to handle. “Stable enough to put a lamp on” is not the same as “stable enough to put a constantly wobbling contraption” on—obvious in retrospect, but my being new to this is why this article exists.

After some office rearrangement, I was able to move the printer to my sturdy L-desk full of cables and other doodads to serve as ballast. This surface was more than sturdy enough to let the printer complete its calibration process—and sturdy enough not to transfer the printer’s every motion to our kid’s room below, a boon for when I’m trying to print something after he has gone to bed.

The first-party Bambu apps for sending files to the printer are Bambu Handy (for iOS/Android, with no native iPad version) and Bambu Studio (for Windows, macOS, and Linux). Handy works OK for sending ready-made models from MakerWorld (a mostly community-driven but Bambu-developed repository for 3D-printable files) and for monitoring prints once they’ve started. But I’ll mostly be relaying my experience with Bambu Studio, a much more fully featured app. Neither app requires sign-in, at least not yet, but the path of least resistance is to sign into your printer and apps with the same account to enable easy communication and syncing.

Bambu Studio: A primer

Bambu Studio is what’s known in the hobby as a “slicer,” software that takes existing 3D models output by common CAD programs (Tinkercad, FreeCAD, SolidWorks, Autodesk Fusion, others) and converts them into a set of specific movement instructions that the printer can follow. Bambu Studio allows you to do some basic modification of existing models—cloning parts, resizing them, adding supports for overhanging bits that would otherwise droop down, and a few other functions—but it’s primarily there for opening files, choosing a few settings, and sending them off to the printer to become tangible objects.

Bambu Studio isn’t the most approachable application, but if you’ve made it this far, it shouldn’t be totally beyond your comprehension. For first-time setup, you’ll choose your model of printer (all Bambu models and a healthy selection of third-party printers are officially supported), leave the filament settings as they are, and sign in if you want to use Bambu’s cloud services. These sync printer settings and keep track of the models you save and download from MakerWorld, but a non-cloud LAN mode is available for the Bambu skeptics and privacy-conscious.

For any newbie, pretty much all you need to do is connect your printer, open a .3MF or .STL file you’ve downloaded from MakerWorld or elsewhere, select your filament from the drop-down menu, click “slice plate,” and then click “print.” Things like the default 0.4 mm nozzle size and Bambu’s included Textured PEI Build Plate are generally already factored in, though you may need to double-check these selections when you open a file for the first time.

When you slice your build plate for the first time, the app will spit a pile of numbers back at you. There are two important ones for 3D printing neophytes to track. One is the “total filament” figure, which tells you how many grams of filament the printer will use to make your model (filament typically comes in 1 kg spools, and the printer generally won’t track usage for you, so if you want to avoid running out in the middle of the job, you may want to keep track of what you’re using). The second is the “total time” figure, which tells you how long the entire print will take from the first calibration steps to the end of the job.
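Because the printer won’t do that bookkeeping for you, I jot down each job’s estimate as I go. Something as simple as the following sketch covers it; it’s a hypothetical helper of my own, not part of Bambu Studio, and the job numbers in it are made up.

```python
# A minimal sketch of spool bookkeeping: subtract each sliced job's
# "total filament" estimate (in grams) from what's left on the spool.
# The job numbers below are made-up examples, not real slicer output.

SPOOL_GRAMS = 1000  # a standard 1 kg spool (plastic only, not the spool itself)

def grams_remaining(spool_grams, job_estimates):
    """Return the rough grams of filament left after the listed jobs."""
    return spool_grams - sum(job_estimates)

if __name__ == "__main__":
    jobs = [62.4, 18.7, 240.0]  # grams reported by the slicer for past prints
    print(f"Roughly {grams_remaining(SPOOL_GRAMS, jobs):.0f} g left on the spool")
```

A sticky note on the spool works just as well; the point is only that the slicer’s per-job estimate is the number worth writing down.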

Selecting your filament and/or temperature presets. If you have the Automatic Material System (AMS), this is also where you’ll manage multicolor printing. Credit: Andrew Cunningham

When selecting filament, people who stick to Bambu’s first-party spools will have the easiest time, since optimal settings are already programmed into the app. But I’ve had almost zero trouble with the “generic” presets and the spools of generic Inland-branded filament I’ve bought from our local Micro Center, at least when sticking to PLA (polylactic acid, the most common and generally the easiest-to-print of the different kinds of filament you can buy). But we’ll dive deeper into plastics in part 2 of this series.

I won’t pretend I’m skilled enough to do a deep dive on every single setting that Bambu Studio gives you access to, but here are a few of the odds and ends I’ve found most useful:

  • The “clone” function, accessed by right-clicking an object and clicking “clone.” Useful if you’d like to fit several copies of an object on the build plate at once, especially if you’re using a filament with a color gradient and you’d like to make the gradient effect more pronounced by spreading it out over a bunch of prints.
  • The “arrange all objects” function, the fourth button from the left under the “prepare” tab. Did you just clone a bunch of objects? Did you delete an individual object from a model because you didn’t need to print that part? Bambu Studio will arrange everything on your build plate to optimize the use of space.
  • Layer height, located in the sidebar directly beneath “Process” (which is directly underneath the area where you select your filament). For many functional parts, the standard 0.2 mm layer height is fine. Thinner layer heights add to the printing time but can preserve more detail on prints that have a lot of it and slightly reduce the visible layer lines that give 3D-printed objects their distinct look (for better or worse). Thicker layer heights do the opposite, slightly reducing the amount of time a model takes to print while preserving less detail.
  • Infill percentage and wall loops, located in the Strength tab beneath the “Process” sidebar item. For most everyday prints, you don’t need to mess with these settings much. The infill percentage determines how much of your print’s interior is plastic versus empty space (15 percent is usually a happy medium between maintaining rigidity and overusing plastic). The number of wall loops determines how many layers the printer uses for the outside surface of the print, with more walls using more plastic but also adding a bit of extra strength and rigidity to functional prints that need it (think hooks, hangers, shelves and brackets, and other things that will be asked to bear some weight). A rough sketch of both of these trade-offs follows this list.
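
Here’s a rough, back-of-the-envelope Python sketch of those trade-offs: layer height sets how many layers a model needs (and, roughly, how long it takes), while infill percentage and wall thickness set how much of the part is actually plastic. The simplified volume model and all of the numbers are illustrative, not how Bambu Studio actually estimates time or filament use.

    # Illustrative only: simplified estimates of the layer-height and infill
    # trade-offs, not Bambu Studio's real calculations.

    PLA_DENSITY_G_PER_CM3 = 1.24  # typical density of PLA

    MODEL_HEIGHT_MM = 40.0
    for layer_height in (0.28, 0.20, 0.12):
        layers = MODEL_HEIGHT_MM / layer_height
        print(f"{layer_height:.2f} mm layers -> about {layers:.0f} layers (more layers = longer print)")

    def estimate_grams(total_volume_cm3, shell_volume_cm3, infill_pct):
        """Walls, top, and bottom print solid; the interior prints at infill_pct density."""
        interior = total_volume_cm3 - shell_volume_cm3
        return PLA_DENSITY_G_PER_CM3 * (shell_volume_cm3 + interior * infill_pct / 100)

    # Hypothetical 100 cm^3 part with 20 cm^3 of solid walls/top/bottom:
    for infill in (15, 30, 50):
        print(f"{infill}% infill -> roughly {estimate_grams(100, 20, infill):.0f} g of PLA")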

My first prints

A humble start: My very first print was a wall bracket for the remote for my office’s ceiling fan. Credit: Andrew Cunningham

When given the opportunity to use a 3D printer, my mind went first to aggressively practical stuff—prints for organizing the odds and ends that eternally float around my office or desk.

When we moved into our current house, only one of the bedrooms had a ceiling fan installed. I put up remote-controlled ceiling fans in all the other bedrooms myself. And all those fans, except one, came with a wall-mounted caddy to hold the remote control. The first thing I decided to print was a wall-mounted holder for that remote control.

MakerWorld is just one of several resources for ready-made 3D-printable files, but the ease with which I found a Hampton Bay Ceiling Fan Remote Wall Mount is pretty representative of my experience so far. At this point in the life cycle of home 3D printing, if you can think about it and it’s not a terrible idea, you can usually find someone out there who has made something close to what you’re looking for.

I loaded up my black roll of PLA plastic—generally the cheapest, easiest-to-buy, easiest-to-work-with kind of 3D printer filament, though not always the best for prints that need more structural integrity—into the basic roll-holder that comes with the A1, downloaded that 3MF file, opened it in Bambu Studio, sliced the file, and hit print. It felt like there should have been extra steps in there somewhere. But that’s all it took to kick the printer into action.

After a few minutes of warmup—by default, the A1 has a thorough pre-print setup process where it checks the levelness of the bed and tests the flow rate of your filament for a few minutes before it begins printing anything—the nozzle started laying plastic down on my build plate, and inside of an hour or so, I had my first 3D-printed object.

Print No. 2 was another wall bracket, this time for my gaming PC’s gamepad and headset. Credit: Andrew Cunningham

It wears off a bit after you successfully execute a print, but I still haven’t quite lost the feeling of magic that comes from printing out a fully 3D object that comes off the plate and then just exists in space along with me and all the store-bought objects in my office.

The remote holder was, as I’d learn, a fairly simple print made under near-ideal conditions. But it was an easy success to start off with, and that success can help embolden you and draw you in, inviting more printing and more experimentation. And the more you experiment, the more you inevitably learn.

This time, I talked about basic terminology, getting started with Bambu Studio, and my first prints. Next time, I’ll talk about some of the pitfalls I ran into after my initial successes, what I’ve learned about the different kinds of plastics most commonly used by home 3D printers, what I’ve learned about fine-tuning settings to get good results, and a whole bunch of 3D-printable upgrades and mods available for the A1.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
