Author name: Beth Washington

Trump can’t fire us, FTC Democrats tell court after being ejected from office

Two Democratic members of the Federal Trade Commission who were fired by President Trump sued him today, saying their removals are “in direct violation of a century of federal law and Supreme Court precedent.”

“Plaintiffs bring this action to vindicate their right to serve the remainder of their respective terms, to defend the integrity of the Commission, and to continue their work for the American people,” said the lawsuit filed by Rebecca Kelly Slaughter and Alvaro Bedoya in US District Court for the District of Columbia.

Trump last week sent Slaughter and Bedoya notices that said, “I am writing to inform you that you have been removed from the Federal Trade Commission, effective immediately.” They were then cut off from their FTC email addresses, asked to return electronic devices, and denied access to their offices.

There are legal restrictions on the president’s authority to remove FTC commissioners. US law says any FTC commissioner “may be removed by the President for inefficiency, neglect of duty, or malfeasance in office.”

The Supreme Court unanimously held in a 1935 case, Humphrey’s Executor v. United States, that “Congress intended to restrict the power of removal to one or more of those causes.” The case involved President Franklin Roosevelt’s firing of Commissioner William Humphrey.

Trump’s Department of Justice has argued the ruling was incorrect, but it is still in effect. “Congress has continually relied on Humphrey’s Executor, and the Supreme Court has repeatedly refused to upset this landmark precedent,” the Slaughter/Bedoya lawsuit said. “As Humphrey’s Executor recognized, providing some protection from removal at the President’s whim is essential to ensuring that agency officials can exercise their own judgment.”

The lawsuit continued:

In short, it is bedrock, binding precedent that a President cannot remove an FTC Commissioner without cause. And yet that is precisely what has happened here: President Trump has purported to terminate Plaintiffs as FTC Commissioners, not because they were inefficient, neglectful of their duties, or engaged in malfeasance, but simply because their “continued service on the FTC is” supposedly “inconsistent with [his] Administration’s priorities.”

“Indefensible under governing law”

In addition to Trump, the lawsuit’s defendants include FTC Chairman Andrew Ferguson, FTC Commissioner Melissa Holyoak, and FTC Executive Director David Robbins. The Democratic commissioners asked the court to “declare the President’s attempted removals unlawful and ineffective,” and “permanently enjoin the FTC Chairman, Commissioner Holyoak, and the FTC Executive Director from taking any action that would prevent Plaintiffs from fulfilling their duties as Commissioners and serving out the remainder of their terms.”

Pillars of Eternity is getting turn-based combat, all but demanding replays

More than just rolling for initiative

Obsidian added a turn-based mode to Pillars of Eternity II: Deadfire in patch 4.1, roughly eight months after the game’s initial release. Designer Josh Sawyer, who worked on Baldur’s Gate II and directed both PoE games, said in a 2023 interview with Touch Arcade that the real-time systems in the PoE games were largely a concession to the old-school CRPG fans that crowdfunded both games.

Turn-based was Sawyer’s stated preference, and he thinks Baldur’s Gate 3 largely put an end to the debate in modern times:

I just think it’s easier to design more intricate combats. I like games with a lot of stats, obviously. (He laughs). But the problem with real time with pause is that it’s honestly very difficult for people to actually parse all of that information, and one of the things I’ve heard a lot from people who’ve played Deadfire in turn based, is that there were things about the game like the affliction and inspiration system that they didn’t really understand very clearly until they played it in turn based.

But both Pillars games were designed with real-time combat in mind, such that, even with his appreciation for the turn-based addition to PoE 2, Sawyer knows “the game wasn’t designed for it,” he told Touch Arcade. That will almost certainly be true of the original PoE as well, though lessons learned from PoE 2‘s transformation could apply. Other games from that era might also lure folks like me back, though perhaps they, too, have a density of encounters and maps that just doesn’t suit turn-based play.

Beyond this notably big “patch” coming to the original PoE, the 10th anniversary patch should make it easier for Mac and Linux (through Proton) users to stay up to date on bug fixes, and for players on GOG and Epic to get Kickstarter rewards and achievements. Lots of audio and visual effects were fixed up, along with a whole heap of mechanical and combat fixes.

As NASA faces cuts, China reveals ambitious plans for planetary exploration

All of these grand Chinese plans come as NASA faces budget cuts. Although nothing is final, Ars reported earlier this year that some officials in the Trump administration want to cut science programs at the US space agency by as much as 50 percent, which would include significant reductions for planetary science. Such cuts, one planetary science official told Ars, would represent an “extinction level” event for space science and exploration in the United States.

This raises the prospect that the United States could cede the lead in space exploration to China in the coming decades.

So what will happen?

To date, the majority of China’s space science missions have been successful, bringing credibility to a government that sees space exploration as a projection of its soft power. By becoming a major actor in space and surpassing the United States in some areas, China can both please its own population and become a more attractive partner to other countries around the world.

However, if there are high-profile (and to some in China’s leadership, embarrassing) failures, would China be so willing to fund such an ambitious program? With the objectives listed above, China would be attempting some unprecedented and technically demanding missions. Some of them, certainly, will face setbacks.

China is also investing in a human lunar program, seeking to land its own astronauts on the surface of the Moon by 2030. Simultaneously funding ambitious human and robotic programs would very likely require significantly more resources than the government has invested to date. How deep are China’s pockets?

It’s probably safe to say, therefore, that some of these mission concepts and time frames are aspirational.

At the same time, the US Congress is likely to block some of the deepest cuts in planetary exploration, should they be proposed by the Trump administration. So NASA still has a meaningful future in planetary exploration. And if companies like K2 are successful in lowering the cost of satellite buses, the combination of lower-cost launch and planetary missions would allow NASA to do more with less in deep space.

The future, therefore, has yet to be won. But when it comes to deep space planetary exploration, NASA, for the first time since the 1960s, has a credible challenger.

After 50 million miles, Waymos crash a lot less than human drivers


Waymo has been in dozens of crashes. Most were not Waymo’s fault.

A driverless Waymo in Los Angeles. Credit: P_Wei via Getty

The first-ever fatal crash involving a fully driverless vehicle occurred in San Francisco on January 19. The driverless vehicle belonged to Waymo, but the crash was not Waymo’s fault.

Here’s what happened: A Waymo with no driver or passengers stopped for a red light. Another car stopped behind the Waymo. Then, according to Waymo, a human-driven SUV rear-ended the other vehicles at high speed, causing a six-car pileup that killed one person and injured five others. Someone’s dog also died in the crash.

Another major Waymo crash occurred in October in San Francisco. Once again, a driverless Waymo was stopped for a red light. According to Waymo, a vehicle traveling in the opposite direction crossed the double yellow line and crashed into an SUV that was stopped to the Waymo’s left. The force of the impact shoved the SUV into the Waymo. One person was seriously injured.

These two incidents produced worse injuries than any other Waymo crash in the last nine months. But in other respects, they were typical Waymo crashes. Most Waymo crashes involve a Waymo vehicle scrupulously following the rules while a human driver flouts them, speeding, running red lights, careening out of their lanes, and so forth.

Waymo’s service will only grow in the coming months and years. So Waymo will inevitably be involved in more crashes—including some crashes that cause serious injuries and even death.

But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.
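To put the denominator in perspective, here is the back-of-the-envelope arithmetic behind that “roughly 70 lifetimes” figure. The per-driver assumptions are mine, not Waymo’s, but any similar numbers land in the same range:

```python
# Back-of-the-envelope check on the "roughly 70 lifetimes" comparison.
WAYMO_MILES = 50_000_000  # driverless miles Waymo has reported since 2020
MILES_PER_YEAR = 14_000   # assumed annual mileage for a typical US driver
DRIVING_YEARS = 50        # assumed years behind the wheel in one lifetime

lifetime_miles = MILES_PER_YEAR * DRIVING_YEARS  # 700,000 miles per lifetime
print(WAYMO_MILES / lifetime_miles)              # ~71 lifetimes
```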

Federal regulations require Waymo to report all significant crashes, whether or not the Waymo vehicle was at fault—indeed, whether or not the Waymo is even moving at the time of the crash. I’ve spent the last few days poring over Waymo’s crash reports from the last nine months. Let’s dig in.

Last September, I analyzed Waymo crashes through June 2024. So this section will focus on crashes between July 2024 and February 2025. During that period, Waymo reported 38 crashes that were serious enough to either cause an (alleged) injury or an airbag deployment.

In my view, only one of these crashes was clearly Waymo’s fault. Waymo may have been responsible for three other crashes—there wasn’t enough information to say for certain. The remaining 34 crashes seemed to be mostly or entirely the fault of others:

  • The two serious crashes I mentioned at the start of this article are among 16 crashes where another vehicle crashed into a stationary Waymo (or caused a multi-car pileup involving a stationary Waymo). This included 10 rear-end crashes, three side-swipe crashes, and three crashes where a vehicle coming from the opposite direction crossed the center line.
  • Another eight crashes involved another car (or in one case a bicycle) rear-ending a moving Waymo.
  • A further five crashes involved another vehicle veering into a Waymo’s right of way. This included a car running a red light, a scooter running a red light, and a car running a stop sign.
  • Three crashes occurred while Waymo was dropping a passenger off. The passenger opened the door and hit a passing car or bicycle. Waymo has a “Safe Exit” program to alert passengers and prevent this kind of crash, but it’s not foolproof.

There were two incidents where it seems like no crash happened at all:

  • In one incident, Waymo says that its vehicle “slowed and moved slightly to the left within its lane, preparing to change lanes due to a stopped truck ahead.” This apparently spooked an SUV driver in the next lane, who jerked the wheel to the left and ran into the opposite curb. Waymo says its vehicle never left its lane or made contact with the SUV.
  • In another incident, a pedestrian walked in front of a stopped Waymo. The Waymo began moving after the pedestrian had passed, but then the pedestrian “turned around and approached the Waymo AV.” According to Waymo, the pedestrian “may have made contact with the driver side of the Waymo AV” and “later claimed to have a minor injury.” Waymo’s report stops just short of calling this pedestrian a liar.

So that’s a total of 34 crashes. I don’t want to make categorical statements about these crashes because in most cases, I only have Waymo’s side of the story. But it doesn’t seem like Waymo was at fault in any of them.

There was one crash where Waymo clearly seemed to be at fault: In December, a Waymo in Los Angeles ran into a plastic crate, pushing it into the path of a scooter in the next lane. The scooterist hit the crate and fell down. Waymo doesn’t know whether the person riding the scooter was injured.

I had trouble judging the final three crashes, all of which involved another vehicle making an unprotected left turn across a Waymo’s lane of travel. In two of these cases, Waymo says its vehicle slammed on the brakes but couldn’t stop in time to avoid a crash. In the third case, the other vehicle hit the Waymo from the side. Waymo’s summaries make it sound like the other car was at fault in all three cases, but I don’t feel like I have enough information to make a definite judgment.

Even if we assume all three of these crashes were Waymo’s fault, that would still mean that a large majority of the 38 serious crashes were not Waymo’s fault. And as we’ll see, Waymo vehicles are involved in many fewer serious crashes than human-driven vehicles.

Another way to evaluate the safety of Waymo vehicles is by comparing their per-mile crash rate to human drivers. Waymo has been regularly publishing data about this over the last couple of years. Its most recent release came last week, when Waymo updated its safety data hub to cover crashes through the end of 2024.

Waymo knows exactly how many times its vehicles have crashed. What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash. Waymo has tried to address this by estimating human crash rates in its two biggest markets—Phoenix and San Francisco. Waymo’s analysis focused on the 44 million miles Waymo had driven in these cities through December, ignoring its smaller operations in Los Angeles and Austin.

Using human crash data, Waymo estimated that human drivers on the same roads would get into 78 crashes serious enough to trigger an airbag. By comparison, Waymo’s driverless vehicles only got into 13 airbag crashes. That represents an 83 percent reduction in airbag crashes relative to typical human drivers.

This is slightly worse than last September, when Waymo estimated an 84 percent reduction in airbag crashes over Waymo’s first 21 million miles.

Over the same 44 million miles, Waymo estimates that human drivers would get into 190 crashes serious enough to cause an injury. Waymo, by contrast, got into only 36 injury-causing crashes across San Francisco and Phoenix. That’s an 81 percent reduction in injury-causing crashes.

This is a significant improvement over last September, when Waymo estimated its cars had 73 percent fewer injury-causing crashes over its first 21 million driverless miles.
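Those percentages follow directly from the published counts. A quick sketch reproducing Waymo’s math:

```python
def reduction(waymo_crashes: int, human_baseline: int) -> float:
    """Percent reduction relative to the estimated human-driver baseline."""
    return (1 - waymo_crashes / human_baseline) * 100

# Airbag-deployment crashes: 13 for Waymo vs. an estimated 78 for human drivers.
print(f"{reduction(13, 78):.0f}%")   # 83%
# Injury-causing crashes: 36 for Waymo vs. an estimated 190 for human drivers.
print(f"{reduction(36, 190):.0f}%")  # 81%
```

The same formula reproduces the insurance-claims reductions discussed below.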

The above analysis counts all crashes, whether or not Waymo’s technology was at fault. Things look even better for Waymo if we focus on crashes where Waymo was determined to be responsible.

To assess this, Waymo co-authored a study in December with the insurance giant Swiss Re. It focused on crashes that led to successful insurance claims against Waymo. This data seems particularly credible because third parties, not Waymo, decide when a crash is serious enough to file an insurance claim. And claims adjusters, not Waymo, decide whether to hold Waymo responsible for a crash.

But one downside is that it takes a few months for insurance claims to be filed. So the December report focused on crashes that occurred through July 2024.

Waymo had completed 25 million driverless miles by July 2024. And by the end of November 2024, Waymo had faced only two potentially successful claims for bodily injury. Both claims are pending, which means they could still be resolved in Waymo’s favor.

One of them was this crash that I described at the beginning of my September article about Waymo’s safety record:

On a Friday evening last November, police chased a silver sedan across the San Francisco Bay Bridge. The fleeing vehicle entered San Francisco and went careening through the city’s crowded streets. At the intersection of 11th and Folsom streets, it sideswiped the fronts of two other vehicles, veered onto a sidewalk, and hit two pedestrians.

According to a local news story, both pedestrians were taken to the hospital, with one suffering major injuries. The driver of the silver sedan was injured, as was a passenger in one of the other vehicles. No one was injured in the third car, a driverless Waymo robotaxi.

It seems unlikely that an insurance adjuster will ultimately hold Waymo responsible for these injuries.

The other pending injury claim doesn’t seem like a slam dunk, either. In that case, another vehicle steered into a bike lane before crashing into a Waymo as it was making a left turn.

But let’s assume that both crashes are judged to be Waymo’s fault. That would still be a strong overall safety record.

Based on insurance industry records, Waymo and Swiss Re estimate that human drivers in San Francisco and Phoenix would generate about 26 successful bodily injury claims over 25 million miles of driving. So even if both of the pending claims against Waymo succeed, two injuries represent a more than 90 percent reduction in successful injury claims relative to typical human drivers.

The reduction in property damage claims is almost as dramatic. Waymo’s vehicles generated nine successful or pending property damage claims over its first 25 million miles. Waymo and Swiss Re estimate that human drivers in the same geographic areas would have generated 78 property damage claims. So Waymo generated 88 percent fewer property damage claims than typical human drivers.

Timothy B. Lee was on staff at Ars Technica from 2017 to 2021. Today he writes Understanding AI, a newsletter that explores how AI works and how it’s changing our world. You can subscribe here.

Timothy is a senior reporter covering tech policy and the future of transportation. He lives in Washington DC.

Gemini 2.5 Pro is here with bigger numbers and great vibes

Just a few months after releasing its first Gemini 2.0 AI models, Google is upgrading again. The company says the new Gemini 2.5 Pro Experimental is its “most intelligent” model yet, offering a massive context window, multimodality, and reasoning capabilities. Google points to a raft of benchmarks that show the new Gemini clobbering other large language models (LLMs), and our testing seems to back that up—Gemini 2.5 Pro is one of the most impressive generative AI models we’ve seen.

Gemini 2.5, like all Google’s models going forward, has reasoning built in. The AI essentially fact-checks itself along the way to generating an output. We like to call this “simulated reasoning,” as there’s no evidence that this process is akin to human reasoning. However, it can go a long way toward improving LLM outputs. Google specifically cites the model’s “agentic” coding capabilities as a beneficiary of this process. Gemini 2.5 Pro Experimental can, for example, generate a full working video game from a single prompt. We’ve tested this, and it works with the publicly available version of the model.

Gemini 2.5 Pro builds a game in one step.

Google says a lot of things about Gemini 2.5 Pro: it’s smarter, it’s context-aware, it thinks. But it’s hard to quantify what constitutes improvement in generative AI bots. There are some clear technical upsides, though. Gemini 2.5 Pro comes with a 1 million token context window, which is common for the big Gemini models but massive compared to competing models like OpenAI GPT or Anthropic Claude. You could feed multiple very long books to Gemini 2.5 Pro in a single prompt, while the output maxes out at 64,000 tokens. That’s the same as Flash 2.0, but it’s still objectively a lot of tokens compared to other LLMs.
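For developers who want to test those limits themselves, the model was reachable at launch through Google’s Generative AI Python SDK. Here is a minimal sketch, assuming the google-generativeai package and the experimental model identifier Google listed at release (both could change):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes a Google AI Studio API key

# Model name as listed at launch; treat it as an assumption, not a stable ID.
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

with open("very_long_book.txt") as f:
    text = f.read()

# The 1 million token window means entire books can fit in a single prompt.
print(model.count_tokens(text))

response = model.generate_content(f"Summarize the key arguments:\n\n{text}")
print(response.text)  # responses are capped at 64,000 output tokens
```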

Naturally, Google has run Gemini 2.5 Pro Experimental through a battery of benchmarks, in which it scores a bit higher than other AI systems. For example, it squeaks past OpenAI’s o3-mini in GPQA and AIME 2025, which measure how well the AI answers complex questions about science and math, respectively. It also set a new record in the Humanity’s Last Exam benchmark, which consists of 3,000 questions curated by domain experts. Google’s new AI managed a score of 18.8 percent, compared to OpenAI’s 14 percent.

ESA finally has a commercial launch strategy, but will member states pay?


Late this year, European governments will have the opportunity to pay up or shut up.

The European Space Agency is inviting proposals to inject competition into the European launch market, an important step toward fostering a dynamic industry with multiple players that officials hope will, one day, mimic that of the United States.

The near-term plan for the European Launcher Challenge is for ESA to select companies for service contracts to transport ESA and other European government payloads to orbit from 2026 through 2030. A second component of the challenge is for companies to perform at least one demonstration of an upgraded launch vehicle by 2028. The competition is open to any European company working in the launch business.

“What we expect is that these companies will make a step in improving and upgrading their capacity with respect to what they’re presently working on,” said Toni Tolker-Nielsen, ESA’s acting director of space transportation. “In terms of economics and physics, it’s better to have a bigger launcher than a smaller launcher in terms of price per kilogram to orbit.”

“The ultimate goal is, we should be establishing privately developed competitive launch services in Europe, which will allow us to procure launch services in open competition,” Tolker-Nielsen said in an interview with Ars.

From one to many?

ESA and other European institutions currently have just one European provider, Arianespace, to which they can award launch contracts for the continent’s scientific, Earth observation, navigation, and military satellites. Arianespace operates the Ariane 6 and Vega C rockets. Vega C operations will soon be taken over by Italian aerospace company Avio. Both rockets were developed with ESA funding.

The launcher challenge is modeled on NASA’s use of commercial contracting methods beginning nearly 20 years ago with the agency’s commercial cargo program, which kickstarted the development of SpaceX’s Dragon and Northrop Grumman’s Cygnus resupply freighters for the International Space Station. NASA later applied the same model to commercial crew, and most recently for commercial lunar landers.

Uncharacteristically for ESA, the agency is taking a hands-off approach for the launcher challenge. One of the few major requirements is that the winners should offer a “European launch service” that flies from European territory, which includes the French-run Guiana Space Center in South America.

Europe’s second Ariane 6 rocket lifted off March 6 with a French military spy satellite. Credit: European Space Agency

“We are trying something different, where they are completely free to organize themselves,” Tolker-Nielsen said. “We are not pushing anything. We are in a complete service-oriented model here. That’s the principal difference between the new approach and the old approach.”

ESA also isn’t setting requirements on launcher performance, reusability, or the exact number of companies it will select in the challenge. But ESA would like to limit the number of challengers “to a minimum” to ensure the agency’s support is meaningful, without spreading its funding too thin, Tolker-Nielsen said.

“For the ESA-developed launchers, which are Ariane 6 and Vega C, we own the launch system,” Tolker-Nielsen said. “We finished the development, and the deliverables were the launch systems that we own at ESA, and we make it available to an operator—Arianespace, and Avio soon for Vega C—to exploit.”

These ESA-led launcher projects were expensive. The development of Ariane 6 cost European governments more than $4 billion. Ariane 6 is now flying, but none of the up-and-coming European alternatives is operational.

Next steps

It has taken a while to set up the European Launcher Challenge, which won preliminary approval from ESA’s 23 member states at a ministerial-level meeting in 2023. ESA released an “invitation to tender,” soliciting proposals from European launch companies Monday, with submissions due by May 5. This summer, ESA expects to select the top proposals and prepare a funding package for consideration by its member states at the next ministerial meeting in November.

The top factors ESA will consider in this first phase of the challenge are each proposer’s business plan, technical credibility, and financial credibility.

In a statement, ESA said it has allotted up to 169 million euros ($182 million at today’s exchange rates) per challenger. This is significant funding for Europe’s crop of cash-hungry launch startups, each of which has raised no more than a few hundred million euros. But this allotment comes with a catch. ESA’s leaders and the winners of the launch challenge must persuade their home governments to pay up.

Let’s take a moment to compare Europe’s launch industry with that of the United States.

There are multiple viable US commercial launch companies. In the United States, it’s easier to attract venture capital, the government has been a more reliable proponent of commercial spaceflight, and billionaires are part of the launch landscape. SpaceX, led by Elon Musk, dominates the market. Jeff Bezos’s space company, Blue Origin, and United Launch Alliance are also big players with heavy-lift rockets.

Rocket Lab and Firefly Aerospace fly smaller, privately developed launchers. Northrop Grumman’s medium-class launch division is currently in between rockets, although it still occasionally launches small US military satellites on Minotaur rockets derived from decommissioned ICBMs.

Of course, it’s not surprising that the number of US launch companies is higher than Europe’s. According to the World Bank, the US economy is about 50 percent larger than the European Union’s. But six American companies with operational orbital rockets, compared to one in Europe today? That is woefully out of proportion.

European officials would like to regain a leading position in the global commercial launch market. With SpaceX’s dominance, that’s a tall hill to climb. At the very least, European politicians don’t want to rely on other countries for access to space. In the last three years, they’ve seen their access to Russian launchers dry up after Russia’s invasion of Ukraine, and after signing a few launch contracts with SpaceX to bridge the gap before the first flight of Ariane 6, they now view the US government and Elon Musk as unreliable partners.

Open your checkbook, please

ESA’s governance structure isn’t favorable for taking quick action. On the one hand, ESA member states approve the agency’s budget in multiyear increments, giving its projects a sense of stability over time. On the other, it takes time to get new projects approved, and ESA’s member states expect to receive benefits—jobs, investment, and infrastructure—commensurate with their spending on European space programs. This policy is known as geographical return, or geo-return.

For example, France has placed high strategic importance on fielding an independent European launch capability for more than 60 years. The administration of French President Charles de Gaulle made this determination during the Cold War, around the same time he decided France should have a nuclear deterrent fully independent of the United States and NATO.

In keeping with this policy, France has been more willing than other European nations to invest in launchers. This means the Ariane rocket family, developed and funded through ESA contracts, has been largely a French enterprise since the first Ariane launch in 1979.

This model is becoming antiquated in the era of commercial spaceflight. Startups across Europe, primarily in France, Germany, the United Kingdom, and Spain, are developing small launchers designed to carry up to 1.5 metric tons of payload to low-Earth orbit. This is too small to directly compete with the Ariane 6 rocket, but eventually, these companies would like to develop larger launchers.

Some European officials, including the former head of the French space agency, blamed geo-return as a reason the Ariane 6 rocket missed its price target.

Toni Tolker-Nielsen, ESA’s acting director of space transportation, speaks at an event in 2021. Credit: ESA/V. Stefanelli

With the European Launcher Challenge, ESA will experiment with a new funding model for the first time. This new “fair contribution” approach will see ESA leadership put forward a plan to its member states at the next big ministerial conference in November. The space agency will ask the countries that benefit most from the winners of the launcher challenge to provide the bulk of the funding for the challengers’ contracts.

So, let’s say Isar Aerospace, which is set to launch its first rocket as soon as this week, is one of the challenge winners. Isar is headquartered in Munich, and its current launch site is in Norway. In this case, expect ESA to ask the governments of Germany and Norway to contribute the most money to pay for Isar’s contract.

MaiaSpace, a French subsidiary of ArianeGroup, the parent company of Arianespace, is also a contender in the launcher challenge. MaiaSpace plans to launch from French Guiana. Therefore, if MaiaSpace gets a contract, France would be on the hook for the lion’s share of the deal’s funding.

Tolker-Nielsen said he anticipates a “number” of the launch challengers will win the backing of their home countries in November, but “maybe not all.”

“So, first there is this criteria that they have to be eligible, and then they have to be funded as well,” he said. “We don’t want to propose funding for companies that we don’t see as credible.”

Assuming the challengers’ contracts get funded, ESA will then work with the European Commission to assign specific satellites to launch on the new commercial rockets.

“The way I look at this is we are not going to choose winners,” Tolker-Nielsen said. “The challenge is not the competition we are doing right now. It is to deliver on the contract. That’s the challenge.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Open Source devs say AI crawlers dominate traffic, forcing blocks on entire countries


AI bots hungry for data are taking down FOSS sites by accident, but humans are fighting back.

Software developer Xe Iaso reached a breaking point earlier this year when aggressive AI crawler traffic from Amazon overwhelmed their Git repository service, repeatedly causing instability and downtime. Despite configuring standard defensive measures—adjusting robots.txt, blocking known crawler user-agents, and filtering suspicious traffic—Iaso found that AI crawlers continued evading all attempts to stop them, spoofing user-agents and cycling through residential IP addresses as proxies.

Desperate for a solution, Iaso eventually resorted to moving their server behind a VPN and creating “Anubis,” a custom-built proof-of-work challenge system that forces web browsers to solve computational puzzles before accessing the site. “It’s futile to block AI crawler bots because they lie, change their user agent, use residential IP addresses as proxies, and more,” Iaso wrote in a blog post titled “a desperate cry for help.” “I don’t want to have to close off my Gitea server to the public, but I will if I have to.”
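Anubis is its own codebase, but the core idea is a hashcash-style proof of work: the server issues a random challenge, and the visiting browser must burn CPU finding a nonce whose hash meets a difficulty target before it gets a session. Here is a minimal sketch of that general mechanism in Python (an illustration of the concept, not Anubis’s actual code):

```python
import hashlib
import secrets

def make_challenge() -> str:
    """Server side: issue a random challenge string."""
    return secrets.token_hex(16)

def solve(challenge: str, difficulty: int = 4) -> int:
    """Client side: find a nonce whose SHA-256 digest starts with
    `difficulty` hex zeros. Cheap for one visitor, expensive for a bot
    fleet hammering thousands of pages."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Server side: a single hash suffices to check the client's work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

challenge = make_challenge()
nonce = solve(challenge)         # ~65,000 hashes on average at difficulty 4
assert verify(challenge, nonce)  # costs the server exactly one hash
```

The asymmetry is the point: verification is nearly free for the server, while solving scales linearly with the number of pages a crawler requests.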

Iaso’s story highlights a broader crisis rapidly spreading across the open source community, as what appear to be aggressive AI crawlers increasingly overload community-maintained infrastructure, causing what amounts to persistent distributed denial-of-service (DDoS) attacks on vital public resources. According to a comprehensive recent report from LibreNews, some open source projects now see as much as 97 percent of their traffic originating from AI companies’ bots, dramatically increasing bandwidth costs, service instability, and burdening already stretched-thin maintainers.

Kevin Fenzi, a member of the Fedora Pagure project’s sysadmin team, reported on his blog that the project had to block all traffic from Brazil after repeated attempts to mitigate bot traffic failed. GNOME GitLab implemented Iaso’s “Anubis” system, requiring browsers to solve computational puzzles before accessing content. GNOME sysadmin Bart Piotrowski shared on Mastodon that only about 3.2 percent of requests (2,690 out of 84,056) passed their challenge system, suggesting the vast majority of traffic was automated. KDE’s GitLab infrastructure was temporarily knocked offline by crawler traffic originating from Alibaba IP ranges, according to LibreNews, citing a KDE Development chat.

While Anubis has proven effective at filtering out bot traffic, it comes with drawbacks for legitimate users. When many people access the same link simultaneously—such as when a GitLab link is shared in a chat room—site visitors can face significant delays. Some mobile users have reported waiting up to two minutes for the proof-of-work challenge to complete, according to the news outlet.

The situation isn’t exactly new. In December, Dennis Schubert, who maintains infrastructure for the Diaspora social network, described the situation as “literally a DDoS on the entire internet” after discovering that AI companies accounted for 70 percent of all web requests to their services.

The costs are both technical and financial. The Read the Docs project reported that blocking AI crawlers immediately decreased their traffic by 75 percent, going from 800GB per day to 200GB per day. This change saved the project approximately $1,500 per month in bandwidth costs, according to their blog post “AI crawlers need to be more respectful.”

A disproportionate burden on open source

The situation has created a tough challenge for open source projects, which rely on public collaboration and typically operate with limited resources compared to commercial entities. Many maintainers have reported that AI crawlers deliberately circumvent standard blocking measures, ignoring robots.txt directives, spoofing user agents, and rotating IP addresses to avoid detection.

As LibreNews reported, Martin Owens from the Inkscape project noted on Mastodon that their problems weren’t just from “the usual Chinese DDoS from last year, but from a pile of companies that started ignoring our spider conf and started spoofing their browser info.” Owens added, “I now have a prodigious block list. If you happen to work for a big company doing AI, you may not get our website anymore.”

On Hacker News, commenters in threads about the LibreNews post last week and a post on Iaso’s battles in January expressed deep frustration with what they view as AI companies’ predatory behavior toward open source infrastructure. While these comments come from forum posts rather than official statements, they represent a common sentiment among developers.

As one Hacker News user put it, AI firms are operating from a position that “goodwill is irrelevant” with their “$100bn pile of capital.” The discussions depict a battle between smaller AI startups that have worked collaboratively with affected projects and larger corporations that have been unresponsive despite allegedly forcing thousands of dollars in bandwidth costs on open source project maintainers.

Beyond consuming bandwidth, the crawlers often hit expensive endpoints, like git blame and log pages, placing additional strain on already limited resources. Drew DeVault, founder of SourceHut, reported on his blog that the crawlers access “every page of every git log, and every commit in your repository,” making the attacks particularly burdensome for code repositories.

The problem extends beyond infrastructure strain. As LibreNews points out, some open source projects began receiving AI-generated bug reports as early as December 2023, first reported by Daniel Stenberg of the Curl project on his blog in a post from January 2024. These reports appear legitimate at first glance but contain fabricated vulnerabilities, wasting valuable developer time.

Who is responsible, and why are they doing this?

AI companies have a history of taking without asking. Before the mainstream breakout of AI image generators and ChatGPT attracted attention to the practice in 2022, the machine learning field regularly compiled datasets with little regard for ownership.

While many AI companies engage in web crawling, the sources suggest varying levels of responsibility and impact. Dennis Schubert’s analysis of Diaspora’s traffic logs showed that approximately one-fourth of its web traffic came from bots with an OpenAI user agent, while Amazon accounted for 15 percent and Anthropic for 4.3 percent.

The crawlers’ behavior suggests different possible motivations. Some may be collecting training data to build or refine large language models, while others could be executing real-time searches when users ask AI assistants for information.

The frequency of these crawls is particularly telling. Schubert observed that AI crawlers “don’t just crawl a page once and then move on. Oh, no, they come back every 6 hours because lol why not.” This pattern suggests ongoing data collection rather than one-time training exercises, potentially indicating that companies are using these crawls to keep their models’ knowledge current.

Some companies appear more aggressive than others. KDE’s sysadmin team reported that crawlers from Alibaba IP ranges were responsible for temporarily knocking their GitLab offline. Meanwhile, Iaso’s troubles came from Amazon’s crawler. A member of KDE’s sysadmin team told LibreNews that Western LLM operators like OpenAI and Anthropic were at least setting proper user agent strings (which theoretically allows websites to block them), while some Chinese AI companies were reportedly more deceptive in their approaches.

It remains unclear why these companies don’t adopt more collaborative approaches and, at a minimum, rate-limit their data harvesting runs so they don’t overwhelm source websites. Amazon, OpenAI, Anthropic, and Meta did not immediately respond to requests for comment, but we will update this piece if they reply.

Tarpits and labyrinths: The growing resistance

In response to these attacks, new defensive tools have emerged to protect websites from unwanted AI crawlers. As Ars reported in January, an anonymous creator identified only as “Aaron” designed a tool called “Nepenthes” to trap crawlers in endless mazes of fake content. Aaron explicitly describes it as “aggressive malware” intended to waste AI companies’ resources and potentially poison their training data.

“Any time one of these crawlers pulls from my tarpit, it’s resources they’ve consumed and will have to pay hard cash for,” Aaron explained to Ars. “It effectively raises their costs. And seeing how none of them have turned a profit yet, that’s a big problem for them.”

On Friday, Cloudflare announced “AI Labyrinth,” a similar but more commercially polished approach. Unlike Nepenthes, which is designed as an offensive weapon against AI companies, Cloudflare positions its tool as a legitimate security feature to protect website owners from unauthorized scraping, as we reported at the time.

“When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them,” Cloudflare explained in its announcement. The company reported that AI crawlers generate over 50 billion requests to their network daily, accounting for nearly 1 percent of all web traffic they process.

The community is also developing collaborative tools to help protect against these crawlers. The “ai.robots.txt” project offers an open list of web crawlers associated with AI companies and provides premade robots.txt files that implement the Robots Exclusion Protocol, as well as .htaccess files that return error pages when detecting AI crawler requests.
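To see what those premade files accomplish, here is a short sketch using Python’s standard-library robots.txt parser against an example policy in the style of the ai.robots.txt project (GPTBot and CCBot are real AI crawler user agents; the policy itself is illustrative):

```python
from urllib import robotparser

# Example policy: deny known AI crawlers, allow everyone else.
AI_ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(AI_ROBOTS_TXT.splitlines())

print(rp.can_fetch("GPTBot", "https://example.org/git/log"))  # False
print(rp.can_fetch("Mozilla/5.0", "https://example.org/"))    # True

# Caveat: this only restrains crawlers that honor the Robots Exclusion
# Protocol. As the reports above show, many AI crawlers simply ignore it.
```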

As it currently stands, both the rapid growth of AI-generated content overwhelming online spaces and aggressive web-crawling practices by AI firms threaten the sustainability of essential online resources. The current approach taken by some large AI companies—extracting vast amounts of data from open-source projects without clear consent or compensation—risks severely damaging the very digital ecosystem on which these AI models depend.

Responsible data collection may be achievable if AI firms collaborate directly with the affected communities. However, prominent industry players have shown little incentive to adopt more cooperative practices. Without meaningful regulation or self-restraint by AI firms, the arms race between data-hungry bots and those attempting to defend open source infrastructure seems likely to escalate further, potentially deepening the crisis for the digital ecosystem that underpins the modern Internet.

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

No cloud needed: Nvidia creates gaming-centric AI chatbot that runs on your GPU

Nvidia has seen its fortunes soar in recent years as its AI-accelerating GPUs have become worth their weight in gold. Most people use their Nvidia GPUs for games, but why not use them for both? Nvidia’s newly released, experimental G-Assist AI runs locally on your GPU to help you optimize your PC and get the most out of your games. It can do some neat things, but Nvidia isn’t kidding when it says this tool is experimental.

G-Assist is available in the Nvidia desktop app, and it consists of a floating overlay window. After invoking the overlay, you can either type or speak to G-Assist to check system stats or make tweaks to your settings. You can ask basic questions like, “How does DLSS Frame Generation work?” but it also has control over some system-level settings.

By calling up G-Assist, you can get a rundown of how your system is running, including custom data charts created on the fly by G-Assist. You can also ask the AI to tweak your machine, for example, optimizing the settings for a particular game or toggling on or off a setting. G-Assist can even overclock your GPU if you so choose, complete with a graph of expected performance gains.

Nvidia on G-Assist.

Nvidia demoed G-Assist last year with some impressive features tied to the active game. That version of G-Assist could see what you were doing and offer suggestions about how to reach your next objective. The game integration is sadly quite limited in the public version, supporting just a few games, like Ark: Survival Evolved.

There is, however, support for a number of third-party plug-ins that give G-Assist control over Logitech G, Corsair, MSI, and Nanoleaf peripherals. So, for instance, G-Assist could talk to your MSI motherboard to control your thermal profile or ping Logitech G to change your LED settings.

Napster to become a music-marketing metaverse firm after being sold for $207M

Infinite Reality, a media, ecommerce, and marketing company focused on 3D and AI-powered experiences, has entered an agreement to acquire Napster. That means the brand originally launched in 1999 as a peer-to-peer (P2P) music file-sharing service is set to be reborn once again. This time, new owners are reshaping the brand into one focused on marketing musicians in the metaverse.

Infinite announced today a definitive agreement to buy Napster for $207 million. The Norwalk, Connecticut-based company plans to turn Napster into a “social music platform that prioritizes active fan engagement over passive listening, allowing artists to connect with, own, and monetize the relationship with their fans.” Jon Vlassopulos, who became Napster CEO in 2022, will continue in his role at the brand.

Since 2016, Napster has been operating as a (legal) streaming service. It claims to have over 110 million high-fidelity tracks, with some supporting lossless audio. Napster subscribers can also listen offline and watch music videos. The service currently starts at $11 per month.

Since 2022, Napster has been owned by Web3 and blockchain firms Hivemind and Algorand. Infinite also develops Web3 tech, and CEO John Acunto told CNBC that Algorand’s blockchain background was appealing, as was Napster’s licenses for streaming millions of songs.

Infinite has numerous ideas for getting Napster users to interact with the platform more than they do with the current music streaming service. The company shared goals of using Napster to offer “branded 3D virtual spaces where fans can enjoy virtual concerts, social listening parties, and other immersive and community-based experiences” and more “gamification.” Infinite also wants musicians to use Napster as a platform where fans can purchase tickets for performances, physical and virtual merchandise, and “exclusive digital content.” The 6-year-old firm also plans to give artists the ability to use “AI-powered customer service, sales, and community management agents” and “enhanced analytics dashboards to better understand fan behavior” with Napster.

We’ve outsourced our confirmation biases to search engines

So, the researchers decided to see if they could upend it.

Keeping it general

The simplest way to change the dynamics was to alter the results returned by the search. So the researchers ran a number of experiments where they gave all of the participants the same results, regardless of the search terms they had used. When everybody gets the same results, their opinions after reading them tend to move in the same direction, suggesting that search results can help change people’s opinions.

The researchers also tried giving everyone the results of a broad, neutral search, regardless of the terms they’d entered. This made it less likely that prior beliefs would survive the process of formulating and executing a search. In other words, avoiding the sorts of focused, biased search terms allowed some participants to see information that could change their minds.

Despite all the swapping, participants continued to rate the search results as relevant. So providing more general search results, even when people were looking for more focused information, doesn’t seem to harm people’s perception of the service. In fact, Leung and Urminsky found that the AI version of Bing search would reformulate narrow questions into more general ones.

That said, making this sort of change wouldn’t be without risks. There are a lot of subject areas where a search shouldn’t return a broad range of information—where grabbing a range of ideas would expose people to fringe and false information.

Nevertheless, it can’t hurt to be aware of how we can use search services to reinforce our biases. So, in the words of Leung and Urminsky, “When search engines provide directionally narrow search results in response to users’ directionally narrow search terms, the results will reflect the users’ existing beliefs, instead of promoting belief updating by providing a broad spectrum of related information.”

PNAS, 2025. DOI: 10.1073/pnas.2408175122

As preps continue, it’s looking more likely NASA will fly the Artemis II mission

NASA’s existing architecture still has a limited shelf life, and the agency will probably have multiple options for transporting astronauts to and from the Moon in the 2030s. A decision on the long-term future of SLS and Orion isn’t expected until the Trump administration’s nominee for NASA administrator, Jared Isaacman, takes office after confirmation by the Senate.

So, what is the plan for SLS?

There are different degrees of cancellation options. The most draconian would be an immediate order to stop work on Artemis II preparations. This is looking less likely than it did a few months ago, and it would come with its own costs: untold millions of dollars to disassemble and dispose of parts of Artemis II’s SLS rocket and Orion spacecraft. Canceling multibillion-dollar contracts with Boeing, Northrop Grumman, and Lockheed Martin would put NASA on the hook for significant termination costs.

Of course, these liabilities would be less than the $4.1 billion NASA’s inspector general estimates each of the first four Artemis missions will cost. Most of that money has already been spent for Artemis II, but if NASA spends several billion dollars on each Artemis mission, there won’t be much money left over to do other cool things.

Other options for NASA might be to set a transition point at which the Artemis program would move off the Space Launch System rocket, and perhaps even the Orion spacecraft, and switch to new vehicles.

Looking down on the Space Launch System for Artemis II. Credit: NASA/Frank Michaux

Another possibility, which seems to be low-hanging fruit for Artemis decision-makers, could be to cancel the development of a larger Exploration Upper Stage for the SLS rocket. If there are a finite number of SLS flights on NASA’s schedule, it’s difficult to justify the projected $5.7 billion cost of developing the upgraded Block 1B version of the Space Launch System. There are commercial options available to replace the rocket’s Boeing-built Exploration Upper Stage, as my colleague Eric Berger aptly described in a feature story last year.

For now, it looks like NASA’s orange behemoth has a little life left in it. All the hardware for the Artemis II mission has arrived at the launch site in Florida.

The Trump administration will release its fiscal-year 2026 budget request in the coming weeks. Maybe then NASA will also have a permanent administrator, and the veil will lift over the White House’s plans for Artemis.

You can now download the source code that sparked the AI boom

On Thursday, Google and the Computer History Museum (CHM) jointly released the source code for AlexNet, the convolutional neural network (CNN) that many credit with transforming the AI field in 2012 by proving that “deep learning” could achieve things conventional AI techniques could not.

Deep learning, which uses multi-layered neural networks that can learn from data without explicit programming, represented a significant departure from traditional AI approaches that relied on hand-crafted rules and features.

The Python code, now available on CHM’s GitHub page as open source software, offers AI enthusiasts and researchers a glimpse into a key moment of computing history. AlexNet marked a watershed in AI because it could identify objects in photographs with unprecedented accuracy—correctly classifying images into one of 1,000 categories like “strawberry,” “school bus,” or “golden retriever” with significantly fewer errors than previous systems.
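The original implementation is a product of its era, but a faithful modern re-implementation of the architecture ships with torchvision for anyone who would rather poke at it than read 2012-era code. A minimal sketch, assuming PyTorch and torchvision are installed:

```python
import torch
from torchvision.models import alexnet

# Untrained AlexNet: five convolutional layers plus three fully connected
# layers, ending in the 1,000-way ImageNet classifier described above.
model = alexnet(weights=None, num_classes=1000)
model.eval()

# An ImageNet-sized input: batch of 1, 3 color channels, 224x224 pixels.
image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(image)

print(logits.shape)          # torch.Size([1, 1000])
print(logits.argmax(dim=1))  # index of the predicted category
```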

Like viewing original ENIAC circuitry or plans for Babbage’s Difference Engine, examining the AlexNet code may provide future historians insight into how a relatively simple implementation sparked a technology that has reshaped our world. While deep learning has enabled advances in health care, scientific research, and accessibility tools, it has also facilitated concerning developments like deepfakes, automated surveillance, and the potential for widespread job displacement.

But in 2012, those negative consequences still felt like far-off sci-fi dreams to many. Instead, experts were simply amazed that a computer could finally recognize images with near-human accuracy.

Teaching computers to see

As the CHM explains in its detailed blog post, AlexNet originated from the work of University of Toronto graduate students Alex Krizhevsky and Ilya Sutskever, along with their advisor Geoffrey Hinton. The project proved that deep learning could outperform traditional computer vision methods.

The neural network won the 2012 ImageNet competition by recognizing objects in photos far better than any previous method. Computer vision veteran Yann LeCun, who attended the presentation in Florence, Italy, immediately recognized its importance for the field, reportedly standing up after the presentation and calling AlexNet “an unequivocal turning point in the history of computer vision.” As Ars detailed in November, AlexNet marked the convergence of three critical technologies that would define modern AI.
