

Trump order declares independent US agencies aren’t independent anymore

The White House fact sheet said the goal of this provision is to ensure that the president and attorney general “interpret the law for the executive branch, instead of having separate agencies adopt conflicting interpretations.”

John Bergmayer, legal director of consumer advocacy group Public Knowledge, said Trump’s order is based on a “unitary executive” theory that “has made its way from the fringes of academia to the halls of power.”

“In this latest Executive Order, the Trump regime purports to seize for itself the power Congress delegated to independent regulatory agencies, and as written, declares the White House’s interpretation of the law as ‘authoritative,’ with no mention of the courts,” Bergmayer said. “Of course, the president is not, and never has been, the final arbiter of what is lawful. Lawyers working for the government owe their allegiance to the American people, not to President Donald J. Trump.”

Trump’s OMB director, Russell Vought, told Tucker Carlson in a recent interview that “there are no independent agencies. Congress may have viewed them as such—SEC or the FCC, CFPB, the whole alphabet soup—but that is not something that the Constitution understands. So there may be different strategies with each one of them about how you dismantle them, but as an administration, the whole notion of an independent agency should be thrown out.”

Extending Trump’s grip

Although the president nominates commissioners and appoints chairs at agencies like the FCC, independent agencies are supposed to make their own decisions. A 2023 report by the Congressional Research Service said an independent agency is “a freestanding executive branch organization that is not part of any department or other agency,” and which has “greater autonomy from the President’s leadership and insulation from partisan politics than is typical of executive branch agencies.”

Other independent agencies include the National Labor Relations Board and Consumer Financial Protection Bureau, the report said. Laws approved by Congress specify the authority of independent agencies along with the agencies’ “goals, principles, missions, and mandates,” the report said.


Microsoft shows progress toward real-time AI-generated game worlds

For a while now, many AI researchers have been working to integrate a so-called “world model” into their systems. Ideally, these models could infer a simulated understanding of how in-game objects and characters should behave based on video footage alone, then create fully interactive video that instantly simulates new playable worlds based on that understanding.

Microsoft Research’s new World and Human Action Model (WHAM), revealed today in a paper published in the journal Nature, shows how quickly those models have advanced in a short time. But it also shows how much further we have to go before the dream of AI crafting complete, playable gameplay footage from just some basic prompts and sample video footage becomes a reality.

More consistent, more persistent

Much like Google’s Genie model before it, WHAM starts by training on “ground truth” gameplay video and input data provided by actual players. In this case, that data comes from Bleeding Edge, a four-on-four online brawler released in 2020 by Microsoft subsidiary Ninja Theory. By collecting actual player footage since launch (as allowed under the game’s user agreement), Microsoft gathered the equivalent of seven player-years’ worth of gameplay video paired with real player inputs.

Early in that training process, Microsoft Research’s Katja Hofmann said the model would get easily confused, generating inconsistent clips that would “deteriorate [into] these blocks of color.” After 1 million training updates, though, the WHAM model started showing a basic understanding of complex gameplay interactions, such as a power cell item exploding after three hits from the player or the movements of a specific character’s flight abilities. The results continued to improve as the researchers threw more computing resources and larger models at the problem, according to the Nature paper.

To see just how well the WHAM model generated new gameplay sequences, Microsoft tested the model by giving it up to one second’s worth of real gameplay footage and asking it to generate what subsequent frames would look like based on new simulated inputs. To test the model’s consistency, Microsoft used actual human input strings to generate up to two minutes of new AI-generated footage, which was then compared to actual gameplay results using the Frechet Video Distance metric.
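
Roughly speaking, Frechet Video Distance boils down to a Fréchet distance between Gaussian fits of feature embeddings computed from real and generated clips. As an illustration only (this is not Microsoft’s code, and the video feature extractor is assumed to exist elsewhere), the core computation looks like this:

```python
# Minimal sketch of the Frechet distance underlying FVD. Assumes real_feats and
# gen_feats are (N, D) arrays of embeddings from some pretrained video encoder
# (the encoder itself is out of scope here).
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):  # sqrtm can pick up tiny imaginary parts from numerical noise
        covmean = covmean.real
    diff = mu_r - mu_g
    # ||mu_r - mu_g||^2 + Tr(cov_r + cov_g - 2 * (cov_r @ cov_g)^(1/2))
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```

Lower scores mean the generated footage is statistically closer to the real gameplay it is being compared against.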


Microsoft demonstrates working qubits based on exotic physics

Microsoft’s first entry into quantum hardware comes in the form of Majorana 1, a processor with eight of these qubits.

Given that some of its competitors have hardware that supports over 1,000 qubits, why does the company feel it can still be competitive? Nayak described three key features of the hardware that he feels will eventually give Microsoft an advantage.

The first has to do with the fundamental physics that governs the energy needed to break apart one of the Cooper pairs in the topological superconductor, which could destroy the information held in the qubit. There are a number of ways to potentially increase this energy, from lowering the temperature to making the indium arsenide wire longer. As things currently stand, Nayak said that small changes in any of these can lead to a large boost in the energy gap, making it relatively easy to boost the system’s stability.
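
As a back-of-the-envelope rule (our gloss, not a figure from Nayak), the population of stray quasiparticles that could break a Cooper pair falls off roughly exponentially with the ratio of the gap to the operating temperature:

```latex
% Rough scaling only: \Delta is the energy gap, T the operating temperature.
n_{\mathrm{qp}} \propto e^{-\Delta / k_B T}
```

which is why even a modest increase in the gap, or a modest drop in temperature, can buy a disproportionately large improvement in stability.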

Another key feature, he argued, is that the hardware is relatively small. He estimated that it should be possible to place a million qubits on a single chip. “Even if you put in margin for control structures and wiring and fan out, it’s still a few centimeters by a few centimeters,” Nayak said. “That was one of the guiding principles of our qubits.” So unlike some other technologies, the topological qubits won’t require anyone to figure out how to link separate processors into a single quantum system.

Finally, all the measurements that control the system run through the quantum dot, and controlling that is relatively simple. “Our qubits are voltage-controlled,” Nayak told Ars. “What we’re doing is just turning on and off coupling of quantum dots to qubits to topological nanowires. That’s a digital signal that we’re sending, and we can generate those digital signals with a cryogenic controller. So we actually put classical control down in the cold.”


Streamer completes hitless run of seven FromSoft Soulslikes without leveling up

What now?

In a follow-up stream on Monday, Nico called his latest gaming achievement “by far the most difficult run I have ever completed. We did the same run leveled, but it is not even close to as difficult as the level 1 run. The level 1 run, the difficulty level is just insane.”

Aside from being an incredible individual achievement, Nico’s level 1 God Run helps put FromSoft’s reputation for difficulty into perspective. While these games can punish failure very harshly—and require lots of arcane knowledge to play well—Nico shows that they’re also designed to be fair to players with the steely nerves to attack and dodge with perfect timing.

After almost 2 years of attempts!

IT’S FINALLY DONE!

7 Games, 0 Hits, Character Level 1!

WE DID IT!

Here is the last fight! (and sorry that it is cinder again) 😅https://t.co/0g24MY4wRy

— Nico (@dinossindgeil) February 16, 2025

With his ultimate FromSoft achievement now complete, Nico said he’s “going to take a vacation now. And by vacation, I mean I’ll continue doing hitless runs, I will continue being live every day… but we’re going to do some smaller ones for now.” In the longer term, Nico hinted that he is “going to work on one really big project again,” but wasn’t willing to provide details just yet.

If you want to follow in Nico’s hitless FromSoft footsteps, he puts out instructional videos laying out the specific paths and strategies needed to get through each game (pro tip: it involves a lot of well-timed rolling and memorizing attack and stagger patterns). Nico also recently ranked how hard he finds each game in the God Run, putting Elden Ring in the easiest tier and Dark Souls II in the hardest.


3D map of exoplanet atmosphere shows wacky climate

Last year, astronomers discovered an unusual Earth-size exoplanet they believe has a hemisphere of molten lava, with its other hemisphere tidally locked in perpetual darkness. And at about the same time, a different group discovered a rare small, cold exoplanet with a massive outer companion 100 times the mass of Jupiter.

Meet Tylos

The different layers of the atmosphere on WASP-121b.

This latest research relied on observational data collected by the European Southern Observatory’s (ESO) Very Large Telescope, specifically, a spectroscopic instrument called ESPRESSO that can combine light collected from the four largest VLT telescope units into one signal. The target exoplanet, WASP-121b—aka Tylos—is located in the Puppis constellation about 900 light-years from Earth. One year on Tylos is equivalent to just 30 hours on Earth, thanks to the exoplanet’s proximity to its host star. Since one side always faces the star, it is always scorching, while the exoplanet’s other side is significantly colder.

Those extreme temperature contrasts make it challenging to figure out how energy is distributed in the atmospheric system, and mapping out the 3D structure can help, particularly with determining the vertical circulation patterns that are not easily replicated in our current crop of global circulation models, per the authors. For their analysis, they combined archival ESPRESSO data collected on November 30, 2018, with new data collected on September 23, 2023. They focused on three distinct chemical signatures to probe the deep atmosphere (iron), mid-atmosphere (sodium), and shallow atmosphere (hydrogen).

“What we found was surprising: A jet stream rotates material around the planet’s equator, while a separate flow at lower levels of the atmosphere moves gas from the hot side to the cooler side. This kind of climate has never been seen before on any planet,” said Julia Victoria Seidel of the European Southern Observatory (ESO) in Chile, as well as the Observatoire de la Côte d’Azur in France. “This planet’s atmosphere behaves in ways that challenge our understanding of how weather works—not just on Earth, but on all planets. It feels like something out of science fiction.”

Nature, 2025. DOI: 10.1038/s41586-025-08664-1

Astronomy and Astrophysics, 2025. DOI: 10.1051/0004-6361/202452405  (About DOIs).


Privacy-problematic DeepSeek pulled from app stores in South Korea

In a media briefing held Monday, the South Korean Personal Information Protection Commission indicated that it had paused new downloads of Chinese AI startup DeepSeek’s mobile app within the country. The restriction took effect on Saturday and doesn’t affect South Korean users who already have the app installed on their devices. The DeepSeek service also remains accessible in South Korea via the web.

Per Reuters, PIPC explained that representatives from DeepSeek acknowledged the company had “partially neglected” some of its obligations under South Korea’s data protection laws, which provide South Koreans some of the strictest privacy protections globally.

PIPC investigation division director Nam Seok is quoted by the Associated Press as saying DeepSeek “lacked transparency about third-party data transfers and potentially collected excessive personal information.” DeepSeek reportedly has dispatched a representative to South Korea to work through any issues and bring the app into compliance.

It’s unclear how long the app will remain unavailable in South Korea, with PIPC saying only that the privacy issues it identified with the app might take “a considerable amount of time” to resolve.

Western infosec sources have also expressed dissatisfaction with aspects of DeepSeek’s security. Mobile security company NowSecure reported two weeks ago that the app sends information unencrypted to servers located in China and controlled by TikTok owner ByteDance; the week before that, another security company found an open, web-accessible database filled with DeepSeek customer chat history and other sensitive data.

Ars attempted to ask DeepSeek’s DeepThink (R1) model about the Tiananmen Square massacre or its favorite “Winnie the Pooh” movie, but the LLM continued to have no comment.


Reddit mods are fighting to keep AI slop off subreddits. They could use help.


Mods ask Reddit for tools as generative AI gets more popular and inconspicuous.

Redditors in a treehouse with a NO AI ALLOWED sign

Credit: Aurich Lawson (based on a still from Getty Images)

Like it or not, generative AI is carving out its place in the world. And some Reddit users are definitely in the “don’t like it” category. While some subreddits openly welcome AI-generated images, videos, and text, others have responded to the growing trend by banning most or all posts made with the technology.

To better understand the reasoning and obstacles associated with these bans, Ars Technica spoke with moderators of subreddits that totally or partially ban generative AI. Almost all these volunteers described moderating against generative AI as a time-consuming challenge they expect to get more difficult as time goes on. And most are hoping that Reddit will release a tool to help their efforts.

It’s hard to know how much AI-generated content is actually on Reddit, and getting an estimate would be a large undertaking. Image library Freepik has analyzed the use of AI-generated content on social media but leaves Reddit out of its research because “it would take loads of time to manually comb through thousands of threads within the platform,” spokesperson Bella Valentini told me. For its part, Reddit doesn’t publicly disclose how many Reddit posts involve generative AI use.

To be clear, we’re not suggesting that Reddit has a large problem with generative AI use. By now, many subreddits seem to have agreed on their approach to AI-generated posts, and generative AI has not superseded the real, human voices that have made Reddit popular.

Still, mods largely agree that generative AI will likely get more popular on Reddit over the next few years, making generative AI modding increasingly important to both moderators and general users. Generative AI’s rising popularity has also had implications for Reddit the company, which in 2024 started licensing Reddit posts to train the large language models (LLMs) powering generative AI.

(Note: All the moderators I spoke with for this story requested that I use their Reddit usernames instead of their real names due to privacy concerns.)

No generative AI allowed

When it comes to anti-generative AI rules, numerous subreddits have zero-tolerance policies, while others permit posts that use generative AI if it’s combined with human elements or is executed very well. These rules task mods with identifying posts using generative AI and determining if they fit the criteria to be permitted on the subreddit.

Many subreddits have rules against posts made with generative AI because their mod teams or members consider such posts “low effort” or believe AI is antithetical to the subreddit’s mission of providing real human expertise and creations.

“At a basic level, generative AI removes the human element from the Internet; if we allowed it, then it would undermine the very point of r/AskHistorians, which is engagement with experts,” the mods of r/AskHistorians told me in a collective statement.

The subreddit’s goal is to provide historical information, and its mods think generative AI could make information shared on the subreddit less accurate. “[Generative AI] is likely to hallucinate facts, generate non-existent references, or otherwise provide misleading content,” the mods said. “Someone getting answers from an LLM can’t respond to follow-ups because they aren’t an expert. We have built a reputation as a reliable source of historical information, and the use of [generative AI], especially without oversight, puts that at risk.”

Similarly, Halaku, a mod of r/wheeloftime, told me that the subreddit’s mods banned generative AI because “we focus on genuine discussion.” Halaku believes AI content can’t facilitate “organic, genuine discussion” and “can drown out actual artwork being done by actual artists.”

The r/lego subreddit banned AI-generated art because it caused confusion in online fan communities and retail stores selling Lego products, r/lego mod Mescad said. “People would see AI-generated art that looked like Lego on [I]nstagram or [F]acebook and then go into the store to ask to buy it,” they explained. “We decided that our community’s dedication to authentic Lego products doesn’t include AI-generated art.”

Not all of Reddit is against generative AI, of course. Subreddits dedicated to the technology exist, and some general subreddits permit the use of generative AI in some or all forms.

“When it comes to bans, I would rather focus on hate speech, Nazi salutes, and things that actually harm the subreddits,” said 3rdusernameiveused, who moderates r/consoom and r/TeamBuilder25, which don’t ban generative AI. “AI art does not do that… If I was going to ban [something] for ‘moral’ reasons, it probably won’t be AI art.”

“Overwhelmingly low-effort slop”

Some generative AI bans reflect concerns that people are not being properly compensated for the content they create, which is then fed into LLM training.

Mod Mathgeek007 told me that r/DeadlockTheGame bans generative AI because its members consider it “a form of uncredited theft,” adding:

You aren’t allowed to sell/advertise the work of others, and AI in a sense is using patterns derived from the work of others to create mockeries. I’d personally have less of an issue with it if the artists involved were credited and compensated—and there are some niche AI tools that do this.

Other moderators simply think generative AI reduces the quality of a subreddit’s content.

“It often just doesn’t look good… the art can often look subpar,” Mathgeek007 said.

Similarly, r/videos bans most AI-generated content because, according to its announcement, the videos are “annoying” and “just bad video” 99 percent of the time. In an online interview, r/videos mod Abrownn told me:

It’s overwhelmingly low-effort slop thrown together simply for views/ad revenue. The creators rarely care enough to put real effort into post-generation [or] editing of the content [and] rarely have coherent narratives [in] the videos, etc. It seems like they just throw the generated content into a video, export it, and call it a day.

An r/fakemon mod told me, “I can’t think of anything more low-effort in terms of art creation than just typing words and having it generated for you.”

Some moderators say generative AI helps people spam unwanted content on a subreddit, including posts that are irrelevant to the subreddit and posts that attack users.

“[Generative AI] content is almost entirely posted for purely self promotional/monetary reasons, and we as mods on Reddit are constantly dealing with abusive users just spamming their content without regard for the rules,” Abrownn said.

A moderator of the r/wallpaper subreddit, which permits generative AI, disagrees. The mod told me that generative AI “provides new routes for novel content” in the subreddit and questioned concerns about generative AI stealing from human artists or offering lower-quality work, saying those problems aren’t unique to generative AI:

Even in our community, we observe human-generated content that is subjectively low quality (poor camera/[P]hotoshopping skills, low-resolution source material, intentional “shitposting”). It can be argued that AI-generated content amplifies this behavior, but our experience (which we haven’t quantified) is that the rate of such behavior (whether human-generated or AI-generated content) has not changed much within our own community.

But we’re not a very active community—[about] 13 posts per day … so it very well could be a “frog in boiling water” situation.

Generative AI “wastes our time”

Many mods are confident in their ability to effectively identify posts that use generative AI. A bigger problem is how much time it takes to identify these posts and remove them.

The r/AskHistorians mods, for example, noted that all bans on the subreddit (including bans unrelated to AI) have “an appeals process,” and “making these assessments and reviewing AI appeals means we’re spending a considerable amount of time on something we didn’t have to worry about a few years ago.”

They added:

Frankly, the biggest challenge with [generative AI] usage is that it wastes our time. The time spent evaluating responses for AI use, responding to AI evangelists who try to flood our subreddit with inaccurate slop and then argue with us in modmail [direct messages sent to a subreddit’s mod team], and discussing edge cases could better be spent on other subreddit projects, like our podcast, newsletter, and AMAs, … providing feedback to users, or moderating input from users who intend to positively contribute to the community.

Several other mods I spoke with agree. Mathgeek007, for example, named “fighting AI bros” as a common obstacle. And for r/wheeloftime moderator Halaku, the biggest challenge in moderating against generative AI is “a generational one.”

“Some of the current generation don’t have a problem with it being AI because content is content, and [they think] we’re being elitist by arguing otherwise, and they want to argue about it,” they said.

A couple of mods noted that it’s less time-consuming to moderate subreddits that ban generative AI than it is to moderate those that allow posts using generative AI, depending on the context.

“On subreddits where we allowed AI, I often take a bit longer time to actually go into each post where I feel like… it’s been AI-generated to actually look at it and make a decision,” explained N3DSdude, a mod of several subreddits with rules against generative AI, including r/DeadlockTheGame.

MyarinTime, a moderator for r/lewdgames, which allows generative AI images, highlighted the challenges of identifying human-prompted generative AI content versus AI-generated content prompted by a bot:

When the AI bomb started, most of those bots started using AI content to work around our filters. Most of those bots started showing some random AI render, so it looks like you’re actually talking about a game when you’re not. There’s no way to know when those posts are legit games unless [you check] them one by one. I honestly believe it would be easier if we kick any post with [AI-]generated image… instead of checking if a button was pressed by a human or not.

Mods expect things to get worse

Most mods told me it’s pretty easy for them to detect posts made with generative AI, pointing to the distinct tone and favored phrases of AI-generated text. A few said that AI-generated video is harder to spot but still detectable. But as generative AI gets more advanced, moderators are expecting their work to get harder.

In a joint statement, r/dune mods Blue_Three and Herbalhippie said, “AI used to have a problem making hands—i.e., too many fingers, etc.—but as time goes on, this is less and less of an issue.”

R/videos’ Abrownn also wonders how easy it will be to detect AI-generated Reddit content “as AI tools advance and content becomes more lifelike.”

Mathgeek007 added:

AI is becoming tougher to spot and is being propagated at a larger rate. When AI style becomes normalized, it becomes tougher to fight. I expect generative AI to get significantly worse—until it becomes indistinguishable from ordinary art.

Moderators currently use various methods to fight generative AI, but they’re not perfect. r/AskHistorians mods, for example, use “AI detectors, which are unreliable, problematic, and sometimes require paid subscriptions, as well as our own ability to detect AI through experience and expertise,” while N3DSdude pointed to tools like Quid and GPTZero.
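
As a crude illustration of what filtering for “favored phrases” can look like (this is not what any of these mod teams actually runs, and the phrase list is our own assumption), a first-pass check might be nothing more than a regular expression sweep:

```python
# Toy example: queue posts that lean on phrases commonly associated with LLM
# output for human mod review. Purely illustrative; real moderation relies on
# judgment, context, and tools well beyond a phrase list.
import re

TELLTALE_PHRASES = [
    r"\bas an ai language model\b",
    r"\bdelve into\b",
    r"\bin conclusion,\b",
    r"\bit'?s important to note\b",
]
PATTERN = re.compile("|".join(TELLTALE_PHRASES), re.IGNORECASE)

def flag_for_review(post_text: str) -> bool:
    """Return True if the post should be sent to the human review queue."""
    return bool(PATTERN.search(post_text))

print(flag_for_review("It's important to note that the castle was built in 1200."))  # True
```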

To manage current and future work around blocking generative AI, most of the mods I spoke with said they’d like Reddit to release a proprietary tool to help them.

“I’ve yet to see a reliable tool that can detect AI-generated video content,” Abrownn said. “Even if we did have such a tool, we’d be putting hundreds of hours of content through the tool daily, which would get rather expensive rather quickly. And we’re unpaid volunteer moderators, so we will be outgunned shortly when it comes to detecting this type of content at scale. We can only hope that Reddit will offer us a tool at some point in the near future that can help deal with this issue.”

A Reddit spokesperson told me that the company is evaluating what such a tool could look like. But Reddit doesn’t have a rule banning generative AI overall, and the spokesperson said the company doesn’t want to release a tool that would hinder expression or creativity.

For now, Reddit seems content to rely on moderators to remove AI-generated content when appropriate. Reddit’s spokesperson added:

Our moderation approach helps ensure that content on Reddit is curated by real humans. Moderators are quick to remove content that doesn’t follow community rules, including harmful or irrelevant AI-generated content—we don’t see this changing in the near future.

Making a generative AI Reddit tool wouldn’t be easy

Reddit is handling the evolving concerns around generative AI as it has handled other content issues, including by leveraging AI and machine learning tools. Reddit’s spokesperson said that this includes testing tools that can identify AI-generated media, such as images of politicians.

But making a proprietary tool that allows moderators to detect AI-generated posts won’t be easy, if it happens at all. The current tools for detecting generative AI are limited in their capabilities, and as generative AI advances, Reddit would need to provide tools that are more advanced than the AI-detecting tools that are currently available.

That would require a good deal of technical resources and would also likely present notable economic challenges for the social media platform, which only became profitable last year. And as noted by r/videos moderator Abrownn, tools for detecting AI-generated video still have a long way to go, making a Reddit-specific system especially challenging to create.

But even with a hypothetical Reddit tool, moderators would still have their work cut out for them. And because Reddit’s popularity is largely due to its content from real humans, that work is important.

Since Reddit’s inception, that has meant relying on moderators, which Reddit has said it intends to keep doing. As r/dune mods Blue_Three and herbalhippie put it, it’s in Reddit’s “best interest that much/most content remains organic in nature.” After all, Reddit’s profitability has a lot to do with how much AI companies are willing to pay to access Reddit data. That value would likely decline if Reddit posts became largely AI-generated themselves.

But providing the technology to ensure that generative AI isn’t abused on Reddit would be a large challenge. For now, volunteer laborers will continue to bear the brunt of generative AI moderation.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.


Trump has thrown a wrench into a national EV charging program


Electric charging projects have been thrown into chaos by the administration’s directive.

A row of happy EVs charge with no drama, no phone calls to the support line, and no one shuffling spots. Credit: Roberto Baldwin

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy and the environment. Sign up for their newsletter here.

For now, Priester’s will have to stick to its famous pecans in Fort Deposit, Alabama. But maybe not for long.

Priester’s Pecans, an Alabama staple, is one of more than half a dozen sites across the state slated to receive millions of dollars in federal funding to expand access to chargers for electric vehicles.

Across the country, the National Electric Vehicle Infrastructure (NEVI) program, part of the 2021 Infrastructure Investment and Jobs Act signed into law under then-President Joe Biden, is set to provide $5 billion to states for projects that expand the nation’s EV charging infrastructure.

But in a February 6 letter, a Trump administration official notified state directors of transportation that, effectively, they can’t spend it. The Federal Highway Administration rescinded guidance on the funds, which had been allocated by Congress, and “is also immediately suspending the approval of all State Electric Vehicle Infrastructure Deployment plans for all fiscal years,” the letter said.

“Therefore, effective immediately, no new obligations may occur under the NEVI Formula Program until the updated final NEVI Formula Program Guidance is issued and new State plans are submitted and approved.”

POLITICO reported on Wednesday that a DOT spokesman said in an email that states were free to use a small portion of the funding—about $400 million—because that was money the states had already “obligated,” or awarded to subcontractors. But that would still leave close to 90 percent of the funding up in the air.

Even before the administration had issued its letter, some Republican-led states, including Alabama, had already announced pauses to their states’ implementation of the national EV charging program.

“In response to Unleashing American Energy, one of several Executive Orders that President Trump signed on January 20, 2025, the Alabama Department of Economic and Community Affairs has paused the National Electric Vehicle Infrastructure (NEVI) Program as of January 28, 2025,” the Alabama agency responsible for implementing NEVI posted on its website. “In addition, for applications for funding that were originally due on March 17, 2025, ADECA has closed the application window until further notice.”

Despite the announcement by the Trump administration, however, legal experts and those familiar with the electric charging program at issue say the president does not have the power to permanently nix the NEVI program.

“NEVI funding was appropriated by Congress as part of the bipartisan infrastructure law, and it cannot be canceled by the executive branch,” said Elizabeth Turnbull, director of policy and regulatory affairs at the Alliance for Transportation Electrification, a trade group for the electric vehicle industry. “It’s not clear that the secretary of transportation has the authority to revoke states’ NEVI plans, and it’s quite clear that the executive branch lacks the authority to withhold the funding for any sustained period. So, we expect recent executive branch actions to be successfully challenged in court.”

Even under the most aggressive arguments for a strong executive branch, the Supreme Court has stated clearly that the Constitution gives Congress the sole authority to appropriate and legislate.

Lawmakers, too, have weighed in on the legality of the Trump administration’s NEVI directive, saying officials acted with “blatant disregard for the law.”

In a letter to administration officials, Democratic members of the Senate Committee on Environment and Public Works urged the Department of Transportation to retract its February 6 letter and “implement the law according to your responsibilities.”

The Democrats’ letter also asked for responses to questions about the legal basis for the action and for information about the involvement of individuals associated with Elon Musk’s so-called “Department of Government Efficiency.” DOGE is not an official department, and multiple reports show that Musk’s team has been dismantling parts or all of some federal agencies.

Tesla, Musk’s electric vehicle company, currently has the largest network of fast chargers in the country. It’s not yet clear if any new policies on NEVI, or the pause on building out a more robust network for all EV drivers, could benefit Tesla.

The Department of Transportation, the Federal Highway Administration’s parent agency, did not respond to a request for comment.

With or without NEVI, the move toward the electrification of transportation is inevitable, experts say. But they warn that although the administration’s pause of the program will likely be reversed by the courts, even a temporary delay in EV charging infrastructure can harm the nation’s ability to quickly and efficiently transition to electric vehicles. And the Trump administration ignored an earlier court order to lift a broad freeze on federal funds, a federal judge ruled this week.

Meanwhile, Trump’s NEVI freeze has sown confusion across the country, with EV stakeholders and state governments scrambling to figure out what the funding pause will mean and how to respond.

Beyond Alabama, interviews across the country found officials in deep red Wyoming contemplating a possible return of funds, while those in progressive states like Illinois and Maryland remain firmly committed to the EV buildout, with or without federal funding. In purple North Carolina, officials are in limbo, having already spent some NEVI funds, but not sure how to proceed with the next round of projects.

Alabama

In Alabama, officials had already announced plans to fund more than a dozen chargers at sites across the state along interstates and major highways, including installing two dual-port chargers at eight Love’s Travel Stops and another at Priester’s Pecans off I-65 in Fort Deposit.

At the time, state officials, including Republican Gov. Kay Ivey, praised the funding.

“Having strategic electric vehicle charging stations across Alabama not only benefits EV drivers, but it also benefits those companies that produce electric vehicles, including many of them right here in Alabama, resulting in more high-paying jobs for Alabamians,” Ivey said when the funding allocation was announced in July 2024. “This latest round of projects will provide added assurance that Alabamians and travelers to our state who choose electric vehicles can travel those highways and know a charging station is within a reliable distance on their routes.”

In total, Alabama was set to receive $79 million in funding through the program, including $2.4 million to expand training programs for the installation, testing, operation, and maintenance of EVs and EV chargers at Bevill State Community College in the central part of the state. The college did not respond to a request for comment on whether the money had been disbursed to the institution before the announced pause.

In an email exchange this week, a spokesperson for the Alabama Department of Economic and Community Affairs confirmed what the agency had posted to its website in the wake of Trump’s inauguration—that the state would pause NEVI projects and await further guidance from the Trump administration.

Even with a pause, however, stakeholders in Alabama and across the country have expressed a commitment to continuing the expansion of electric vehicle charging infrastructure.

For its part, Love’s Travel Stops, a 42-state chain that had been set to receive more than $5.8 million in funding for EV chargers in Alabama alone, said it will continue to roll out electric chargers at locations nationwide.

“Love’s remains committed to meeting customers’ needs regardless of fuel type and believes a robust electric vehicle charging network is a part of that,” Kim Okafor, general manager of zero emissions for Love’s, said in an emailed statement. “Love’s will continue to monitor related executive orders and subsequent changes in law to determine the next steps. This includes the Alabama Department of Transportation’s Electric Vehicle charging plan timelines.”

The state of Alabama, meanwhile, has its own EV charger program apart from NEVI that has already funded millions of dollars worth of charging infrastructure.

In January, even after its announced pause of NEVI implementation, the Alabama Department of Economic and Community Affairs announced the awarding of six grants totaling $2.26 million from state funds for the construction of EV chargers in Huntsville, Hoover, Tuscaloosa, and Mobile.

“The installation of electric vehicle charging stations at places like hotels are investments that can attract customers and add to local economies,” ADECA Director Kenneth Boswell said at the time.

North Carolina

In North Carolina, the full buildout of the state’s electric charging network under NEVI is in limbo just four months after the NC Department of Transportation announced the initial recipients of the funds.

NC DOT spokesman Jamie Kritzer said that based on the federal government’s directive, the agency is continuing with awarded projects but “pausing” the next round of requests for proposals, as well as future phases of the buildout.

If that pause were to become permanent, the state would be forced to abandon $103 million in federal infrastructure money that would have paid for an additional 41 stations to be built as part of Phase 1.

Last September the state announced it had awarded nearly $6 million to six companies to build nine public charging stations. Locations include shopping centers, travel plazas, and restaurants, most of them in economically disadvantaged communities.

NEVI requires that EV charging stations in the first phase be installed every 50 miles along the federally approved alternative fuel corridors and within one mile of those routes. The state has also prioritized Direct Current Fast Charging (DCFC) stations, which can charge a vehicle to 80 percent in 20 to 30 minutes.

The NEVI program is structured to reimburse private companies for up to 80 percent of the cost to construct and operate electric vehicle charging stations for five years, after which the charging stations will continue to operate without government support, according to the state DOT.

The state estimated it would have taken two to three years to finish Phase 1.

Under Phase 2, the state would award federal funds to build community-level electric vehicle charging stations, farther from the major highways, including in disadvantaged communities.

That is particularly important in North Carolina, which has the second-largest rural population in the US in terms of percentage. A third of the state’s residents live in rural areas, which are underserved by electric vehicle charging stations.

There are already more than 1,700 public electric charging stations and 4,850 ports in North Carolina, according to the US Department of Energy’s Alternative Fuels Data Center. But they aren’t evenly dispersed throughout the state. Alleghany and Ashe counties, in the western mountains, have just one charging station each.

Vickie Atkinson, who lives in the country between Chapel Hill and Pittsboro in central North Carolina, drives a plug-in hybrid Ford Escape, which can run on either its electric motor or its gas engine, unlike fully electric models, which have no gas option. Plug-in hybrids typically have fully electric ranges of 35 to 40 miles.

“I try to drive on battery whenever possible,” Atkinson said. But she’s frustrated that she can’t drive from her home to downtown Siler City and back—a 60-mile round trip—without resorting to the gas engine. There are two chargers on the outskirts along US 64—only one of them is a fast charger—but none downtown.

“I really hope the chargers are installed,” Atkinson said. “I fear they won’t and I find that very frustrating.”

Former Gov. Roy Cooper, a Democrat, advocated for wider adoption of electric vehicles and infrastructure. In a 2018 executive order, Cooper established a benchmark of 80,000 registered zero-emission vehicles in the state by 2025.

North Carolina met that goal. State DOT registration data shows there were 81,658 electric vehicles and 24,457 plug-in hybrids as of September, the latest figures available.

Cooper issued a subsequent executive order in 2022 that set a more aggressive goal: 1.2 million registered electric vehicles by 2030. At the current pace of electric vehicle adoption, it’s unlikely the state will achieve that benchmark.

The electric vehicle industry is an economic driver in North Carolina. Toyota just opened a $13.9 billion battery plant in the small town of Liberty and says it will create about 5,100 new jobs. The company is scheduled to begin shipping batteries in April.

Natron Energy is building a plant in Edgecombe County, east of Raleigh, to manufacture sodium-ion batteries for electric vehicles. Experts say they are cheaper and environmentally superior to lithium-ion batteries and less likely to catch fire, although they store less energy.

The global company Kempower opened its first North American factory in Durham, where it builds charging infrastructure. Jed Routh, its vice president of markets and products for North America, said that while “the rapidly shifting market is difficult to forecast and interest in electric vehicles may slow at times over the next four years, we don’t expect it to go away. We believe that the industry will remain strong and Kempower remains committed to define, produce, and improve EV charging infrastructure throughout North America.”

North Carolina does have a separate funding source for electric charging stations that is protected from the Trump administration’s program cuts and cancellations. The state received $92 million from Volkswagen, part of the EPA’s multi-billion-dollar national settlement in 2016 with the car company, which had installed software in some of its diesel cars to cheat on emissions tests.

The Department of Environmental Quality used the settlement money to pay for 994 EV charging ports at 318 sites in North Carolina. The agency expects to add more charging stations with $1.8 million in unspent settlement funds.

Electrify America was created by the Volkswagen Group of America to implement a $2 billion portion of the settlement. It required the car company to invest in electric charging infrastructure and in the promotion of electric and plug-in hybrid vehicles.

Electrify America operates 20 NEVI-compliant, high-speed charging stations in North Carolina, using the settlement money. However, the funding pause could affect the company because it works with potential site developers and small businesses to comply with the NEVI requirements.

The company is still reviewing the details in the federal memo, company spokeswoman Tara Geiger said.

“Electrify America continues to engage with stakeholders to understand developments impacting the National Electric Vehicle Infrastructure program,” Geiger wrote in an email. “We remain committed to growing our coast-to-coast Hyper-Fast network to support transportation electrification.”

Wyoming

In Wyoming, Doug McGee, a state Department of Transportation spokesperson, said the agency is taking a wait-and-see approach to NEVI moving forward and is not ruling out a return of funding. About half a dozen people at the department handle NEVI along with other daily responsibilities, McGee said, and it will be easy for them to put NEVI on hold while they await further instruction.

The department was in the process of soliciting proposals for EV charging stations and has not yet spent any money under NEVI. “There was very little to pause,” McGee said.

Across 6,800 miles of highway in Wyoming, there are 110 public EV charging stations, making the state’s EV infrastructure the third-smallest in the country, ahead of charging networks in only North Dakota and Alaska.

Illinois

More progressive states, including Illinois, have explicitly said they will redouble their efforts to support the expansion of EV charging infrastructure in the wake of the Trump administration’s NEVI pause.

The state of Illinois has said it remains committed to the goal of helping consumers and the public sector transition to EVs in 2025 through state funding sources, even if some NEVI projects are halted.

Commonwealth Edison Co. (ComEd), the largest electric utility in Illinois and the primary electric provider in Chicago, also announced a $100 million rebate program on Feb. 6 at the Chicago Auto Show, funds that are currently available to boost EV adoption throughout the state.

The funds are for residential EV charger and installation costs, all-electric fleet vehicles, and charging infrastructure in both the public and private sectors.

According to Cristina Botero, senior manager for beneficial electrification at ComEd, the rebate is part of a total investment of $231 million from ComEd as part of its Beneficial Electrification plan programs to promote electrification and EV adoption.

While the $231 million won’t be impacted by the Trump administration’s order, other EV projects funded by NEVI are halted. In 2022, for example, $148 million from NEVI was set to be disbursed in Illinois over the course of five years, focusing on Direct Current Fast Charging to fulfill the requirement to build charging stations every 50 miles, according to the Illinois Department of Transportation.

“We are still in the process of reviewing the impacts of last week’s order and evaluating next steps going forward,” said Maria Castaneda, spokesperson at IDOT, in an emailed statement.

The NEVI funds were also set to help achieve Gov. J.B. Pritzker’s goal to have 1 million EVs on Illinois roads by 2030. Officials estimated that at least 10,000 EV charging stations are needed in order to achieve this 2030 goal. Last fall, there were 1,200 charging stations open to the public.

In January, Illinois was awarded federal funds totaling $114 million from the US Department of Transportation to build 14 truck charging hubs, adding to the statewide charging infrastructure.

According to Brian Urbaszewski, director of environmental health programs for the Respiratory Health Association, most of that funding is either frozen or at risk.

However, programs like the recent ComEd rebate will not be impacted. “This is at the state level and not dictated by federal policy,” Botero said.

Maryland

In Maryland, state officials are trying to assess the fallout and find alternative ways to keep EV infrastructure efforts alive. The outcome hinges on new federal guidance and potential legal battles over the suspension.

Maryland is allocated $63 million over five years under NEVI. The Maryland Department of Transportation (MDOT) launched the first $12.1 million round last summer to build 126 fast-charging ports at 22 sites across many of the state’s counties. At least some are expected to be operational by late 2025.

In December, MDOT issued a new call for proposals for building up to 29 additional highway charging stations, expecting stable federal support. At the time, senior MDOT officials told Inside Climate News they were confident in the program’s security since it was authorized under law.

But Trump’s funding pause has upended those plans.

“The Maryland Department of Transportation is moving forward with its obligated NEVI funding and is awaiting new guidance from the U.S. Department of Transportation to advance future funding rounds,” said Carter Elliott, a spokesperson for Gov. Wes Moore, in an emailed statement.

The Moore administration reaffirmed its commitment to EV expansion, calling charging essential to reducing consumer costs and cutting climate pollution. “Gov. Moore is committed to making the state more competitive by pressing forward with the administration’s strategy to deliver charging infrastructure for clean cars to drivers across the state,” the statement added.

In written comments, an MDOT spokesperson said the agency is determining its options for future funding needs and solicitations.

Katherine García, director of the Sierra Club’s Clean Transportation for All program, said that freezing the EV charging funds was an unsound and illegal move by the Trump administration. “This is an attack on bipartisan funding that Congress approved years ago and is driving investment and innovation in every state,” she said.

She said that the NEVI program is helping the US build out the infrastructure needed to support the transition to vehicles that don’t pollute the air.

The Sierra Club’s Josh Stebbins lamented the slow pace of the EV charger buildout across the state. “We are not sure when Maryland’s NEVI chargers will be operational,” he said. “States must move faster and accelerate the installation of NEVI stations. It has been frustratingly slow, and the public needs to see a return on its investment.”

Maryland’s EV ambitions are high-stakes. Transportation remains the state’s largest source of greenhouse gas emissions, and public officials and advocates see EV adoption as critical to meeting the state’s net-zero carbon goal by 2045. NEVI is also a key plank of the state’s broader Zero Emission Vehicle Infrastructure Planning initiative, designed to accelerate the transition away from fossil fuels.

What happens next

As litigation is brought over the Trump administration’s pause on NEVI funds, experts like Turnbull of the Alliance for Transportation Electrification believe the United States remains, despite this bump, on the road toward electrification.

“We are not shifting into reverse,” Turnbull said. “The EV market will continue to grow across all market segments driven by market innovation and consumer demand, both within the United States and globally. By pretending the EV transition doesn’t exist, this administration risks the US’s global competitiveness, national security, and economic growth.”



What we know about AMD and Nvidia’s imminent midrange GPU launches

The GeForce RTX 5090 and 5080 are both very fast graphics cards—if you can look past the possibility that we may have yet another power-connector-related overheating problem on our hands. But the vast majority of people (including you, discerning and tech-savvy Ars Technica reader) won’t be spending $1,000 or $2,000 (or $2,750 or whatever) on a new graphics card this generation.

No, statistically, you (like most people) will probably end up buying one of the more affordable midrange Nvidia or AMD cards, GPUs that are all slated to begin shipping later this month or early in March.

There has been a spate of announcements on that front this week. Nvidia announced yesterday that the GeForce RTX 5070 Ti, which the company previously introduced at CES, would be available starting on February 20 for $749 and up. The new GPU, like the RTX 5080, looks like a relatively modest upgrade from last year’s RTX 4070 Ti Super. But it ought to at least flirt with affordability for people who are looking to get natively rendered 4K without automatically needing to enable DLSS upscaling to get playable frame rates.

|  | RTX 5070 Ti | RTX 4070 Ti Super | RTX 5070 | RTX 4070 Super |
| --- | --- | --- | --- | --- |
| CUDA Cores | 8,960 | 8,448 | 6,144 | 7,168 |
| Boost Clock | 2,452 MHz | 2,610 MHz | 2,512 MHz | 2,475 MHz |
| Memory Bus Width | 256-bit | 256-bit | 192-bit | 192-bit |
| Memory Bandwidth | 896 GB/s | 672 GB/s | 672 GB/s | 504 GB/s |
| Memory size | 16GB GDDR7 | 16GB GDDR6X | 12GB GDDR7 | 12GB GDDR6X |
| TGP | 300 W | 285 W | 250 W | 220 W |
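
The memory bandwidth row follows directly from bus width and per-pin data rate. As a quick sanity check (assuming the commonly cited ~28 Gbps effective rate for GDDR7 and ~21 Gbps for GDDR6X, figures drawn from general memory specs rather than this article):

```python
# Peak memory bandwidth (GB/s) = bus width in bits * per-pin data rate in Gbps / 8.
# The 28 and 21 Gbps rates are assumptions based on typical GDDR7/GDDR6X speeds.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(256, 28))  # 896.0 -> matches the RTX 5070 Ti row
print(peak_bandwidth_gbs(192, 28))  # 672.0 -> matches the RTX 5070 row
print(peak_bandwidth_gbs(192, 21))  # 504.0 -> matches the RTX 4070 Super row
```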

That said, if the launches of the 5090 and 5080 are anything to go by, it may not be easy to find and buy the RTX 5070 Ti for anything close to the listed retail price; early retail listings are not promising on this front. You’ll also be relying exclusively on Nvidia’s partners to deliver unadorned, relatively minimalist MSRP versions of the cards since Nvidia isn’t making a Founders Edition version.

As for the $549 RTX 5070, Nvidia’s website says it’s launching on March 5. But it’s less exciting than the other 50-series cards because it has fewer CUDA cores than the outgoing RTX 4070 Super, leaving it even more reliant on AI-generated frames to improve performance compared to the last generation.


AI used to design a multi-step enzyme that can digest some plastics

And it worked. Repeating the same process with an added PLACER screening step boosted the number of enzymes with catalytic activity by over three-fold.

Unfortunately, all of these enzymes stalled after a single reaction. It turns out they were much better at cleaving the ester, but they left one part of it chemically bonded to the enzyme. In other words, the enzymes acted like part of the reaction, not a catalyst. So the researchers started using PLACER to screen for structures that could adopt a key intermediate state of the reaction. This produced a much higher rate of reactive enzymes (18 percent of them cleaved the ester bond), and two—named “super” and “win”—could actually cycle through multiple rounds of reactions. The team had finally made an enzyme.

By adding additional rounds alternating between structure suggestions using RFDiffusion and screening using PLACER, the team saw the frequency of functional enzymes increase and eventually designed one that had an activity similar to some produced by actual living things. They also showed they could use the same process to design an esterase capable of digesting the bonds in PET, a common plastic.
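
To make the shape of that workflow concrete, here is a minimal, self-contained sketch of the alternating generate-and-screen loop described above. The individual steps are random stand-ins, not the real RFdiffusion or PLACER tools; only the loop structure mirrors the approach.

```python
# Toy version of the iterative design loop: propose structures, screen them
# against the key reaction intermediate, keep the best, repeat, then "assay"
# the survivors. All three steps below are placeholders.
import random

def suggest_backbones(seed, n=8):
    # stand-in for an RFdiffusion-style structure generation step
    return [f"{seed}_variant{i}" for i in range(n)]

def screen_intermediate(design):
    # stand-in for a PLACER-style score of how well the design holds
    # the key reaction intermediate
    return random.random()

def assay_activity(design, threshold=0.9):
    # stand-in for a wet-lab activity test on the top designs
    return random.random() > threshold

def design_enzyme(scaffolds, rounds=5, keep=100):
    pool = list(scaffolds)
    for _ in range(rounds):
        candidates = [d for seed in pool for d in suggest_backbones(seed)]
        candidates.sort(key=screen_intermediate, reverse=True)
        pool = candidates[:keep]  # alternate generation with in-silico screening
    return [d for d in pool if assay_activity(d)]

print(design_enzyme(["scaffold_A", "scaffold_B"]))
```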

If that sounds like a lot of work, it clearly was—designing enzymes, especially ones where we know of similar enzymes in living things, will remain a serious challenge. But at least much of it can be done on computers rather than requiring someone to order up the DNA that encodes the enzyme, get bacteria to make it, and screen for activity. And despite the process involving references to known enzymes, the designed ones didn’t share much sequence with those known enzymes. That suggests there should be added flexibility if we want to design one that will react with esters that living things have never come across.

I’m curious about what might happen if we design an enzyme that is essential for survival, put it in bacteria, and then allow it to evolve for a while. I suspect life could find ways of improving on even our best designs.

Science, 2024. DOI: 10.1126/science.adu2454  (About DOIs).


Rocket Report: A blue mood at Blue; Stoke Space fires a shot over the bow


All the news that’s fit to lift

“Rapid turnaround isn’t merely a goal, it’s baked into the design.”

A bottoms-up view of Stoke Space’s Andromeda upper stage. Credit: Stoke Space

Welcome to Edition 7.31 of the Rocket Report! The unfortunate news this week concerns layoffs. Blue Origin announced a 10 percent cut in its workforce as the company aims to get closer to breaking even. More broadly in the space industry, there is unease about what the Trump administration’s cuts to NASA and other federal agencies might mean.

We don’t have all the answers, but it does seem that NASA is likely to be subject to less severe cuts than some other parts of the government. We should find out sometime in March when the Trump White House submits its initial budget request. Congress, of course, will have the final say.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

PLD Space continues to grow. The Spain-based launch company said this week that it now has more than 300 employees as it works toward an orbital launch attempt. “In this race for space supremacy, the amount of capital raised and the talent gathered have become key indicators of a company’s potential for success,” the company said. “While capital acts as the fuel for these ambitious initiatives, the talent behind it is the catalyst that drives them.”

Working to reach orbit … The average age of employees at PLD Space is 34, and the company is hiring 15 people a month as it works to develop the Miura 5 rocket. It’s unclear which of the commercial launch startups in Europe will succeed, but PLD Space has a decent chance to be among them. With luck, the Miura 5 launch vehicle will make its debut sometime in 2026.

Will NASA launch on a Transporter mission? NASA announced this week that it has selected SpaceX to launch a small exoplanet science mission as a rideshare payload as soon as September, Space News reports. The task order to launch the Pandora mission was made through the Venture-class Acquisition of Dedicated and Rideshare launch services contract, intended for small missions with higher acceptance of risk.

Could fly on a Falcon 9 … Pandora is an ESPA Grande-class spacecraft, a category that includes spacecraft weighing up to 320 kilograms, and is designed to operate in a Sun-synchronous orbit. That suggests Pandora could launch on SpaceX’s Transporter series of dedicated rideshare missions that send payloads to such orbits, but neither NASA nor SpaceX disclosed specifics. The NASA announcement also did not disclose the value of the task order to SpaceX.

Stoke Space dishes on Andromeda upper stage. The Washington-based launch company has revealed the name of its Nova rocket’s upper stage, Andromeda, and provided some new details about the design. Andromeda will incorporate hot staging, Stoke says, and will use fewer but larger thrusters—24 instead of 30. The upper stage is now mounted on Stoke’s test stand in Moses Lake, Washington, the company said.

Hot staging, hot talk … The new design is focused on rapid reusability, with easier access to and serviceability of the engine and heat shield components. “These changes further reduce complexity and allow the entire engine to be serviced—or even replaced—within hours or minutes. Rapid turnaround isn’t merely a goal, it’s baked into the design,” the company said. The upper stage will also incorporate “hot staging” to improve the capacity to orbit. You’ve got to appreciate the subtle dig at SpaceX’s Starship program, too: the design allows for hot staging “without the need for a heavy one-time-use interstage shield to protect Stage 1.” Shots fired!

European space commissioner worried about launch. During a keynote address at the Perspectives Spatiales 2025 event in Paris, European Commissioner for Defence Industry and Space Andrius Kubilius outlined the challenging position the continent’s space sector finds itself in, European Spaceflight reports. “Commercial sales are down. Exports are down. Profits are down. And this comes at a time when we need space more than ever. For our security. For our survival.”

Actions, not words, needed … Rhetorical language and bold declarations are inspiring, but when it comes to securing Europe’s place in the global space race, adopted policy and appropriated funding are where aspirations are tested, the European publication stated. Without concrete investments, streamlined regulations, and clear strategic priorities, Europe’s ambition to once again lead the global launch market is likely to amount to little.

Election set to create Starbase, the city. A Texas county on Wednesday approved holding an election sought by SpaceX that would let residents living around the company’s launch and production facilities in South Texas decide whether to formally create a new city called Starbase, ABC News reports. The election was set for May 3, and votes can only be cast by residents living near the launch site, which is currently part of an unincorporated area of Cameron County located along the US-Mexico border. Approval is expected.

A busy beehive … In December, more than 70 area residents signed a petition requesting an election to make Starbase its own municipality. Cameron County Judge Eddie Treviño said the county reviewed the petition and found it met the state’s requirements for the incorporation process to move forward. Kathy Lueders, Starbase’s general manager, has previously said that the incorporation would streamline certain processes to build amenities in the area. More than 3,400 full-time SpaceX employees and contractors work at the site. (submitted by teb)

China taps into commercial space for station missions. China will launch a pair of low-cost space station resupply spacecraft this year on new commercial launch vehicles, Space News reports. The Haolong cargo space shuttle from the Chengdu Aircraft Design Institute will launch on Landspace’s Zhuque-3. The reusable stainless steel, methane-liquid oxygen Zhuque-3 rocket is due to have its first flight in the third quarter of this year. The reusable Haolong vehicle will be 10 meters in length, around 7,000 kilograms in mass, and capable of landing on a runway.

Following a commercial trail laid by NASA … Meanwhile, the Qingzhou cargo spacecraft from the Innovation Academy for Microsatellites of the Chinese Academy of Sciences will launch on the first flight of the CAS Space Kinetica-2 (Lijian-2) rocket no earlier than September. The development is analogous to NASA’s Commercial Resupply Services program, diversifying China’s options for supplying the Tiangong space station. If even one of these missions takes place successfully within the next year, it would represent a major step forward for China’s quasi-commercial space program. (submitted by EllPeaTea)

H3 rocket launches its fifth mission. Japan’s flagship H3 rocket successfully launched the Michibiki 6 navigation satellite early Sunday, enhancing the country’s regional GPS capabilities, Space News reports. The launch was Japan’s first of 2025 and suggests that the relatively new H3 rocket is starting to hit its stride.

Getting up to speed … The expendable launcher’s inaugural launch in March 2023, after numerous delays, suffered a second-stage engine failure, leading controllers to issue a destruct command that destroyed the stage and its ALOS-3 payload. Since then, the rocket has had a successful run of launches, most recently lofting the Kirameki 3 defense communications satellite in November last year. (submitted by EllPeaTea)

Blue Origin lays off 10 percent of its employees. A little less than a month after the successful debut of its New Glenn rocket, Blue Origin’s workforce will be trimmed by 10 percent, Ars reports. The cuts were announced during an all-hands meeting on Thursday morning led by the rocket company’s chief executive, Dave Limp. During the gathering, Limp cited business strategy as the rationale for making the cuts to a workforce of more than 10,000 people.

Growing too fast … “We grew and hired incredibly fast in the last few years, and with that growth came more bureaucracy and less focus than we needed,” Limp wrote in an email to the entire workforce after the meeting. Even before Thursday’s announcement, Blue Origin sought to control costs. According to sources, the company has had a hiring freeze for the last six months. And in January, it let the majority of its contractors go. Layoffs suck—here’s hoping that those let go this week find a soft landing.

Speaking of Blue, the company is targeting spring for its next launch. Blue Origin expects to attempt its second New Glenn launch in late spring after correcting problems that prevented the booster from landing on the first launch last month, Space News reports. Speaking at the 27th Annual Commercial Space Conference on Wednesday, Dave Limp suggested a propulsion issue caused the loss of the New Glenn booster during its landing attempt on the Jan. 16 NG-1 launch.

Understanding the issues … “We had most of the right conditions in the engine but we weren’t able to get everything right to the engine from the tanks,” Limp said. “We think we understand what the issues are.” A second booster is in production. “I don’t think it’s going to delay our path to flight,” Limp said of the investigation. “I think we can still fly late spring.” June seems overly optimistic. One source with knowledge of the second booster’s production said October might be a more reasonable timeframe for the second launch.

Boeing warns of potential SLS workforce cuts. The primary contractor for the Space Launch System rocket, Boeing, is preparing for the possibility that NASA cancels the long-running program, Ars reports. Last Friday, with less than an hour’s notice, David Dutcher, Boeing’s vice president and program manager for the SLS rocket, scheduled an all-hands meeting for the approximately 800 employees working on the program. The apparently scripted meeting lasted just six minutes, and Dutcher didn’t take questions. Afterward, Ars learned that NASA was not informed the meeting would take place.

Waiting on the president’s budget request … During his remarks, Dutcher said Boeing’s contracts for the rocket could end in March and that the company was preparing for layoffs in case the contracts with the space agency were not renewed. The aerospace company, which is the primary contractor for the rocket’s large core stage, issued the notifications as part of the Worker Adjustment and Retraining Notification (or WARN) Act. The timing of Friday’s hastily called meeting aligns with the anticipated release of President Trump’s budget proposal for fiscal year 2026, which should signal the administration’s direction on the SLS rocket.

Space Force is still waiting on Vulcan. Last October, United Launch Alliance started stacking its third Vulcan rocket on a mobile launch platform in Florida in preparation for a mission for the US Space Force by the end of the year. However, that didn’t happen, Ars reports. Now, ULA is still awaiting the Space Force’s formal certification of its new rocket, further pushing out delivery schedules for numerous military satellites booked to fly to orbit on the Vulcan launcher.

Falling short of ambitious goals … In fact, ULA has started to take the rocket apart. This involves removing the rocket’s Centaur upper stage, interstage adapter, and booster stage from its launch mount. Instead, ULA will now focus on launching a batch of Project Kuiper satellites for Amazon on an Atlas V rocket in the next couple of months before pivoting back to Vulcan. ULA hoped to launch as many as 20 missions in 2025, with roughly an even split between its new Vulcan rocket and the Atlas V heading for retirement. Clearly, this now won’t happen.

Next three launches

Feb. 15: Falcon 9 | Starlink 12-8 | Cape Canaveral Space Force Station, Florida | 06:14 UTC

Feb. 17: Falcon 9 | NROL-57 | Vandenberg Space Force Base, California | 13:18 UTC

Feb. 18: Falcon 9 | Starlink 10-12 | Cape Canaveral Space Force Station, Florida | 23:00 UTC

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: A blue mood at Blue; Stoke Space fires a shot over the bow Read More »

the-mask-comes-off:-a-trio-of-tales

The Mask Comes Off: A Trio of Tales

This post covers three recent shenanigans involving OpenAI.

In each of them, OpenAI or Sam Altman attempts to hide the central thing going on.

First, in Three Observations, Sam Altman’s essay pitches our glorious AI future, in some places pretending the downsides and dangers don’t exist, and in others admitting we’re not going to like them while making clear he’s not about to let that stop him. He’s going to transform the world whether we like it or not.

Second, we have Frog and Toad, or There Is No Plan, where OpenAI reveals that its plan for ensuring AIs complement humans rather than substitute for them is to treat this as a ‘design choice.’ They can simply not design AIs that will be substitutes. Except of course this is Obvious Nonsense in context, given all the talk of remote workers, and given that every company and lab will rush to do the substituting because that’s where the money will be. OpenAI couldn’t follow this path even if it wanted to, not without international coordination. Which I’d be all for doing, but then you have to actually call for that.

Third, A Trade Offer Has Arrived. Sam Altman was planning to buy off the OpenAI nonprofit for about $40 billion, even as the for-profit’s valuation surged to $260 billion. Elon Musk has now offered $97 billion for the nonprofit, on a completely insane platform of returning OpenAI to a focus on open models. I don’t actually believe him – do you see Grok’s weights running around the internet? – and obviously his bid is intended as a giant monkey wrench to try to up the price and stop the greatest theft in human history. There was also an emergency 80k Hours podcast on that.

  1. Three Observations.

  2. Frog and Toad (or There Is No Plan).

  3. A Trade Offer Has Arrived.

Altman used to understand that creating things smarter than us was very different from other forms of technology. That it posed an existential risk to humanity. He now pretends not to, in order to promise us physically impossible wondrous futures with no dangers in sight, while warning that if we take any safety precautions then the authoritarians will take over.

His post, ‘Three Observations,’ is a cartoon villain speech, if you are actually paying attention to it.

Even when he says ‘this time is different,’ he’s now saying this time is just better.

Sam Altman: In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we are building together.

In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential.

In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today.

Yes, there’s that sense. And then there’s the third sense, in that at least by default it is already rapidly moving from ‘tool’ to ‘agent’ and to entities in competition with us, entities that are smarter, faster, more capable, and ultimately more competitive at everything other than ‘literally be a human.’

It’s not possible for everyone on Earth to be ‘capable of accomplishing more than the most impactful person today.’ The atoms for it are simply not locally available. I know what he is presumably trying to say, but no.

Altman then lays out three principles.

  1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.

  2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

  3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
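To put the Moore’s law comparison in point two on a common footing, here is the annualized arithmetic, using only the two rates quoted above (my own back-of-envelope numbers, not Altman’s):

```latex
% Annualizing the two quoted rates for comparison
\[
\text{Moore's law: } 2^{12/18} \approx 1.6\times \text{ per year}
\qquad\text{vs.}\qquad
\text{claimed AI cost decline: } 10\times \text{ per year}
\]
```

Compounded over three years, that is roughly a 4x improvement versus a 1,000x one, which is the gap “unbelievably stronger” is pointing at.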

Even if we fully accept point one, that doesn’t tell us as much as you might think.

  1. It doesn’t tell us how many OOMs (orders of magnitude) are available to us, or how we can make them available, or how much they cost.

  2. It doesn’t tell us about other ways we could also scale the intelligence of the system, such as improved algorithmic efficiency. He covers this in point #2, but we should expect this law to break to the upside (go faster) once AIs smarter than us are doing the work.

  3. It doesn’t tell us what the scale of this ‘intelligence’ is, which is a matter of much debate. What does it mean to be ‘twice as smart’ as the average (let’s simplify and say IQ 100) person? It doesn’t mean ‘IQ 200’; that’s not how that scale works. Indeed, much of the debate is people essentially saying that this wouldn’t mean anything, if it were even possible.

  4. It doesn’t tell us what that intelligence actually enables, which is also a matter of heated debate. Many claim, essentially, that ‘a country of geniuses in a data center,’ to use Dario’s term, would only add, say, 0.5% to RGDP growth, and would not threaten our lifestyles much, let alone our survival. The fact that this does not make any sense does not seem to dissuade them. And the ‘final form’ likely goes far beyond ‘genius’ in that data center.

Then there is point two, which, as I noted, we should expect to break to the upside if capabilities continue to increase, and to largely continue for a while in terms of cost even if capabilities mostly stall out.

Point three may or may not be correct, since defining ‘linear intelligence’ is difficult. And there are many purposes for which all you need is ‘enough’ intelligence – as we can observe with many human jobs, where being a genius offers at most a marginal efficiency benefit. But there are other things for which, once you hit the necessary thresholds, there are dramatic super-exponential returns to relevant skills and intelligence by any reasonable measure.

Altman frames the impact of superintelligence as a matter of ‘socioeconomic value,’ ignoring other things this might have an impact upon?

If these three observations continue to hold true, the impacts on society will be significant.

Um, no shit, Sherlock. This is like saying dropping a nuclear bomb would have a significant impact on an area’s thriving nightlife. I suppose Senator Blumenthal was right: by ‘existential’ you did mean the effect on jobs.

Speaking of which, if you want to use the minimal amount of imagination, you can think of virtual coworkers, while leaving everything else the same.

Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.

Then comes the part where he assures us that timelines are only so short.

The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc.

But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today.

Yes, everything will change. But why all this optimism, stated as fact? Why not frame that as an aspiration, a possibility, an ideal we can and must seek out? Instead he blindly talks like Derek on Shrinking and says it will all be fine.

And oh, it gets worse.

Technically speaking, the road in front of us looks fairly clear.

No it bloody does not. Do not come to us and pretend that your technical problems are solved. You are lying. Period. About the most important question ever. Stop it!

But don’t worry, he mentions AI Safety! As in, he warns us not to worry about it, or else the future will be terrible – right after otherwise assuring us that the future will definitely be Amazingly Great.

While we never want to be reckless and there will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy.

That’s right. Altman is saying: We know pushing forward to AGI and beyond as much as possible might appear to be unsafe, and what we’re going to do is going to be super unpopular, and we’re going to transform the world and put the entire species and planet at risk directly against the overwhelming preferences of the people, in America and around the world. But we have to override the people and do it anyway. If we don’t push forward as quickly as possible then China Wins.

Oh, and all without even acknowledging the possibility that there might be a loss of control or other existential risk in the room. At all. Not even to dismiss it, let alone argue against it or that the risk is worthwhile.

Seriously. This is so obscene.

Anyone in 2035 should be able to marshall the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine.

Let’s say, somehow, you could pull that off without already having gotten everyone killed or disempowered along the way. Have you stopped, sir, for five minutes, to ask how that could possibly work even in theory? How the humans could possibly stay in control of such a scenario, how anyone could ever dare make any meaningful decision rather than handing it off to their unlimited geniuses? What happens when people direct their unlimited geniuses to fight with each other in various ways?

This is not a serious vision of the future.

Or more to the point: How many people do you think this ‘anyone’ consists of in 2035?

As we will see later, there is no plan. No vision. Except to build it, and have faith.

Now that Altman has made his intentions clear: What are you going to do about it?

Don’t make me tap the sign, hope is not a strategy, solve for the equilibrium, etc.

Garry Tan: We are very lucky that for now frontier AI models are very smart toasters instead of Skynet (personally I hope it stays that way)

This means *agency* is now the most important trait to teach our kids and will be a mega multiplier on any given person’s life outcome.

Agency is important. By all means teach everyone agency.

Also don’t pretend that the frontier AI models will effectively be ‘very smart toasters.’

The first thing many people do, the moment they know how, is make one an agent.

Similarly, what type of agent will you build?

Oh, OpenAI said at the summit, we’ll simply only build the kind that complements humans, not the kind that substitutes for humans. It’ll be fine.

Wait, what? How? Huh?

This was the discussion about it on Twitter.

The OpenAI plan here makes no sense. Or rather, it is not a plan, and no one believes you when you call it a plan, or claim it is your intention to do this.

Connor Axiotes: I was invited to the @OpenAI AI Economics event and they said their AIs will just be used as tools so we won’t see any real unemployment, as they will be complements not substitutes.

When I said that they’d be competing with human labour if Sama gets his AGI – I was told it was just a “design choice” and not to worry. From 2 professional economists!

Also in the *whole* event there was no mention of Sama’s UBI experiment or any mention of what post AGI wage distribution might look like. Even when I asked.

Sandro Gianella (OpenAI): hey! glad you could make to our event

– the point was not that it was “just a design choice” but that we have agency on how we build and deploy these systems so they are complementing

– we’re happy to chat about UBI or wage distribution but you can’t fit everything into 1.5h

Connor Axiotes: I appreciate you getting me in! It was very informative and you were very hospitable.

And I wish I didn’t have to say anything but many in that room will have left, gone back to their respective agencies and governments, and said “OpenAI does not think there will be job losses from AGI” and i just think it shouldn’t have been made out to be that black and white.

Regarding your second point, it also seems Sama has just spoken less about UBI for a while. What is OpenAI’s plans to spread the rent? UBI? World coin? If there is no unemployment why would we need that?

Zvi Mowshowitz (replying to Sandro, got no response so far): Serious question on the first point. We do have such agency in theory, but how collectively do we get to effectively preserve this agency in practice?

The way any given agent works is a design choice, but those choices are dictated by the market/competition/utility if allowed.

All the same concerns about the ‘race to AGI’ apply to a ‘race to agency’ except now with the tools generally available, you have a very large number of participants. So what to do?

Steven Adler (ex-OpenAI): Politely, I don’t think it is at all possible for OpenAI to ‘have AGI+ only complement humans rather than replace them’; I can’t imagine any way this could be done. Nor do I believe that OpenAI’s incentives would permit this even if possible.

David Manheim: Seems very possible to do, with a pretty minimal performance penalty as long as you only compare to humans, instead of comparing to inarguably superior unassisted and unmonitorable agentic AI systems.

Steven Adler: In a market economy, I think those non-replacing firms just eventually get vastly outcompeted by those who do replacement. Also, in either case I still don’t see how OAI could enforce that its customers may only complement not replace

David Manheim: Yes, it’s trivially incorrect. It’s idiotic. It’s completely unworkable because it makes AI into a hindrance rather than an aide.

But it’s *also* the only approach I can imagine which would mean you could actually do the thing that was claimed to be the goal.

OpenAI can enforce it the same way they plan to solve superalignment; assert an incoherent or impossible goal and then insist that they can defer solving the resulting problem until they have superintelligence do it for them.

Yes, this is idiocy, but it’s also their plan!

sma: > we have agency on how we build and deploy these systems so they are complementing

Given the current race dynamics this seems… very false.

I don’t think it is their plan. I don’t even think it is a plan at all. The plan is to tell people that this is the plan. That’s the whole plan.

Is it a design choice for any individual which way to build their AGI agent? Yes, provided they remain in control of their AGI. But how much choice will they have, competing against many others? If you not only keep the human ‘in the loop’ but only ‘complement’ them, you are going to get absolutely destroyed by anyone who takes the other path, whether the ‘you’ is a person, a company or a nation.

Once again, I ask, is Sam Altman proposing that he take over the world to prevent anyone else from creating AI agents that substitute for humans? If not, how does he intend to prevent others from building such agents?

The things I do strongly agree with:

  1. We collectively have agency over how we create and deploy AI.

  2. Some ways of doing that work out better for humans than others.

  3. We should coordinate to do the ones that work out better, and to not do the ones that work out worse.

The problem is, you have to then figure out how to do that, in practice, and solve for the equilibrium, not only for you or your company but for everyone. Otherwise, It’s Not Me, It’s the Incentives. And in this case, it’s not a subtle effect, and you won’t last five minutes.

You can also say ‘oh, any effective form of coordination would mean tyranny and that is actually the worst risk from AI’ and then watch as everyone closes their eyes and runs straight into the (technically metaphorical, but kind of also not so metaphorical) whirling blades of death. I suppose that’s another option. It seems popular.

Remember when I said that OpenAI’s intention to buy their nonprofit arm off for ~$40 billion was drastically undervaluing OpenAI’s nonprofit and potentially the largest theft in human history?

Confirmed.

Jessica Toonkel and Berber Jin: “It’s time for OpenAI to return to the open-source, safety-focused force for good it once was,” Musk said in a statement provided by Toberoff. “We will make sure that happens.”

One piece of good news is that this intention – to take OpenAI actually open source – will not happen. This would be complete insanity as an actual intention. There is no such thing as OpenAI as an ‘open-source, safety-focused force for good’ unless they intend to actively dismantle all of their frontier models.

Indeed I would outright say: OpenAI releasing the weights of its models would present a clear and present danger to the national security of the United States.

(Also it would dramatically raise the risk of Earth not containing humans for long, but alas I’m trying to make a point about what actually motivates people these days.)

Not that any of that has a substantial chance of actually happening. This is not a bid that anyone involved is ever going to accept, or believes might be accepted.

Getting it accepted was never the point. This offer is designed to be rejected.

The point is that if OpenAI still wants to transition to a for-profit, it now has to pay the nonprofit far closer to what it is actually worth, a form of a Harberger tax.

It also illustrates the key problem with a Harberger tax. If someone else really does not like you, and would greatly enjoy ruining your day, or simply wants to extort money, then they can threaten to buy something you’re depending on simply to blow your whole operation up.
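For anyone unfamiliar with the mechanism, here is a toy sketch of how a Harberger-style self-assessment works. The tax rate and dollar figures are illustrative inventions of mine, not terms of the actual OpenAI situation, and the analogy is loose: there is no real tax here, only the forced-sale logic that Musk’s bid exploits.

```python
def harberger(self_assessed_value: float, tax_rate: float, outside_bid: float):
    """Toy Harberger-style self-assessment: the owner pays tax on the value
    they declare, but must sell to any bidder willing to pay that value."""
    annual_tax = self_assessed_value * tax_rate
    forced_sale = outside_bid >= self_assessed_value
    return annual_tax, forced_sale

# Declare the asset is worth $40 billion while someone bids $97.4 billion:
# either you sell at your own low number, or you concede it is worth more.
print(harberger(40e9, 0.07, 97.4e9))  # -> (2800000000.0, True)
```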

Altman, of course, is happy to say the pro-OpenAI half of the quiet part out loud.

Sam Altman: I think he is probably just trying to slow us down. He obviously is a competitor. I wish he would just compete by building a better product, but I think there’s been a lot of tactics, many, many lawsuits, all sorts of other crazy stuff, now this.

Charles Capel and Tom MacKenzie: In the interview on Tuesday, Altman chided Musk, saying: “Probably his whole life is from a position of insecurity — I feel for the guy.” Altman added that he doesn’t think Musk is “a happy person.”

Garrison Lovely explains all this here, that it’s all about driving up the price that OpenAI is going to have to pay.

Nathan Young also has a thread where he angrily explains Altman’s plan to steal OpenAI, in the context of Musk’s attempt to disrupt this.

Sam Altman: no thank you but we will buy twitter for $9.74 billion if you want.

Elon Musk (reply to Altman): Swindler.

Kelsey Piper: Elon’s offer to purchase the OpenAI nonprofit for $97.4 billion isn’t going to happen, but it may seriously complicate OpenAI’s efforts to claim the nonprofit is fairly valued at $40 billion. If you won’t sell it for $97.4 billion, that means you think it’s worth more than that.

I wrote back in October that OpenAI was floating valuations of its nonprofit that seemed way, way too low.

Jungwon has some experience with such transfers, and offers thoughts, saying this absolutely presents a serious problem for Altman’s attempt to value the nonprofit at a fraction of its true worth. Anticipated arguments include ‘OpenAI is nothing without its people’ and that everyone would quit if Elon bought the company, which is likely true. And that Elon’s plan would violate the charter and be terrible for humanity, which is definitely true.

And that Altman could dissolve OpenAI and start again if he needed to, as he essentially threatened to do last time. In this case, it’s a credible threat. Indeed, one (unlikely but possible) danger of the $97 billion bid is that Altman accepts it, takes the $97 billion, and then destroys the company on the way out the door and starts again. Whoops. I don’t think this is enough to make that worth considering, but there’s a zone where things get interesting, at least in theory.

80k Hours had an emergency podcast on this (also listed under The Week in Audio). Another note is that technically, any board member can now sue if they think the nonprofit is not getting fair value in compensation.

Finally, there’s this.

Bret Taylor (Chairman of the Board): “OpenAI is not for sale” because they have a “mission of ensuring AGI benefits humanity and I have a hard time seeing how this would.”

That is all.

Discussion about this post

The Mask Comes Off: A Trio of Tales Read More »