Author name: Beth Washington

It’s getting harder to skirt RTO policies without employers noticing

For example, while high-profile banks like JPMorgan Chase and HSBC have started enforcing in-office policies, London-headquartered bank Standard Chartered is letting managers and individual employees decide how often workers are expected in the office. In July, Standard Chartered CEO Bill Winters told Bloomberg Television:

We work with adults. The adults can have an adult conversation with other adults and decide how they’re going to best manage their team.

The differing management methods come as numerous corporations have pointed to in-office work as driving collaboration, ideation, and, in some cases, revenue, while many studies point to RTO policies hurting employee morale and risking employee retention.

“There are some markets where there’s effectively peer pressure to come in more often, and there’s other markets where there’s less of that,” Winters said. “People come into the office because they want to come into the office.”

Office space

After the COVID-19 pandemic forced many businesses to figure out how to function with remote workers, there was speculation that the commercial real estate business would seriously suffer long-term. CNBC reported that the US office vacancy rate (18.9 percent) is currently near the highest we’ve seen in 30 years (19 percent).

However, CBRE, which has big stakes here, found that out of the companies it surveyed, more are planning to expand office space than reduce it. Per the report, 67 percent of companies said they will expand or maintain the size of their office space over the next three years, compared to 64 percent last year. Thirty-three percent of respondents overall said they will reduce office space; however, among companies with at least 10,000 employees, 60 percent are planning to downsize. Among the companies planning to downsize, 79 percent said they are doing so because more hybrid work means that they need less space.

“Employers are much more focused now than they were pre-pandemic on quality of workplace experience, the efficiency of seat sharing, and the vibrancy of the districts in which they’re located,” Julie Whelan, CBRE’s global head of occupier research, told CNBC.

Although tariffs and broader economic uncertainty are turning some corporations away from long-term real estate decisions, Whelan said many firms are ready to make decisions about office space, “even if there’s a little bit of economic uncertainty right now.”

Rocket Report: Firefly lights the markets up; SpaceX starts selling trips to Mars


All the news that’s fit to lift

“Get on board! We are going to Mars!”

The Vulcan rocket for ULA’s first national security mission nears its initial launch, NET August 12. Credit: United Launch Alliance

Welcome to Edition 8.06 of the Rocket Report! After years of disappointing results from SPACs and space companies, it is a good sign to see Firefly’s more traditional initial public offering doing so well. The company has had a long and challenging road over more than a decade; the prospect of its success should be heartening to the commercial space industry.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Virgin Galactic delays resumption of spaceflights. The Richard Branson-founded company now plans to resume private space tourism trips in the autumn of 2026, following its Delta spacecraft’s first commercial flight, a research mission that has slipped from summer 2026 to the fall, Bloomberg reports. Virgin Galactic announced the updated timeline on Wednesday, when it reported quarterly financial results that fell short of analysts’ expectations. Revenue was about $410,000 for the second quarter.

Waiting on Delta … The company paused commercial operations in June 2024 to focus on developing the upgraded Delta vehicle, which is being optimized for reusability and faster turnaround time between flights. Virgin Galactic had been selling seats on the Delta spacecraft for about $600,000 and said that it plans to raise prices when ticket sales resume in the first quarter of 2026. The company also recently adjusted the size of its in-house engineering team and reduced the overall headcount by 7 percent to control costs.

Firefly is a big hit with investors. Shares in the Cedar Park, Texas-based space company began trading at $70 on the Nasdaq stock exchange midday Thursday under the symbol FLY, jumping from their initial public offering price of $45, The Wall Street Journal reports. The company sold more than 19 million shares in the listing, raising $868 million. Bankers and traders are closely tracking the stock’s performance as a sign of both the strength of the US IPO market and investor interest in space companies. The offering will allow the company to accelerate production and its launch cadence, Firefly CEO Jason Kim said in an interview.

Time to build and fly … “We have to execute,” said Kim, who led a Boeing satellite business before taking the top role at Firefly last year. “We’ve got a really strong backlog.” Firefly’s listing comes five months after it successfully guided its Blue Ghost lander to the lunar surface, carrying scientific gear to research moondust and ground temperatures. The NASA-funded mission marked the first fully successful private moon landing, following misfires on three other flights handled by competitors. The company’s next challenge is to prove that its other vehicles can work as well, including the Alpha rocket.

iRocket says it has signed a huge deal. A largely unknown small launch startup, iRocket, says it has signed a multi-year agreement with SpaceBelt KSA valued at up to $640 million. iRocket will support up to 30 satellite launches, providing mission planning, propulsion systems, and integration services to help establish a secure, autonomous space communications network across Saudi Arabia and the Gulf region.

Yes, but … iRocket says the agreement represents a significant commercial milestone. However, since its founding in 2018, New York-based iRocket hasn’t released much information on any technical progress toward a first flight of the Shockwave launch vehicle. It is difficult to know how much (if any) money changed hands with this agreement.

Indian space startup builds 3D-printed engine. The Chennai-based startup Agnikul Cosmos has announced the successful development of the world’s largest single-piece 3D-printed Inconel rocket engine, Business Today reports. The engine, printed in one go without any welds, joints, or fasteners, represents a leap in additive manufacturing for aerospace, the company said.

Earned a patent … Agnikul also said it has been granted a US patent for the design and manufacturing process of single-piece rocket engines. “Means something to have a completely Indian-origin design patented in the US—a nation that has built some of the most complex engines in this industry,” the company said. Agnikul is developing a small-lift booster that can put about 100 kg to low-Earth orbit.

Skyrora wins first UK launch license. Skyrora became the first British commercial rocket manufacturer to secure a launch license from the UK Civil Aviation Authority, paving the way for its Skylark L suborbital rocket to lift off from the SaxaVord spaceport in the Shetland Islands, Payload reports. Derek Harris, Skyrora’s business development lead, said this test flight could take place as early as May 2026.

Waiting on launch pads … Skyrora said it could launch sooner if it opted to fly from an international launch pad. That’s the route it took in 2022, when it launched a rocket from Iceland’s mobile Langanes launch site. “Unfortunately, we are still technically locked out of SaxaVord,” Harris said. “What is still open to us is Oman, and Australia, or even going back to Iceland…[but] it would be a sad indictment of what’s going on with the government funding if we have to go elsewhere to launch it.”

The Philippines condemns China’s rocket launch. A top Philippine security official on Tuesday condemned China’s latest rocket launch, which caused suspected debris to fall near a western Philippine province, the AP reports. Authorities said the incident sparked alarm and posed a danger to people, ships, and aircraft. There were no immediate reports of injuries or damage from the suspected Chinese rocket debris that fell near Palawan province Monday night, following a launch of the medium-lift Long March 12.

No NOTAMs it seems … China’s official Xinhua News Agency reported that the Long March-12 rocket that lifted off Monday night from a commercial spacecraft launch site on the southern island province of Hainan successfully carried a group of Internet satellites into pre-set orbit. It was not immediately clear whether Chinese authorities had notified nearby countries, such as the Philippines, of possible debris from its latest rocket launch. Philippine aircraft and vessels were deployed on Tuesday to search for the rocket debris.

Crew-11 mission launches from Florida. The next four-person team to live and work aboard the International Space Station departed from NASA’s Kennedy Space Center last Friday, taking aim at the massive orbiting research complex for a planned stay of six to eight months, Ars reports. Spacecraft commander Zena Cardman leads the mission, designated Crew-11, with three others aboard SpaceX’s Crew Dragon Endeavour capsule: veteran NASA astronaut Mike Fincke, Kimiya Yui of Japan, and Oleg Platonov of Russia.

Au revoir to an old friend … The Falcon 9’s reusable first stage booster detached and returned to a propulsive touchdown at Landing Zone 1 (LZ-1) at Cape Canaveral Space Force Station, a few miles south of the launch site. This was the 53rd and final rocket landing at LZ-1 since SpaceX aced the first intact recovery of a Falcon 9 booster there on December 21, 2015. SpaceX will move onshore rocket landings to new landing zones to be constructed next to the two Falcon 9 launch pads at the Florida spaceport. Landing Zone 2, located adjacent to Landing Zone 1, will also be decommissioned and handed back over to the Space Force once SpaceX activates the new landing sites.

NASA says it will move a space shuttle. Acting NASA Administrator Sean Duffy has decided to move one of the agency’s retired space shuttles to Houston, but which shuttle remains unclear, Ars reports. Senator John Cornyn (R-Texas), who earlier this year introduced and championed an effort to relocate the space shuttle Discovery from the Smithsonian to Space Center Houston, issued a statement on Tuesday evening applauding the decision. The senator did not state which of NASA’s winged orbiters would be making the move.

Playing coy for no clear reason … The legislation that required Duffy to choose a “space vehicle” that had “flown in space” and “carried people” did not specify an orbiter by name, but the language in the “One Big Beautiful Bill” that President Donald Trump signed into law last month was inspired by Cornyn and fellow Texas Senator Ted Cruz’s bill to relocate Discovery. It is unclear why the choice of orbiters is being kept a secret. According to the bill, the decision was to be made “with the concurrence of an entity designated” by the NASA administrator to display the shuttle. Cornyn’s release only confirmed that Duffy had identified the location to be “a non-profit near the Johnson Space Center.”

SpaceX begins offering Starship services to Mars. On Thursday, Gwynne Shotwell, the president and chief operating officer of SpaceX, announced that the company has begun selling rides to Mars. “Get on board! We are going to Mars! SpaceX is now offering Starship services to the red planet,” Shotwell said on X. As part of the announcement, Shotwell said SpaceX has signed a “first of its kind” agreement with the Italian Space Agency.

Racing the Giro d’Mars … The president of the Italian Space Agency, Teodoro Valente, confirmed the news, saying the first Starship flights to Mars (which will, of course, be uncrewed) will carry Italian experiments. “The payloads will gather scientific data during the missions. Italy continues to lead in space exploration!” Valente wrote on X. Left unsaid, of course, is when such flights will take place. It is difficult to see Starship now being ready for a late 2026 window, but early 2029 seems plausible.

ULA will eventually test reuse technology. On Thursday, ahead of the first Vulcan launch of a national security payload next week, United Launch Alliance chief executive Tory Bruno spoke with reporters about various topics, NASA Spaceflight reports. A highlight was ULA’s progress on SMART Reuse, a system aimed at recovering and reusing booster components to reduce costs. Bruno announced that the critical design review for key components is complete, paving the way for building flight-like hardware for certification.

Testing remains a ways away … As development progresses, ULA plans to relocate more components to the aft section of the booster for recovery. “By the time that path is finished, pretty much the only thing being discarded from the booster will be the fuel tanks,” he said. Experimental flights incorporating SMART Reuse could begin as early as 2026, or at least by 2027, but only when aligned with customer needs. One wonders when actual engine recovery and reuse might begin.

Next three launches

August 8: Falcon 9 | Project Kuiper KF-02 | Cape Canaveral Space Force Station, Florida | 13:40 UTC

August 8: Jielong 3 | Undeclared payload | Offshore site, Chinese coastal waters | 16:30 UTC

August 10: Falcon 9 | Starlink 17-4 | Vandenberg Space Force Base, Calif. | 03:43 UTC

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Sonos says it’s forced to raise prices while trying to win back customers

During the company’s latest earnings call, Sonos CFO Saori Casey said that the company expects “tariff expenses will be approximately $5 million in Q4.” In Sonos’ fiscal Q3, it paid $3.5 million in tariffs, Casey said.

Sonos is still recovering from app problems

Since July 2024, when Sonos’ then-CEO Patrick Spence admitted that a software update inadvertently broke many Sonos devices, the company has been trying to prove to customers and investors that its pricey audio devices are still worth buying.

During the earnings call, Sonos CEO Tom Conrad said he believes the value of Sonos gadgets “compounds over time, thanks to the kinds of software updates that deliver new experiences.” But a widely reviled app update last year damaged Sonos’ reputation in this area. The update stripped the app of some basic features, such as the ability to edit playlists and song queues, and many Sonos devices, especially older ones, stopped functioning properly.

Meanwhile, Sonos hasn’t released a new product since the Arc Ultra soundbar and Sub 4 subwoofer in October 2024. In March, reports surfaced that Sonos axed its streaming video player. Conrad told investors yesterday that Sonos has a release roadmap going beyond its 2026 fiscal year. Any devices in that roadmap, however, will be challenged to sell customers on their software, long-term reliability, and price.

Customers may cut Sonos some slack, considering the widespread impact that tariffs are expected to have on electronics pricing. In May, the Trump administration axed the de minimis exemption that enabled duty-free imports of goods worth $800 or less, impacting electronics such as PC peripherals and DIY parts. Currently, the US and China have paused tariffs as the countries look to reach an agreement by August 12. If no deal is reached by then, goods imported from China could again face tariffs as high as 145 percent, which would significantly impact the prices of most electronics sold in the US.

But Sonos is already struggling to release and sell new products at high prices, so raising them even higher could further harm the company.

“We lost the momentum in 2024. We’re starting to get it back, and we’re going to accelerate our pace from here,” Conrad said.

President Trump says Intel’s new CEO “must resign immediately”

Intel and the White House did not immediately respond to a request for comment on Trump’s post. Intel shares dropped 3 percent in pre-market trading in New York.

Tan was appointed as Intel CEO in March after the Silicon Valley company’s board ousted his predecessor, Pat Gelsinger, in December.

Intel is the only US-headquartered company capable of producing advanced semiconductors, though it has so far largely missed out on the current boom for artificial intelligence chips. It has been awarded billions of dollars in US government subsidies and loans to support its chip manufacturing business, which has fallen far behind its rival Taiwan Semiconductor Manufacturing Company.

However, amid a radical cost-cutting program, Tan warned last month that Intel might be forced to abandon development of its next-generation manufacturing technology if it were unable to secure a “significant external customer.” Such a move would hand a virtual monopoly of leading-edge chipmaking to TSMC.

“Intel is required to be a responsible steward of American taxpayer dollars and to comply with applicable security regulations,” Senator Tom Cotton (R-Ark.) wrote in Tuesday’s letter to Intel’s board chair, Frank Yeary. “Mr Tan’s associations raise questions about Intel’s ability to fulfill these obligations.”

Additional reporting by Demetri Sevastopulo.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Trump’s trade and environment policies are a disaster for carmakers

General Motors blamed Trump’s tariffs for costing it $1.1 billion in Q2 and as much as $5 billion by the end of the year. And while the new anti-EV adoption policies are yet to fully bite, it’s clear they’ve motivated some action inside the GM boardroom. Although GM CEO Mary Barra wrote to investors that the company believes “the long-term future is profitable electric vehicle production,” she followed by explaining that GM’s flexible factories will help it succeed in a world where EPA fuel economy targets are no longer a thing. That’s probably why GM added 300,000 more units of capacity for “high margin light-duty pickups, full-size SUVs and crossovers.”

Ford said that the tariffs could cost it as much as $2 billion this year, despite the company making more vehicles in the US than any other automaker. That’s because it has to pay the US government tariffs to import raw materials like steel and aluminum, as well as components and subassemblies.

Foreign automakers are also feeling the effects, given the importance—until now, at least—of the US car buyer. Stellantis, which owns the Jeep and Ram brands, said it had already lost $2.7 billion this year due to tariffs, although the automaker stands to benefit in the coming years from the gutting of fleet fuel efficiency fines.

Aston Martin may benefit from a lower 10 percent tariff for UK-made cars, but it described the process as “extremely disruptive,” and although it has now restarted shipping cars to America, it issued a profit warning last week.

BMW is among the less badly hurt; although its operating margin fell to 5.4 percent, this was within its expectations. Mercedes had to warn investors to expect less this year, and it says the US will become a less-important market for the company, which plans to make up for it with growth in China. Volkswagen Group said the tariffs have cost it $1.5 billion so far this year, and it has also revised down its forecasts for the rest of the year.

Although Porsche announced record deliveries in North America just a week ago, its operating profit was a third of what it was a year ago. “In the US, import tariffs are also putting huge pressure on our business. Looking ahead, the movement of the dollar could also have an impact. In addition, the transformation to electric mobility is progressing more slowly than expected overall, with consequences for the supplier network,” said Porsche and VW Group CEO Oliver Blume.

Some AI tools don’t understand biology yet


A collection of new studies shows that AI tools aren’t very good at predicting gene activity.

Gene activity appears to remain beyond the abilities of AI at the moment. Credit: BSIP

Biology is an area of science where AI and machine-learning approaches have seen some spectacular successes, such as designing enzymes to digest plastics and proteins to block snake venom. But in an era of seemingly endless AI hype, it might be easy to think that we could just set AI loose on the mounds of data we’ve already generated and end up with a good understanding of most areas of biology, allowing us to skip a lot of messy experiments and the unpleasantness of research on animals.

But biology involves a whole lot more than just protein structures. And it’s extremely premature to suggest that AI can be equally effective at handling all aspects of biology. So we were intrigued to see a study comparing a set of AI software packages designed to predict how active genes will be in cells exposed to different conditions. As it turns out, the AI systems couldn’t manage to do any better than a deliberately simplified method of predicting.

The results serve as a useful caution that biology is incredibly complex, and developing AI systems that work for one aspect of it is not an indication that they can work for biology generally.

AI and gene activity

The study was conducted by a trio of researchers based in Heidelberg: Constantin Ahlmann-Eltze, Wolfgang Huber, and Simon Anders. They note that a handful of additional studies have been released while their work was on a pre-print server, all of them coming to roughly the same conclusions. But these authors’ approach is pretty easy to understand, so we’ll use it as an example.

The AI software they examined attempts to predict changes in gene activity. While every cell carries copies of the roughly 20,000 genes in the human genome, not all of them are active in a given cell—”active” in this case meaning they are producing messenger RNAs. Some provide an essential function and are active at high levels at all times. Others are only active in specific cell types, like nerves or skin. Still others are activated under specific conditions, like low oxygen or high temperatures.

Over the years, we’ve done many studies examining the activity of every gene in a given cell type under different conditions. These studies can range from using gene chips to determine which messenger RNAs are present in a population of cells to sequencing the RNAs isolated from single cells and using that data to identify which genes are active. But collectively, they can provide a broad, if incomplete, picture that links the activity of genes with different biological circumstances. It’s a picture you could potentially use to train an AI that would make predictions about gene activity under conditions that haven’t been tested.
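
To make that concrete, here is a minimal, hypothetical sketch of what this kind of single-cell gene activity data looks like in code: a cells-by-genes count matrix, where each entry is the number of messenger RNAs detected for a gene in a given cell. The gene names and counts below are invented purely for illustration.

```python
import numpy as np

# Hypothetical single-cell expression data: rows are individual cells,
# columns are genes, entries are messenger RNA counts detected per cell.
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]   # made-up gene names
counts = np.array([
    [120,  0,  3, 45],   # cell 1
    [ 98,  1,  0, 52],   # cell 2
    [  0, 77,  5,  2],   # cell 3 (a different cell type, say)
])

# A gene counts as "active" in a cell if its messenger RNA was detected.
active = counts > 0
print(dict(zip(genes, active.sum(axis=0))))  # number of cells where each gene is active
```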

Ahlmann-Eltze, Huber, and Anders tested a set of what are called single-cell foundation models that have been trained on this sort of gene activity data. The “single cell” portion indicates that these models have been trained on gene activity obtained from individual cells rather than a population average of a cell type. Foundation models mean that they have been trained on a broad range of data but will require additional training before they’re deployed for a specific task.

Underwhelming performance

The task in this case is predicting how gene activity might change when genes are altered. When an individual gene is lost or activated, it’s possible that the only messenger RNA that is altered is the one made by that gene. But some genes encode proteins that regulate a collection of other genes, in which case you might see changes in the activity of dozens of genes. In other cases, the loss or activation of a gene could affect a cell’s metabolism, resulting in widespread alterations of gene activity.

Things get even more complicated when two genes are involved. In many cases, the genes will do unrelated things, and you get a simple additive effect: the changes caused by the loss of one, plus the changes caused by the loss of the other. But if there’s some overlap between the functions, you can get an enhancement of some changes, suppression of others, and other unexpected changes.

To start exploring these effects, researchers have intentionally altered the activity of one or more genes using the CRISPR DNA editing technology, then sequenced every RNA in the cell afterward to see what sorts of changes took place. This approach (termed Perturb-seq) is useful because it can give us a sense of what the altered gene does in a cell. But for Ahlmann-Eltze, Huber, and Anders, it provides the data they need to determine if these foundation models can be trained to predict the ensuing changes in the activity of other genes.

Starting with the foundation models, the researchers conducted additional training using data from an experiment where either one or two genes were activated using CRISPR. This training used the data from 100 individual gene activations and another 62 where two genes were activated. Then, the AI packages were asked to predict the results for another 62 pairs of genes that were activated. For comparison, the researchers also made predictions using two extremely simple models: one that always predicted that nothing would change and a second that always predicted an additive effect (meaning that activating genes A and B would produce the changes caused by activating A plus the changes caused by activating B).
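
To make the two simple comparison models concrete, here is a hypothetical sketch (with invented numbers, not the paper’s data) of the “nothing changes” and additive baselines for a two-gene activation, along with the kind of prediction-error comparison the researchers describe.

```python
import numpy as np

# Hypothetical expression changes (log fold-changes) for a handful of genes,
# measured after activating gene A alone and gene B alone with CRISPR.
effect_a = np.array([2.0, 0.0, -1.0, 0.5])   # changes caused by activating A
effect_b = np.array([0.0, 1.5,  0.0, 0.5])   # changes caused by activating B

# What was actually measured when A and B were activated together
# (invented; the last gene shows a synergistic, non-additive change).
observed_ab = np.array([2.1, 1.4, -0.9, 2.0])

baseline_no_change = np.zeros_like(observed_ab)   # "nothing changes"
baseline_additive = effect_a + effect_b           # "effects simply add up"

def prediction_error(pred, obs):
    """Root-mean-square error between predicted and observed changes."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

print("no-change baseline error:", prediction_error(baseline_no_change, observed_ab))
print("additive baseline error: ", prediction_error(baseline_additive, observed_ab))
# The paper's finding: on real Perturb-seq data, the foundation models' errors
# were substantially higher than the additive baseline's.
```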

They didn’t work. “All models had a prediction error substantially higher than the additive baseline,” the researchers concluded. The result held when the researchers used alternative measurements of the accuracy of the AI’s predictions.

The gist of the problem seemed to be that the trained foundation models weren’t very good at predicting when the alterations of pairs of genes would produce complex patterns of changes—when the alteration of one gene synergized with the alteration of a second. “The deep learning models rarely predicted synergistic interactions, and it was even rarer that those predictions were correct,” the researchers concluded. In a separate test that looked specifically at these synergies between genes, it turned out that none of the models were better than the simplified system that always predicted no changes.

Not there yet

The overall conclusions from the work are pretty clear. “As our deliberately simple baselines are incapable of representing realistic biological complexity yet were not outperformed by the foundation models,” the researchers write, “we conclude that the latter’s goal of providing a generalizable representation of cellular states and predicting the outcome of not-yet-performed experiments is still elusive.”

It’s important to emphasize that “still elusive” doesn’t mean we’re incapable of ever developing an AI that can help with this problem. It also doesn’t mean that this applies to all cellular states (the results are specific to gene activity), much less all of biology. At the same time, the work provides a valuable caution at a time when there’s a lot of enthusiasm for the idea that AI’s success in a couple of areas means we’re on the cusp of a world where it can be applied to anything.

Nature Methods, 2025. DOI: 10.1038/s41592-025-02772-6  (About DOIs).

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

RIP to the Macintosh HD hard drive icon, 2000–2025

That version of the icon persisted through the Apple Silicon-era Big Sur redesign and was still with us in the first public beta build for macOS 26 Tahoe that Apple released last week. The new beta also updates the icons for external drives (orange, with a USB-C connector on top), network shares (blue, with a globe on top), and removable disk images (white, with an arrow on top).

All of the system’s disk icons get an update in the latest macOS 26 Tahoe developer beta. Credit: Apple/Andrew Cunningham

Other icons that reused or riffed on the old hard drive icon have also been changed. Disk Utility now looks like a wrench tightening an Apple-branded white bolt, for some reason, and drive icons within Disk Utility also have the new SSD-esque icon. Installer apps use the new icon instead of the old one. Navigate to the /System/Library/CoreServices folder where many of the built-in operating system icons live, and you can see a bunch of others that exchange the old HDD icon for the new SSD.

Apple first offered a Mac with an SSD in 2008, when the original MacBook Air came out. By the time “Retina” Macs began arriving in the early 2010s, SSDs had become the primary boot disk for most of them; laptops tended to be all-SSD, while desktops could be configured with an SSD or a hybrid Fusion Drive that used an SSD as boot media and an HDD for mass storage. Apple stopped shipping spinning hard drives entirely when the last of the Intel iMacs went away.

This doesn’t actually matter much. The old icon didn’t look much like the SSD in your Mac, and the new one doesn’t really look like the SSD in your Mac either. But we didn’t want to let the old icon’s passing go unremarked. So, thanks for the memories, Macintosh HD hard drive icon! Keep on spinning, wherever you are.

OpenAI releases its first open source models since 2019

OpenAI is releasing new generative AI models today, and no, GPT-5 is not one of them. Depending on how you feel about generative AI, these new models may be even more interesting, though. The company is rolling out gpt-oss-120b and gpt-oss-20b, its first open weight models since the release of GPT-2 in 2019. You can download and run these models on your own hardware, with support for simulated reasoning, tool use, and deep customization.

When you access the company’s proprietary models in the cloud, they’re running on powerful server infrastructure that cannot be replicated easily, even in an enterprise setting. The new OpenAI models come in two variants (120b and 20b) that can run on less powerful hardware configurations. Both are transformers with configurable chain of thought (CoT), supporting low, medium, and high settings. The lower settings are faster and use less compute, but the outputs are better at the highest setting. You can set the CoT level with a single line in the system prompt.
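
As a rough illustration, here is a hedged sketch of how that system-prompt setting might look when running one of these models behind a local OpenAI-compatible server (for example, via Ollama or vLLM). The base URL, model name, and the exact “Reasoning: high” wording are assumptions based on OpenAI’s gpt-oss documentation, so check the docs for your runtime.

```python
from openai import OpenAI

# Assumes gpt-oss-20b is already being served locally behind an
# OpenAI-compatible endpoint (e.g., by Ollama or vLLM); adjust the
# base_url and model name to match your setup.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

response = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[
        # A single line in the system prompt selects the chain-of-thought
        # effort: "low", "medium", or "high".
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "Explain mixture-of-experts in two sentences."},
    ],
)
print(response.choices[0].message.content)
```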

The smaller gpt-oss-20b has a total of 21 billion parameters, utilizing mixture-of-experts (MoE) to reduce that to 3.6 billion parameters per token. As for gpt-oss-120b, its 117 billion parameters come down to 5.1 billion per token with MoE. The company says the smaller model can run on a consumer-level machine with 16GB or more of memory. To run gpt-oss-120b, you need 80GB of memory, which is more than you’re likely to find in the average consumer machine. It should fit on a single AI accelerator GPU like the Nvidia H100, though. Both models have a context window of 128,000 tokens.
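
For a rough sense of why those memory figures work out, here is a back-of-the-envelope estimate. The roughly 4-bit weight format (MXFP4-style quantization) is an assumption about how the models are distributed, and overhead for activations and context is ignored, so treat the numbers as illustrative only.

```python
# Back-of-the-envelope weight-memory estimate (illustrative assumptions only).
BYTES_PER_PARAM = 0.5  # assumes roughly 4-bit (MXFP4-style) weights

for name, total_params_billion, budget_gb in [
    ("gpt-oss-20b", 21, 16),    # consumer machine with 16GB or more of memory
    ("gpt-oss-120b", 117, 80),  # single 80GB accelerator, e.g., an Nvidia H100
]:
    weights_gb = total_params_billion * BYTES_PER_PARAM
    print(f"{name}: ~{weights_gb:.0f} GB of weights vs. a {budget_gb} GB budget")

# Note: MoE reduces the parameters *active* per token (3.6B / 5.1B), which
# helps speed, but all experts still have to fit in memory at once.
```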

The team says users of gpt-oss can expect robust performance similar to its leading cloud-based models. The larger one benchmarks between the o3 and o4-mini proprietary models in most tests, with the smaller version running just a little behind. It gets closest in math and coding tasks. In the knowledge-based Humanity’s Last Exam, o3 is far out in front with 24.9 percent (with tools), while gpt-oss-120b only manages 19 percent. For comparison, Google’s leading Gemini Deep Think hits 34.8 percent in that test.

Report: Intel struggles with new 18A process as it cuts workers and cancels projects

Intel has a lot riding on “18A,” its next-generation manufacturing process for silicon chips that the company claims will help it catch up to the lead that competitors like TSMC have built up over the last few years. With 18A, Intel would return to manufacturing its own processor designs in its own factories, including the upcoming Series 3 Core Ultra chips for laptops (codenamed Panther Lake), after manufacturing parts of all other Core Ultra chips with TSMC. Intel is also offering 18A manufacturing capacity to external chipmakers, a major milestone in former CEO Pat Gelsinger’s plan to make Intel a competitive cutting-edge (and primarily US-based) chip manufacturer for the rest of the industry.

But a Reuters report claims that Intel is struggling to make usable chips on 18A, according to “people who were briefed on the company’s test data since late last year.” As of this summer, these sources say that just 10 percent of the chips being manufactured on 18A are “up to [Intel’s] specifications.”

Intel disputed the numbers cited in the report. “Yields are better than that,” Intel CFO David Zinsner told Reuters, though neither Zinsner nor Intel provided an alternate figure.

Whether Intel is struggling with 18A or not, the story is easy to believe because it fits a decade-long pattern going back to early delays for Intel’s 14 nm process in 2013 and 2014. Intel had finally switched its lineup to the 14 nm process by late 2015, but it was then stuck on that manufacturing process for years, not fully moving on until 2019–2020 for laptop chips and 2021–2022 for desktop chips.

Through that span, Intel’s PR strategy was familiar: insist that things were ramping up well internally and that bugs were being ironed out, express confidence in the roadmap, give itself a little wiggle room on launch dates of actual products, and continue onward.

In this case, Intel told Reuters that its Panther Lake chips are “fully on track” as of July 30. Intel reaffirmed that it would launch Panther Lake using the 18A manufacturing process in the second half of 2025, with more models coming in 2026. These will be the milestones to watch for—Intel could very well be struggling to ramp up yields on 18A chips, but the struggles could be normal-ish and planned-for ones that don’t delay the company’s plans any more than they already have.

The Week in AI Governance

There was enough governance-related news this week to spin it out.

Anthropic, Google, OpenAI, Mistral, Aleph Alpha, Cohere and others commit to signing the EU AI Code of Practice. Google has now signed. Microsoft says it is likely to sign.

xAI signed the AI safety chapter of the code but is refusing to sign the others, calling them overreach, especially as pertains to copyright.

The only company that said it would not sign at all is Meta.

This was the underreported story. All the important AI companies other than Meta have gotten behind the safety section of the EU AI Code of Practice. This represents a considerable strengthening of their commitments, and introduces an enforcement mechanism. Even Anthropic will be forced to step up parts of their game.

That leaves Meta as the rogue state defector that once again gives zero anythings about safety, as in whether we all die, and also safety in its more mundane forms. Lol, we are Meta, indeed. So the question is, what are we going to do about it?

xAI took a middle position. I see the safety chapter as by far the most important, so as long as xAI is signing that and taking it seriously, great. Refusing the other parts is a strange flex, and I don’t know exactly what their problem is since they didn’t explain. They simply called it ‘unworkable,’ which is odd when Google, OpenAI and Anthropic all declared they found it workable.

Then again, xAI finds a lot of things unworkable. Could be a skill issue.

The next story, about DOGE turning an AI tool loose against regulations, is a sleeper development that could end up being a big deal. When I say ‘against regulations’ I do not mean against AI regulations. I mean against all ‘regulations’ in general, no matter what, straight up.

From the folks who brought you ‘figure out who we technically have the ability to fire and then fire all of them, and if something breaks maybe hire them back, this is the Elon way, no seriously’ and also ‘whoops we misread something so we cancelled PEPFAR and a whole lot of people are going to die,’ Doge is proud to give you ‘if a regulation is not technically required by law it must be an unbridled bad thing we can therefore remove, I wonder why they put up this fence.’

Hannah Natanson, Jeff Stein, Dan Diamond and Rachel Siegel (WaPo): The tool, called the “DOGE AI Deregulation Decision Tool,” is supposed to analyze roughly 200,000 federal regulations to determine which can be eliminated because they are no longer required by law, according to a PowerPoint presentation obtained by The Post that is dated July 1 and outlines DOGE’s plans.

Roughly 100,000 of those rules would be deemed worthy of trimming, the PowerPoint estimates — mostly through the automated tool with some staff feedback. The PowerPoint also suggests the AI tool will save the United States trillions of dollars by reducing compliance requirements, slashing the federal budget and unlocking unspecified “external investment.”

The conflation here is absolute. There are two categories of regulations: The half ‘required by law,’ and the half ‘worthy of trimming.’ Think of the trillions you can save.

They then try to hedge and claim that’s not how it is going to work.

Asked about the AI-fueled deregulation, White House spokesman Harrison Fields wrote in an email that “all options are being explored” to achieve the president’s goal of deregulating government.

No decisions have been completed on using AI to slash regulations, a HUD spokesperson said.

The spokesperson continued: “The intent of the developments is not to replace the judgment, discretion and expertise of staff but be additive to the process.”

That would be nice. I’m far more ‘we would be better off with a lot less regulations’ than most. I think it’s great to have an AI tool that splits off the half we can consider cutting from the half we are stuck with. I still think that ‘cut everything that a judge wouldn’t outright reverse if you tried cutting it’ is not a good strategy.

I find the ‘no we will totally consider whether this is a good idea’ talk rather hollow, both because of track record and also they keep telling us what the plan is?

“The White House wants us higher on the leader board,” said one of the three people. “But you have to have staff and time to write the deregulatory notices, and we don’t. That’s a big reason for the holdup.”

That’s where the AI tool comes in, the PowerPoint proposes. The tool will save 93 percent of the human labor involved by reviewing up to 500,000 comments submitted by the public in response to proposed rule changes. By the end of the deregulation exercise, humans will have spent just a few hours to cancel each of the 100,000 regulations, the PowerPoint claims.

They then close by pointing out that the AI makes mistakes even on the technical level it is addressing. Well, yeah.

China has its own AI Action Plan and is calling for international cooperation on AI. Wait, what do they mean by that? If you look in the press, that depends who you ask. All the news organizations will be like ‘the Chinese released an AI Action Plan’ and then not link to the actual plan, I had to have o3 dig it up.

Here’s o3’s translation of the actual text. This is almost all general gestures in the direction of capabilities, diffusion, infrastructure, and calls for open models. It definitely is not an AI Action Plan in the sense that America offered an AI Action Plan, which had lots of specific actionable proposals. This is more of a general outline of a plan and statement of goals, at best. At least it doesn’t talk about or call for a ‘race,’ but a call for everything to be open and accelerated is not obviously better.

  • Seize AI opportunities together. Governments, international organizations, businesses, research institutes, civil groups, and individuals should actively cooperate, accelerate digital‑infrastructure build‑out, explore frontier AI technologies, and spread AI applications worldwide, fully unlocking AI’s power to drive growth, achieve the UN‑2030 goals, and tackle global challenges.

  • Foster AI‑driven innovation. Uphold openness and sharing, encourage bold experimentation, build international S‑and‑T cooperation platforms, harmonize policy and regulation, and remove technical barriers to spur continuous breakthroughs and deep “AI +” applications.

  • Empower every sector. Deploy AI across manufacturing, consumer services, commerce, healthcare, education, agriculture, poverty reduction, autonomous driving, smart cities, and more; share infrastructure and best practices to supercharge the real economy.

  • Accelerate digital infrastructure. Expand clean‑energy grids, next‑gen networks, intelligent compute, and data centers; create interoperable AI infrastructure and unified compute‑power standards; support especially the Global South in accessing and applying AI.

  • Build a pluralistic open‑source ecosystem. Promote cross‑border open‑source communities and secure platforms, open technical resources and interfaces, improve compatibility, and let non‑sensitive tech flow freely.

  • Supply high‑quality data. Enable lawful, orderly, cross‑border data flows; co‑create top‑tier datasets while safeguarding privacy, boosting corpus diversity, and eliminating bias to protect cultural and ecosystem diversity.

  • Tackle energy and environmental impacts. Champion “sustainable AI,” set AI energy‑ and water‑efficiency standards, promote low‑power chips and efficient algorithms, and scale AI solutions for green transition, climate action, and biodiversity.

  • Forge standards and norms. Through ITU, ISO, IEC, and industry, speed up standards on safety, industry, and ethics; fight algorithmic bias and keep standards inclusive and interoperable.

  • Lead with public‑sector adoption. Governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.

  • Govern AI safety. Run timely risk assessments, create a widely accepted safety framework, adopt graded management, share threat intelligence, tighten data‑security across the pipeline, raise explainability and traceability, and prevent misuse.

  • Implement the Global Digital Compact. Use the UN as the main channel, aim to close the digital divide—especially for the Global South—and quickly launch an International AI Scientific Panel and a Global AI Governance Dialogue under UN auspices.

  • Boost global capacity‑building. Through joint labs, shared testing, training, industry matchmaking, and high‑quality datasets, help developing countries enhance AI innovation, application, and governance while improving public AI literacy, especially for women and children.

  • Create inclusive, multi‑stakeholder governance. Establish public‑interest platforms involving all actors; let AI firms share use‑case lessons; support think tanks and forums in sustaining global technical‑policy dialogue among researchers, developers, and regulators.

What does it have to say about safety or dealing with downsides? We have ‘forge standards and norms’ with a generic call for safety and ethics standards, which seems to mostly be about interoperability and ‘bias.’

Mainly we have ‘Govern AI safety,’ which is directionally nice to see I guess but essentially content free and shows no sign that the problems are being taken seriously on the levels we care about. Most concretely, in the ninth point, we have a call for regular safety audits of AI models. That all sounds like ‘the least you could do.’

Here’s one interpretation of the statement:

Brenda Goh (Reuters): China said on Saturday it wanted to create an organisation to foster global cooperation on artificial intelligence, positioning itself as an alternative to the U.S. as the two vie for influence over the transformative technology.

Li did not name the United States but appeared to refer to Washington’s efforts to stymie China’s advances in AI, warning that the technology risked becoming the “exclusive game” of a few countries and companies.

China wants AI to be openly shared and for all countries and companies to have equal rights to use it, Li said, adding that Beijing was willing to share its development experience and products with other countries, particularly the “Global South”. The Global South refers to developing, emerging or lower-income countries, mostly in the southern hemisphere.

The foreign ministry released online an action plan for global AI governance, inviting governments, international organisations, enterprises and research institutions to work together and promote international exchanges including through a cross-border open source community.

As in, we notice you are ahead in AI, and that’s not fair. You should do everything in the open so you let us catch up in all the ways you are ahead, so we can bury you using the ways in which you are behind. That’s not an unreasonable interpretation.

Here’s another.

The Guardian: Chinese premier Li Qiang has proposed establishing an organisation to foster global cooperation on artificial intelligence, calling on countries to coordinate on the development and security of the fast-evolving technology, days after the US unveiled plans to deregulate the industry.

Li warned Saturday that artificial intelligence development must be weighed against the security risks, saying global consensus was urgently needed.

“The risks and challenges brought by artificial intelligence have drawn widespread attention … How to find a balance between development and security urgently requires further consensus from the entire society,” the premier said.

Li said China would “actively promote” the development of open-source AI, adding Beijing was willing to share advances with other countries, particularly developing ones in the global south.

So that’s a call to keep security in mind, but every concrete reference is mundane and deals with misuse, and then they call for putting everything out into the open, with the main highlighted ‘risk’ to coordinate on being that America might get an advantage, and encouraging us to give it away via open models to ‘safeguard multilateralism.’

A third here, from the Japan Times, frames it as a call for an alliance to take aim at an American AI monopoly.

Director Michael Kratsios: China’s just-released AI Action Plan has a section that drives at a fundamental difference between our approaches to AI: whether the public or private sector should lead in AI innovation.

I like America’s odds of success.

He quotes point nine, which his translation has as ‘the public sector takes the lead in deploying applications.’ Whereas o3’s translation says ‘governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.’

Even in Michael’s preferred translation, this is saying government should aggressively deploy AI applications to improve government services. The American AI Action Plan, correctly, fully agrees with this. Nothing in the Chinese statement says to hold the private sector back. Quite the contrary.

The actual disagreement we have with point nine is the rest of it, where the Chinese think we should run regular safety audits, respect IP and enforce privacy. Those are not parts of the American AI Action Plan. Do you think we were right not to include those provisions, sir? If so, why?

Suppose in the future, we learned we were in a lot more danger than we think we are in now, and we did want to make a deal with China and others. Right now the two sides would be very far apart but circumstances could quickly change that.

Could we do it in a way that could be verified?

It wouldn’t be easy, but we do have tools.

This is the sort of thing we should absolutely be preparing to be able to do, whether or not we ultimately decide to do it.

Mauricio Baker: For the last year, my team produced the most technically detailed overview so far. Our RAND working paper finds: strong verification is possible—but we need ML and hardware research.

You can find the paper here and on arXiv. It includes a 5-page summary and a list of open challenges.

In the Cold War, the US and USSR used inspections and satellites to verify nuclear weapon limits. If future, powerful AI threatens to escape control or endanger national security, the US and China would both be better off with guardrails.

It’s a tough challenge:

– Verify narrow restrictions, like “no frontier AI training past some capability,” or “no mass-deploying if tests show unacceptable danger”

– Catch major state efforts to cheat

– Preserve confidentiality of models, data, and algorithms

– Keep overhead low

Still, reasons for optimism:

– No need to monitor all computers—frontier AI needs thousands of specialized AI chips.

– We can build redundant layers of verification. A cheater only needs to be caught once.

– We can draw from great work in cryptography and ML/hardware security.

One approach is to use existing chip security features like Confidential Computing, built to securely verify chip activities. But we’d need serious design vetting, teardowns, and maybe redesigns before the US could strongly trust Huawei’s chip security (or frankly, NVIDIA’s).

“Off-chip” mechanisms could be reliable sooner: network taps or analog sensors (vetted, limited use, tamper evident) retrofitted onto AI data centers. Then, mutually secured, airgapped clusters could check if claimed compute uses are reproducible and consistent with sensor data.

Add approaches “simple enough to work”: whistleblower programs, interviews of personnel, and intelligence activities. Whistleblower programs could involve regular in-person contact—carefully set up so employees can anonymously reveal violations, but not much more.

We could have an arsenal of tried-and-tested methods to confidentially verify a US-China AI treaty. But at the current pace, in three years, we’ll just have a few speculative options. We need ML and hardware researchers, new RFPs by funders, and AI company pilot programs.

Jeffrey Ladish: Love seeing this kind of in-depth work on AI treaty verification. A key fact is verification doesn’t have to be bullet proof to be useful. We can ratchet up increasingly robust technical solutions while using other forms of HUMINT and SIGINT to provide some level of assurance.

Remember, the AI race is a mixed-motive conflict, per Schelling. Both sides have an incentive to seek an advantage, but also have an incentive to avoid mutually awful outcomes. Like with nuclear war, everyone loses if any side loses control of superhuman AI.

This makes coordination easier, because even if both sides don’t like or trust each other, they have an incentive to cooperate to avoid extremely bad outcomes.

It may turn out that even with real efforts there are not good technical solutions. But I think it is far more likely that we don’t find the technical solutions due to lack of trying, rather than that the problem is so hard that it cannot be done.
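
As a toy illustration of the “redundant layers of verification” point above (a cheater only needs to be caught once), here is a small calculation with made-up per-layer detection probabilities; the specific numbers are invented and not drawn from the RAND paper.

```python
# Toy model: independent verification layers, each with some probability of
# catching a covert frontier-training effort. The per-layer probabilities
# below are invented purely for illustration.
layers = {
    "chip-level security features": 0.5,
    "off-chip sensors / network taps": 0.6,
    "whistleblower program": 0.4,
    "national intelligence": 0.3,
}

p_evade_all = 1.0
for name, p_catch in layers.items():
    p_evade_all *= (1.0 - p_catch)

print(f"Chance a cheater evades every layer: {p_evade_all:.1%}")
print(f"Chance of being caught at least once: {1 - p_evade_all:.1%}")
# Even individually leaky layers stack up: 0.5 * 0.4 * 0.6 * 0.7 is roughly 8% evasion.
```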

The reaction to the AI Action Plan was almost universally positive, including here from Nvidia and AMD. My own review, focused on the concrete proposals within, also reflected this. It far exceeded my expectations on essentially all fronts, so much so that I would be actively happy to see most of its proposals implemented rather than nothing be done.

I and others focused on the concrete policy, and especially concrete policy relative to expectations and what was possible in context, for which it gets high praise.

But a document like this might have a lot of its impact due to the rhetoric instead, even if it lacks legal force, or cause people to endorse the approach as ideal in absolute terms rather than being the best that could be done at the time.

So, for example, the actual proposals for open models were almost reasonable, but if the takeaway is lots more rhetoric of ‘yay open models’ like it is in this WSJ editorial, where the central theme is very clearly ‘we must beat China, nothing else matters, this plan helps beat China, so the plan is good’ then that’s really bad.

Another important example: Nothing in the policy proposals here makes future international cooperation harder. The rhetoric? A completely different story.

The WSJ editorial also noticed the same obvious contradictions with other Trump policies that I did: throttling renewable energy, high-skilled immigration, and even visas is incompatible with our goals here; the focus on ‘woke AI’ could have been much worse but remains a distraction; and, I would add, what is up with massive cuts to STEM research if we are taking this seriously? If we are serious about winning and worry that one false move would ‘forfeit the race,’ then we need to act like it.

Of course, none of that is up to the people who were writing the AI Action Plan.

What the WSJ editorial board didn’t notice, or mention at all, is the possibility that there are other risks or downsides at play here, and it dismisses outright the possibility of any form of coordination or cooperation. That’s a very wrong, dangerous and harmful attitude, one it shares with many in or lobbying the government.

A worry I have on reflection, one I wasn’t focusing on at the time, is that officials and others might treat endorsements of the good policy proposals here as an endorsement of the overall plan presented by the rhetoric, especially the rhetoric at the top of the plan, or as an endorsement of the plan’s sufficiency, as if it were okay to ignore and not speak about what the plan ignores and does not speak about.

That rhetoric was alarmingly (but unsurprisingly) terrible, as it follows the general administration line of emphasizing whenever possible that we are in an ‘AI race’ that will likely go straight to AGI and superintelligence, even if those words couldn’t themselves be used in the plan, and where ‘winning’ is measured in the mostly irrelevant metric of ‘market share.’

And indeed, the inability to mention AGI or superintelligence in the plan leads to exactly the standard David Sacks lines that toxically center the situation on ‘winning the race’ by ‘exporting the American tech stack.’

I will keep repeating, if necessary until I am blue in the face, that this is effectively a call (whose motivations I do not care to speculate about) to sacrifice the future and get us all killed in order to maximize Nvidia’s market share.

There is no ‘tech stack’ in the meaningful sense of necessary integration. You can run most any AI model on most any advanced chip, and switch on an hour’s notice.

It does not matter who built the chips. It matters who runs the chips and for whose benefit. Supply is constrained by manufacturing capacity, so every chip we sell is one less chip we have. The idea that failure to hand over large percentages of the top AI chips to various authoritarians, or even selling H20s directly to China as they currently plan to do, would ‘forfeit’ ‘the race’ is beyond absurd.

Indeed, both the rhetoric and the actions discussed here do the exact opposite. They put pressure on others, especially China, to push harder towards ‘the race,’ including the part that counts, the one to AGI, and also the race for diffusion and AI’s benefits. And the chips we sell arm China and others to do this important racing.

There is later talk acknowledging that ‘we do not intend to ignore the risks of this revolutionary technological power.’ But Sacks frames this as entirely about the risk that AI will be misused or stolen by malicious actors. Which is certainly a danger, but far from the primary thing to worry about.

That’s what happens when you are forced to pretend AGI, ASI, potential loss of control and all other existential risks do not exist as possibilities. The good news is that there are some steps in the actual concrete plan to start preparing for those problems, even if they are insufficient and it can’t be explained, but it’s a rough path trying to sustain even that level of responsibility under this kind of rhetorical oppression.

The vibes and rhetoric were accelerationist throughout, especially at the top, and completely ignored the risks and downsides of AI, and the dangers of embracing a rhetoric based on an ‘AI race’ that we ‘must win,’ and where that winning mostly means chip market share. Going down this path is quite likely to get us all killed.

I am happy to make the trade of allowing the rhetoric to be optimistic, and to present the Glorious Transhumanist Future as likely to be great even as we have no idea how to stay alive and in control while getting there, so long as we can still agree to take the actions we need to take in order to tackle that staying alive and in control bit – again, the actions are mostly the same even if you are highly optimistic that it will work out.

But if you dismiss the important dangers entirely, then your chances get much worse.

So I want to be very clear that I hate that rhetoric, I think it is no good, very bad rhetoric both in terms of what is present and what (often with good local reasons) is missing, while reiterating that the concrete particular policy proposals were as good as we could reasonably have hoped for on the margin, and the authors did as well as they could plausibly have done with people like Sacks acting as veto points.

That includes the actions on ‘preventing Woke AI,’ which have convinced even Sacks to frame this as preventing companies from intentionally building DEI into their models. That’s fine, I wouldn’t want that either.

Even outlets like Transformer weighed in positively, with them calling the plan ‘surprisingly okay’ and noting its ability to get consensus support, while ignoring the rhetoric. They correctly note the plan is very much not adequate. It was a missed opportunity to talk about or do something about various risks (although I understand why), and there was much that could have been done that wasn’t.

Seán Ó hÉigeartaigh: Crazy to reflect on the three global AI competitions going on right now:

– 1. US political leadership have made AI a prestige race, echoing the Space Race. It’s cool and important and strategic, and they’re going to Win.

– 2. For Chinese leadership AI is part of economic strength, soft power and influence. Technology is shared, developing economies will be built on Chinese fundamental tech, the Chinese economy and trade relations will grow. Weakening trust in a capricious US is an easy opportunity to take advantage of.

– 3. The AGI companies are racing something they think will out-think humans across the board, that they don’t yet know how to control, and think might literally kill everyone.

Scariest of all is that it’s not at all clear to decision-makers that these three things are happening in parallel. They think they’re playing the same game, but they’re not.

I would modify the US political leadership position. I think to a lot of them it’s literally about market share, primarily chip market share. I believe this because they keep saying, with great vigor, that it is literally about chip market share. But yes, they think this matters because of prestige, and because this is how you get power.

My guess is, mostly:

  1. The AGI companies understand these are three distinct things.

    1. They are using the confusions of political leadership for their own ends.

  2. The Chinese understand there are two distinct things, but not three.

    1. As in, they know what US leadership is doing, and they know what they are doing, and they know these are distinct things.

    2. They do not feel the AGI and understand its implications.

  3. The bulk of the American political class cannot differentiate between the US and Chinese strategies, or strategic positions, or chooses to pretend not to, cannot imagine things other than ordinary prestige, power and money, and cannot feel the AGI.

    1. There are those within the power structure who do feel the AGI, to varying extents, and are trying to sculpt actions (including the action plan) accordingly with mixed success.

    2. An increasing number of them, although still small, do feel the AGI to varying extents but have yet to cash that out into anything except ‘oh’.

  4. There is of course a fourth race or competition, which is to figure out how to build it without everyone dying.

The actions one would take in each of these competitions are often very similar, especially the first three and often the fourth as well, but sometimes are very different. What frustrates me most is when there is an action that is wise on all levels, yet we still don’t do it.

Also, on the ‘preventing Woke AI’ question, the way the plan and order are worded seems designed to make compliance easy and not onerous, but given other signs from the Trump administration lately, I think we have reason to worry…

Fact Post: Trump’s FCC Chair says he will put a “bias monitor” in place who will “report directly” to Trump as part of the deal for Sky Dance to acquire CBS.

Ari Drennen: The term that the Soviet Union used for this job was “apparatchik” btw.

I was willing to believe that firing Colbert was primarily a business decision. This is very different. Imagine the headline in reverse: “Harris’s FCC Chair says she will put a “bias monitor” in place who will “report directly” to Harris as part of the deal for Sky Dance to acquire CBS.”

Now imagine it is 2029, and the headline is ‘AOC appoints new bias monitor for CBS.’ Now imagine it was FOX. Yeah. Maybe don’t go down this road?

Director Kratsios has now given us his view on the AI Action Plan. This is a chance to see how much it is viewed as terrible rhetoric versus its good policy details, and to what extent overall policy is going to be guided by good details versus terrible rhetoric.

Peter Wildeford offers his takeaway summary.

Peter Wildeford: Winning the Global AI Race

  1. The administration’s core philosophy is a direct repudiation of the previous one, which Kratsios claims was a “fear-driven” policy “manically obsessed” with hypothetical risks that stifled innovation.

  2. The plan is explicitly called an “Action Plan” to signal a focus on immediate execution and tangible results, not another government strategy document that just lists aspirational goals.

  3. The global AI race requires America to show the world a viable, pro-innovation path for AI development that serves as an alternative to the EU’s precautionary, regulation-first model.

He leads with hyperbolic slander, which is par for the course, but yes concrete action plans are highly useful and the EU can go too far in its regulations.

There are kind of two ways to go with this.

  1. You could label any attempt to do anything to ensure we don’t die as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, and thus you probably die.

  2. You could label the EU and Biden Administration as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, contrasting that with your superior approach, and then having paid this homage do reasonable things.

The AI Action Plan as written was the second one. But you have to do that on purpose, because the default outcome is to shift to the first one.

Executing the ‘American Stack’ Export Strategy

  1. The strategy is designed to prevent a scenario where the world runs on an adversary’s AI stack by proactively offering a superior, integrated American alternative.

  2. The plan aims to make it simple for foreign governments to buy American by promoting a “turnkey solution”—combining chips, cloud, models, and applications—to reduce complexity for the buyer.

  3. A key action is to reorient US development-finance institutions like the DFC and EXIM to prioritize financing for the export of the American AI stack, shifting their focus from traditional hard infrastructure.

The whole ‘export’ strategy is either nonsensical, or an attempt to control capital flow, because I heard a rumor that it is good to be the ones directing capital flow.

Once again, the ‘tech stack’ thing is not, as described here, what’s the word? Real.

The ‘adversary’ does not have a ‘tech stack’ to offer, they have open models people can run on the same chips. They don’t have meaningful chips to even run their own operations, let alone export. And the ‘tech’ does not ‘stack’ in a meaningful way.

Turnkey solutions and package marketing are real. I don’t see any reason for our government to be so utterly obsessed with them, or even involved at all. That’s called marketing and serving the customer. Capitalism solves this. Microsoft and Amazon and Google and OpenAI and Anthropic and so on can and do handle it.

Why do we suddenly think the government needs to be prioritizing financing this? Given that it includes chip exports, how is it different from ‘traditional hard infrastructure’? Why do we need financing for the rest of this illusory stack when it is actually software? Shouldn’t we still be focusing on ‘traditional hard infrastructure’ in the places we want it, and then whenever possible exporting the inference?

Refining National Security Controls

  1. Kratsios argues the biggest issue with export controls is not the rules themselves but the lack of resources for enforcement, which is why the plan calls for giving the Bureau of Industry and Security (BIS) the tools it needs.

  2. The strategy is to maintain strict controls on the most advanced chips and critical semiconductor-manufacturing components, while allowing sales of less-advanced chips under a strict licensing regime.

  3. The administration is less concerned with physical smuggling of hardware and more focused on preventing PRC front companies from using legally exported hardware for large-scale, easily flaggable training runs.

  4. Proposed safeguards against misuse are stringent “Know Your Customer” (KYC) requirements paired with active monitoring for the scale and scope of compute jobs.

It is great to see the emphasis on enforcement. It is great to hear that the export control rules are not the issue.

In which case, can we stop waiving them, such as with H20 sales to China? Thank you. There is of course a level at which chips can be safely sold even directly to China, but the experts all agree the H20 is past that level.

The lack of concern about smuggling is a blind eye in the face of overwhelming evidence of widespread smuggling. I don’t much care if they are claiming to be concerned, I care about the actual enforcement, but we need enforcement. Yes, we should stop ‘easily flaggable’ PRC training runs and use KYC techniques, but this is saying we should look for our keys under the streetlight and then if we don’t find the keys assume we can start our car without them.

Championing ‘Light-Touch’ Domestic Regulation

  1. The administration rejects the idea of a single, overarching AI law, arguing that expert agencies like the FDA and DOT should regulate AI within their specific domains.

  2. The president’s position is that a “patchwork of regulations” across 50 states is unacceptable because the compliance burden disproportionately harms innovative startups.

  3. While using executive levers to discourage state-level rules, the administration acknowledges that a durable solution requires an act of Congress to create a uniform federal standard.

Yes, a ‘uniform federal standard’ would be great, except they have no intention of even pretending to meaningfully pursue one. They want each federal agency to do its thing in its own domain, as in a ‘use case’ based AI regime which when done on its own is the EU approach and doomed to failure.

I do acknowledge the step down from ‘kill state attempts to touch anything AI’ (aka the insane moratorium) to ‘discourage’ state-level rules using ‘executive levers,’ at which point we are talking price. One worries the price will get rather extreme.

Addressing AI’s Economic Impact at Home

  1. Kratsios highlights that the biggest immediate labor need is for roles like electricians to build data centers, prompting a plan to retrain Americans for high-paying infrastructure jobs.

  2. The technology is seen as a major productivity tool that provides critical leverage for small businesses to scale and overcome hiring challenges.

  3. The administration issued a specific executive order on K-12 AI education to ensure America’s students are prepared to wield these tools in their future careers.

Ahem, immigration, ahem, also these things rarely work, but okay, sure, fine.

Prioritizing Practical Infrastructure Over Hypothetical Risk

  1. Kratsios asserts that chip supply is no longer a major constraint; the key barriers to the AI build-out are shortages of skilled labor and regulatory delays in permitting.

  2. Success will be measured by reducing the time from permit application to “shovels in the ground” for new power plants and data centers.

  3. The former AI Safety Institute is being repurposed to focus on the hard science of metrology—developing technical standards for measuring and evaluating models, rather than vague notions of “safety.”

It is not the only constraint, but it is simply false to say that chip supply is no longer a major constraint.

Defining success in infrastructure in this way would, if taken seriously, lead to large distortions in the usual obvious Goodhart’s Law ways. I am going to give the benefit of the doubt and presume this ‘success’ definition is local, confined to infrastructure.

If the only thing America’s former AISI can now do is produce formal, measured technical standards, then that is at least a useful thing that it can hopefully do well, but yeah, it basically rules out at the conceptual level the idea of actually addressing the most important safety issues, by dismissing them as ‘vague.’

This goes beyond ‘that which is measured is managed’ to an open plan of ‘that which is not measured is not managed, it isn’t even real.’ Guess how that turns out.

Defining the Legislative Agenda

  1. While the executive branch has little power here, Kratsios identifies the use of copyrighted data in model training as a “quite controversial” area that Congress may need to address.

  2. The administration would welcome legislation that provides statutory cover for the reformed, standards-focused mission of the Center for AI Standards and Innovation (CAISI).

  3. Continued congressional action is needed for appropriations to fund critical AI-related R&D across agencies like the National Science Foundation.

TechCrunch: 20 national security experts urge Trump administration to restrict Nvidia H20 sales to China.

The letter says the H20 is a potent accelerator of China’s frontier AI capabilities and could be used to strengthen China’s military.

Americans for Responsible Innovation: The H20 and the AI models it supports will be deployed by China’s PLA. Under Beijing’s “Military-Civil Fusion” strategy, it’s a guarantee that H20 chips will be swiftly adapted for military purposes. This is not a question of trade. It is a question of national security.

It would be bad enough if this was about selling the existing stock of H20s, that Nvidia has taken a writedown on, even though it could easily sell them in the West instead. It is another thing entirely that Nvidia is using its capacity on TSMC machines to make more of them, choosing to create chips to sell directly to China instead of creating chips for us.

Ruby Scanlon: Nvidia placed orders for 300,000 H20 chipsets with contract manufacturer TSMC last week, two sources said, with one of them adding that strong Chinese demand had led the US firm to change its mind about just relying on its existing stockpile.

It sounds like we’re planning on feeding what would have been our AI chips to China. And then maybe you should start crying? Or better yet tell them they can’t do it?

I share Peter Wildeford’s bafflement here:

Peter Wildeford: “China is close to catching up to the US in AI so we should sell them Nvidia chips so they can catch up even faster.”

I never understand this argument from Nvidia.

The argument is also false, Nvidia is lying, but I don’t understand even if it were true.

There is only a 50% premium to buy Nvidia B200 systems within China, which suggests quite a lot of smuggling is going on.

Tao Burga: Nvidia still insists that there’s “no evidence of any AI chip diversion.” Laughable. All while lobbying against the data center chip location verification software that would provide the evidence. Tell me, where does the $1bn [in AI chips smuggled to China] go?

Rob Wiblin: Nvidia successfully campaigning to get its most powerful AI chips into China has such “the capitalists will sell us the rope with which we will hang them” energy.

Various people I follow keep emphasizing that China is smuggling really a lot of advanced AI chips, including B200s and such, and perhaps we should be trying to do something about it, because it seems rather important.

Chipmakers will always oppose any proposal to track chips or otherwise crack down on smuggling and call it ‘burdensome,’ where the ‘burden’ is ‘if you did this they would not be able to smuggle as many chips, and thus we would make less money.’

Reuters Business: Demand in China has begun surging for a business that, in theory, shouldn’t exist: the repair of advanced artificial intelligence chipsets that the US has banned the export of to its trade and tech rival.

Peter Wildeford: Nvidia position: “datacenters from smuggled products is a losing proposition […] Datacenters require service and support, which we provide only to authorized NVIDIA products.”

Reality: Nvidia AI chip repair industry booms in China for banned products.

Scott Bessent warns that TSMC’s $40 billion Arizona fab, which could meet 7% of American chip demand, keeps getting delayed, and he blames inspectors and red tape. There’s confusion in the headline suggesting he is warning it would ‘only’ meet 7% of demand, but 7% of demand would be amazing for one plant, and the article’s text reflects this.

Bessent criticized regulatory hurdles slowing construction of the $40 billion facility. “Evidently, these chip design plants are moving so quickly, you’re constantly calling an audible and you’ve got someone saying, ‘Well, you said the pipe was going to be there, not there. We’re shutting you down,’” he explained.

It does also mean that if we want to meet 100% or more of demand we will need a lot more plants, but we knew that.

Epoch reports that Chinese hardware is behind American hardware, and is ‘closing the gap’ but faces major obstacles in chip manufacturing capability.

Epoch: Even if we exclude joint ventures with U.S., Australian, or U.K. institutions (where the developers can access foreign silicon), the clear majority of homegrown models relied on NVIDIA GPUs. In fact, it took until January 2024 for the first large language model to reportedly be trained entirely on Chinese hardware, arguably years after the first large language models.

Probably the most important reason for the dominance of Western hardware is that China has been unable to manufacture these AI chips in adequate volumes. Whereas Huawei reportedly manufactured 200,000 Ascend 910B chips in 2024, estimates suggest that roughly one million NVIDIA GPUs were legally delivered to China in the same year.

That’s right. For every top level Huawei chip manufactured, Nvidia sold five to China. No, China is not about to export a ‘full Chinese tech stack’ for free the moment we turn our backs. They’re offering downloads of r1 and Kimi K2, to be run on our chips, and they use all their own chips internally because they still have a huge shortage.

Put bluntly, we don’t see China leaping ahead on compute within the next few years. Not only would China need to overcome major obstacles in chip manufacturing and software ecosystems, they would also need to surpass foreign companies making massive investments into hardware R&D and chip fabrication.

Unless export controls erode or Beijing solves multiple technological challenges in record time, we think that China will remain at least one generation behind in hardware. This doesn’t prevent Chinese developers from training and running frontier AI models, but it does make it much more costly.

Overall, we think these costs are large enough to put China at a substantial disadvantage in AI scaling for at least the rest of the decade.

Beating China may or may not be your number one priority. We do know that taking export controls seriously is the number one priority for ‘beating China.’

Intel will cancel 14A and following nodes, essentially abandoning the technological frontier, if it cannot win a major external customer.


The Week in AI Governance Read More »

research-roundup:-7-cool-science-stories-we-almost-missed

Research roundup: 7 cool science stories we almost missed


Other July stories: Solving a 150-year-old fossil mystery and the physics of tacking a sailboat.

150-year-old fossil of Palaeocampa anthrax isn’t a sea worm after all. Credit: Christian McCall

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. July’s list includes the discovery of the tomb of the first Maya king of Caracol in Belize, the fluid dynamics of tacking a sailboat, how to determine how fast blood was traveling when it stained cotton fabric, and how the structure of elephant ears could lead to more efficient indoor temperature control in future building designs, among other fun stories.

Tomb of first king of Caracol found

University of Houston provost and archeologist Diane Chase in newly discovered tomb of the first ruler of the ancient Maya city Caracol and the founder of its royal dynasty.

Credit: Caracol Archeological Project/University of Houston

Archaeologists Arlen and Diane Chase are the foremost experts on the ancient Maya city of Caracol in Belize and are helping to pioneer the use of airborne LiDAR to locate hidden structures in dense jungle, including a web of interconnected roadways and a cremation site in the center of the city’s Northeast Acropolis plaza. They have been painstakingly excavating the site since the mid-1980s. Their latest discovery is the tomb of Te K’ab Chaak, Caracol’s first ruler, who took the throne in 331 CE and founded a dynasty that lasted more than 460 years.

This is the first royal tomb the husband-and-wife team has found in their 40+ years of excavating the Caracol site. Te K’ab Chaak’s tomb (containing his skeleton) was found at the base of a royal family shrine, along with pottery vessels, carved bone artifacts, jadeite jewelry, and a mosaic jadeite death mask. The Chases estimate that the ruler likely stood about 5’7″ tall and was probably quite old when he died, given his lack of teeth. The Chases are in the process of reconstructing the death mask and conducting DNA and stable isotope analysis of the skeleton.

How blood splatters on clothing

Cast-off blood stain pattern

Credit: Jimmy Brown/CC BY 2.0

Analyzing blood splatter patterns is a key focus in forensic science, and physicists have been offering their expertise for several years now, including in two 2019 studies on splatter patterns from gunshot wounds. The latest insights gleaned from physics concern the distinct ways in which blood stains cotton fabrics, according to a paper published in Forensic Science International.

Blood is a surprisingly complicated fluid, in part because the red blood cells in human blood can form long chains, giving it the consistency of sludge. And blood starts to coagulate immediately once it leaves the body. Blood is also viscoelastic: not only does it deform slowly when exposed to an external force, but once that force has been removed, it will return to its original configuration. Add in coagulation and the type of surface on which it lands, and correctly interpreting the resulting spatter patterns becomes incredibly difficult.

The co-authors of the July study splashed five different fabric surfaces with pig’s blood at varying velocities, capturing the action with high-speed cameras. They found that when a blood stain has “fingers” spreading out from the center, the more fingers there are, the faster the blood was traveling when it struck the fabric. And the faster the blood was moving, the more “satellite droplets” there will be—tiny stains surrounding the central stain. Finally, it’s much easier to estimate the velocity of blood splatter on plain-woven cotton than on other fabrics like twill. The researchers plan to extend future work to include a wider variety of fabrics, weaves, and yarns.

DOI: Forensic Science International, 2025. 10.1016/j.forsciint.2025.112543  (About DOIs).

Offshore asset practices of the uber-rich

The uber-rich aren’t like the rest of us in so many ways, including their canny exploitation of highly secretive offshore financial systems to conceal their assets and/or identities. Researchers at Dartmouth have used machine learning to analyze two public databases and identified distinct patterns in the strategies oligarchs and billionaires in 65 different countries employ when squirreling away offshore assets, according to a paper published in the journal PLoS ONE.

One database tracks offshore finance, while the other rates different countries on their “rule of law.” This enabled the team to study key metrics like how much of their assets elites move offshore, how much they diversify, and how much they make use of “blacklisted” offshore centers that are not part of the mainstream financial system. The researchers found three distinct patterns, all tied to where an oligarch comes from.

Billionaires from authoritarian countries are more likely to diversify their hidden assets across many different centers—a “confetti strategy”—perhaps because these are countries likely to exact political retribution. Others, from countries with effective government regulations—or where there is a pronounced lack of civil rights—are more likely to employ a “concealment strategy” that includes more blacklisted jurisdictions, relying more on bearer shares that protect their anonymity. Those elites most concerned about corruption and/or having their assets seized typically employ a hybrid strategy.

The work builds on an earlier 2023 study concluding that issuing sanctions on individual oligarchs in Russia, China, the US, and Hong Kong is less effective than targeting the small, secretive network of financial experts who manage that wealth on behalf of the oligarchs. That’s because sanctioning just one wealth manager effectively takes out several oligarchs at once, per the authors.

DOI: PLoS ONE, 2025. 10.1371/journal.pone.0326228  (About DOIs).

Medieval remedies similar to TikTok trends

Medieval manuscripts like the Cotton MS Vitellius C III highlight uses for herbs that reflect modern-day wellness trends.

Credit: The British Library

The Middle Ages are stereotypically described as the “Dark Ages,” with a culture driven by superstition—including its medical practices. But a perusal of the hundreds of medical manuscripts collected in the online Corpus of Early Medieval Latin Medicine (CEMLM) reveals that in many respects, medical practices were much more sophisticated; some of the remedies are not much different from alternative medicine remedies touted by TikTok influencers today. That certainly doesn’t make them medically sound, but it does suggest we should perhaps not be too hasty in who we choose to call backward and superstitious.

Per Binghamton University historian Meg Leja, people of the Middle Ages were not “anti-science.” In fact, they were often quite keen on learning from the natural world. And their health practices, however dubious they might appear to us—lizard shampoo, anyone?—were largely based on the best knowledge available at the time. There are detox cleanses and topical ointments, such as crushing the stone of a peach, mixing it with rose oil, and smearing it on one’s forehead to relieve migraine pain. (Rose oil may actually be an effective migraine pain reliever.) The collection is well worth perusing; pair it with the Wellcome-funded Curious Cures in Cambridge Libraries to learn even more about medieval medical recipes.

Physics of tacking a sailboat

The Courant Institute's Christiana Mavroyiakoumou, above at Central Park's Conservatory Water with model sailboats

Credit: Jonathan King/NYU

Possibly the most challenging basic move for beginner sailors is learning how to tack to sail upwind. Done correctly, the sail will flip around into a mirror image of its previous shape. And in competitive sailboat racing, a bad tack can lose the race. So physicists at the University of Michigan decided to investigate the complex fluid dynamics at play to shed more light on the tricky maneuver, according to a paper published in the journal Physical Review Fluids.

After modeling the maneuver and conducting numerical simulations, the physicists concluded that there are three primary factors that determine a successful tack: the stiffness of the sail, its tension before the wind hits, and the final sail angle in relation to the direction of the wind. Ideally, one wants a less flexible, less curved sail with high tension prior to hitting the wind and to end up with a 20-degree final sail angle. Other findings: It’s harder to flip a slack sail when tacking, and how fast one manages to flip the sail depends on the sail’s mass and the speed and acceleration of the turn.

DOI: Physical Review Fluids, 2025. 10.1103/37xg-vcff  (About DOIs).

Elephant ears inspire building design

African bush elephant with ears spread in a threat or attentive position and visible blood vessels

Maintaining a comfortable indoor temperature constitutes the largest fraction of energy usage for most buildings, with the surfaces of walls, windows, and ceilings contributing to roughly 63 percent of energy loss. Engineers at Drexel University have figured out how to make surfaces that help rather than hamper efforts to maintain indoor temperatures: using so-called phase-change materials that can absorb and release thermal energy as needed as they shift between liquid and solid states. They described the breakthrough in a paper published in the Journal of Building Engineering.

The Drexel group previously developed a self-warming concrete using a paraffin-based material, similar to the stuff used to make candles. The trick this time around, they found, was to create the equivalent of a vascular network within cement-based building materials. They used a printed polymer matrix to create a grid of channels in the surface of concrete and filled those channels with the same paraffin-based material. When temperatures drop, the material turns into a solid and releases heat energy; as temperatures rise, it shifts its phase to a liquid and absorbs heat energy.
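For a rough sense of the energy involved (illustrative numbers, not figures from the Drexel study): the heat a paraffin-type phase-change material stores or releases while melting or freezing is simply its mass times its latent heat of fusion, which for paraffin waxes is typically on the order of 200 kJ/kg.

```latex
% Back-of-the-envelope estimate with assumed values (not from the paper):
% roughly 10 kg of paraffin embedded in a wall panel
Q = m \, L_f \approx 10\ \mathrm{kg} \times 200\ \mathrm{kJ/kg}
  = 2\ \mathrm{MJ} \approx 0.56\ \mathrm{kWh}
```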

The group tested several different configurations and found that the most effective combination of strength and thermal regulation was realized with a diamond-shaped grid, which boasted the most vasculature surface area. This configuration successfully slowed the cooling and heating of its surface to between 1 and 1.2 degrees Celsius per hour, while holding up against stretching and compression tests. The structure is similar to that of jackrabbit and elephant ears, which have extensive vascular networks to help regulate body temperature.

DOI: Journal of Building Engineering, 2025. 10.1016/j.jobe.2025.112878  (About DOIs).

ID-ing a century-old museum specimen

Neotype of Palaeocampa anthrax from the Mazon Creek Lagerstätte and rediscovered in the Invertebrate Paleontology collection of the MCZ.

Credit: Richard J. Knecht

Natural history museums have lots of old specimens in storage, and revisiting those specimens can sometimes lead to new discoveries. That’s what happened to University of Michigan evolutionary biologist Richard J. Knecht as he was poring over a collection at Harvard’s Museum of Comparative Zoology while a grad student there. One of the fossils, originally discovered in 1865, was labeled a millipede. But Knecht immediately recognized it as a type of lobopod, according to a paper published in the journal Communications Biology. It’s the youngest lobopod yet found, and this particular species also marks an evolutionary leap since it’s the first known lobopod to be non-marine.

Lobopods are the evolutionary ancestors to arthropods (insects, spiders, and crustaceans), and their fossils are common along Paleozoic sea beds. Apart from tardigrades and velvet worms, however, they were thought to be confined to oceans. But Palaeocampa anthrax has legs on every trunk segment, as well as almost 1,000 bristly spines covering its body with orange halos at their tips. Infrared spectroscopy revealed traces of fossilized molecules—likely a chemical that emanated from the tips of the spines. Since any chemical defense would just disperse in water, limiting its effectiveness, Knecht concluded that Palaeocampa anthrax was most likely amphibious rather than being solely aquatic.

DOI: Communications Biology, 2025. 10.1038/s42003-025-08483-0  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Research roundup: 7 cool science stories we almost missed Read More »

in-search-of-riches,-hackers-plant-4g-enabled-raspberry-pi-in-bank-network

In search of riches, hackers plant 4G-enabled Raspberry Pi in bank network

“One of the most unusual elements of this case was the attacker’s use of physical access to install a Raspberry Pi device,” Group-IB Senior Digital Forensics and Incident Response Specialist Nam Le Phuong wrote. “This device was connected directly to the same network switch as the ATM, effectively placing it inside the bank’s internal network. The Raspberry Pi was equipped with a 4G modem, allowing remote access over mobile data.”

To maintain persistence, UNC2891 also compromised a mail server because it had constant Internet connectivity. The Raspberry Pi and the mail server backdoor would then communicate by using the bank’s monitoring server as an intermediary. The monitoring server was chosen because it had access to almost every server within the data center.

The Network Monitoring Server as an intermediary between the Raspberry Pi and the Mail Server.

Credit: Group-IB


As Group-IB was initially investigating the bank’s network, researchers noticed some unusual behaviors on the monitoring server, including an outbound beaconing signal every 10 minutes and repeated connection attempts to an unknown device. The researchers then used a forensic tool to analyze the communications. The tool identified the endpoints as a Raspberry Pi and the mail server but was unable to identify the process names responsible for the beaconing.
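For readers curious what spotting “beaconing” can look like in practice, here is a minimal, illustrative Python sketch that flags near-constant inter-connection intervals (such as one call-out every 10 minutes) given a list of connection timestamps. The function name and thresholds are assumptions for the example; this is not the forensic tool Group-IB used.

```python
# Illustrative only: flag periodic "beaconing" from connection timestamps.
# Assumes you have already exported flow logs as epoch times (in seconds).
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, min_events=6, max_jitter_ratio=0.05):
    """Return True if inter-arrival times are nearly constant (e.g. every 10 minutes)."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    deltas = [b - a for a, b in zip(ts, ts[1:])]
    avg = mean(deltas)
    jitter = pstdev(deltas)
    # Low jitter relative to the average interval suggests a scheduled beacon.
    return avg > 0 and (jitter / avg) <= max_jitter_ratio

# Example: connections roughly every 600 seconds would be flagged.
events = [0, 600, 1200, 1801, 2400, 2999, 3600]
print(looks_like_beaconing(events))  # True
```

Real traffic has gaps and noise, so production tooling uses more robust statistics, but the underlying signal, a tight distribution of inter-arrival times, is the same.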

The forensic triage tool is unable to collect the relevant process name or ID associated with the socket.

Credit: Group-IB


The researchers then captured the system memory as the beacons were sent. The review identified the process as lightdm, a name associated with the open source LightDM display manager. The process appeared to be legitimate, but the researchers found it suspicious because the LightDM binary was installed in an unusual location. After further investigation, the researchers discovered that the processes of the custom backdoor had been deliberately disguised in an attempt to throw researchers off the scent.

Phuong explained:

The backdoor process is deliberately obfuscated by the threat actor through the use of process masquerading. Specifically, the binary is named “lightdm”, mimicking the legitimate LightDM display manager commonly found on Linux systems. To enhance the deception, the process is executed with command-line arguments resembling legitimate parameters – for example,

lightdm --session child 11 19 — in an effort to evade detection and mislead forensic analysts during post-compromise investigations.

These backdoors were actively establishing connections to both the Raspberry Pi and the internal Mail Server.

As noted earlier, the processes were disguised using the Linux bind mount. Following that discovery, Group-IB added the technique to the MITRE ATT&CK framework as “T1564.013 – Hide Artifacts: Bind Mounts.”
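As an illustration of why that addition matters to defenders, the sketch below (a hypothetical example, not Group-IB’s tooling) scans /proc/self/mountinfo on a Linux host for anything mounted directly over a /proc/<pid> directory, which is the telltale left behind by the bind-mount hiding trick.

```python
# Illustrative only: look for the bind-mount hiding technique (MITRE ATT&CK
# T1564.013) by finding mounts placed over per-process /proc/<pid> directories.
# Field positions follow the standard /proc/self/mountinfo format.
import re

def suspicious_proc_mounts(mountinfo_path="/proc/self/mountinfo"):
    hidden = []
    with open(mountinfo_path) as f:
        for line in f:
            fields = line.split()
            mount_point = fields[4]  # fifth field is the mount point
            if re.fullmatch(r"/proc/\d+", mount_point):
                hidden.append(mount_point)
    return hidden

if __name__ == "__main__":
    for mp in suspicious_proc_mounts():
        print(f"Possible hidden process: {mp}")
```

An empty directory bind-mounted over /proc/<pid> makes that process invisible to tools like ps that enumerate /proc, while the mount table still records the overlay, which is what a check like this looks for.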

Group-IB didn’t say where the compromised switching equipment was located or how attackers managed to plant the Raspberry Pi. The attack was detected and shut down before UNC2891 was able to achieve its final goal of infecting the ATM switching network with the CakeTap backdoor.

In search of riches, hackers plant 4G-enabled Raspberry Pi in bank network Read More »