
SpaceX has built the machine to build the machine. But what about the machine?


SpaceX has built an impressive production site in Texas. Will Starship success follow?

A Starship upper stage is moved past the northeast corner of Starfactory in July 2025. Credit: SpaceX

STARBASE, Texas—I first visited SpaceX’s launch site in South Texas a decade ago. Driving down the pocked and barren two-lane road to its sandy terminus, I found only rolling dunes, a large mound of dirt, and a few satellite dishes that talked to Dragon spacecraft as they flew overhead.

A few years later, in mid-2019, the company had moved some of that dirt and built a small launch pad. The handful of SpaceX engineers working there at the time shared office space nearby in a tech hub building, “Stargate,” which the University of Texas Rio Grande Valley had proudly opened as a state-of-the-art technology center just weeks earlier. That summer, from Stargate’s second floor, engineers looked on as the Starhopper prototype made its first two flights a couple of miles away.

Over the ensuing years, as the company began assembling its Starship rockets on site, SpaceX first erected small tents, then much larger tents, and then towering high bays in which the vehicles were stacked. Starbase grew and evolved to meet the company’s needs.

All of this was merely a prelude to the end game: Starfactory. SpaceX opened this truly massive facility earlier this year. The sleek rocket factory is emblematic of the new Starbase: modern, gargantuan, spaceship-like.

To the consternation of some local residents and environmentalists, the rapid growth of Starbase has wiped out the small and eclectic community that existed here. And that brand new Stargate building that public officials were so excited about only a few years ago? SpaceX first took it over entirely and then demolished it. The tents are gone, too. For better or worse, in the name of progress, the SpaceX steamroller has rolled onward, paving all before it.

Starbase is even its own Texas city now. And if this were a medieval town, Starfactory would be the impenetrable fortress at its heart. In late May, I had a chance to go inside. The interior was super impressive, of course. Yet it could not quell some of the concerns I have about the future of SpaceX’s grand plans to send a fleet of Starships into the Solar System.

Inside the fortress

The main entrance to the factory lies at its northeast corner. From there, one walks into a sleek lobby that serves as a gateway into the main, cavernous section of the building. At this corner, there are three stories above the ground floor. These upper levels contain various offices and conference rooms, and the top floor also houses a launch control center.

Large windows from here offer a breathtaking view of the Starship launch site two miles up the road. A third-floor executive conference room has carpet of a striking rusty, reddish hue—mimicking the surface of Mars, naturally. A long, black table dominates the room, with 10 seats along each side, and one at the head.

An aerial overview of the Starship production site in South Texas earlier this year. The sprawling Starfactory is in the center. Credit: SpaceX

But the real attraction of these offices is the view to the other end. Each of the upper three floors has a balcony overlooking the factory floor. From there, it’s as if one stands at the edge of an ocean liner, gazing out to sea. In this case, the far wall is discernible, if only barely. Below, the factory floor is crammed with all manner of Starship parts: nose cones, grid fins, hot staging rings, and so much more. The factory emitted a steady din and hum as work proceeded on vehicles below.

The ultimate goal of this factory is to build one Starship rocket a day. This sounds utterly mad. For the entire Apollo program in the 1960s and 1970s, NASA built 15 Saturn V rockets. Over the course of more than three decades, NASA built and flew only five of the iconic Space Shuttle orbiters. SpaceX aims to build 365 even larger vehicles per year.

Wandering around the Starfactory, however, I found that this ambition no longer seemed so outlandish. The factory measures about 1 million square feet, roughly twice the size of SpaceX’s main Falcon 9 factory in Hawthorne, California. It feels like the company could build a lot of Starships here if needed.

During one of my visits to South Texas, in early 2020 just before the onset of the COVID-19 pandemic, SpaceX was building its first Starship rockets in football field-sized tents. At the time, SpaceX founder Elon Musk opined in an interview that building the factory might well be more difficult than building the rocket.

Here’s a view of SpaceX’s Starship production facilities, from the east side, in late February 2020. Credit: Eric Berger

“If you want to actually make something at reasonable volume, you have to build the machine that makes the machine, which mathematically is going to be vastly more complicated than the machine itself,” he said. “The thing that makes the machine is not going to be simpler than the machine. It’s going to be much more complicated, by a lot.”

Five years later, standing inside Starfactory, it seems clear that SpaceX has built the machine to build the machine—or at least it’s getting close.

But what happens if that machine is not ready for prime time?

A pretty bad year for Starship

SpaceX has not had a good run of things with the ambitious Starship vehicle this year. Three times, in January, March, and May, the vehicle took flight. And three times, the upper stage experienced significant problems during ascent, and the vehicle was lost on the ride up to space, or just after. These were the seventh, eighth, and ninth test flights of Starship, following three consecutive flights in 2024 during which the Starship upper stage made more or less nominal flights and controlled splashdowns in the Indian Ocean.

It’s difficult to view the consecutive failures this year—not to mention the explosion of another Starship vehicle during testing in June—as anything but a major setback for the program.

There can be no question that the Starship rocket, with its unprecedentedly large first stage and potentially reusable upper stage, is the most advanced and ambitious rocket humans have ever conceived, built, and flown. The failures this year, however, have led some space industry insiders to ask whether Starship is too ambitious.

My sources at SpaceX don’t believe so. They are frustrated by the run of problems this year, but they believe the fundamental design of Starship is sound and that they have a clear path to resolving the issues. The massive first stage has already been flown, landed, and re-flown. This is a huge step forward. But the sources also believe the upper stage issues can be resolved, especially with a new “Version 3” of Starship due to make its debut late this year or early in 2026.

The acid test will only come with upcoming flights. The vehicle’s tenth test flight is scheduled to take place no earlier than Sunday, August 24. It’s possible that SpaceX will fly one more “Version 2” Starship later this year before moving to the upgraded vehicle, with more powerful Raptor engines and lots of other changes to (hopefully) improve reliability.

SpaceX could certainly use a win. The Starship failures occur at a time when Musk has become embroiled in political controversy while feuding with the president of the United States. His actions have led some in government and private industry to question whether they should be doing business with SpaceX going forward.

It’s often said in sports that winning solves a lot of problems. For SpaceX, success with Starship would solve a lot of problems.

Next steps for Starship

The failures are frustrating and publicly embarrassing. But more importantly, they are a bottleneck for a lot of critical work SpaceX needs to do for Starship to reach its considerable potential. All of the technical progress the Starship program needs to make to deploy thousands of Starlink satellites, land NASA astronauts on the Moon, and send humans to Mars remains largely on hold.

Two of the most important objectives for the next flight require the Starship vehicle to fly a nominal mission. For several flights now, SpaceX engineers have dutifully prepared Starlink satellite simulators to test a Pez-like dispenser in space. And each Starship vehicle has carried about two dozen different tile experiments as the company attempts to build a rapidly reusable heat shield to protect Starship during atmospheric reentry.

The engineers are still waiting for the results of their experiments.

In the near term, SpaceX is hyper-focused on getting Starship working and starting the deployment of large Starlink satellites that will have the potential to unlock significant amounts of revenue. But this is just the beginning of the work that needs to happen for SpaceX to turn Starship into a deep-space vehicle capable of traveling to the Moon and Mars.

These steps include:

  • Reuse: Developing a rapidly reusable heat shield and landing and re-flying Starship upper stages
  • Prop transfer: Conducting a refueling test in low-Earth orbit to demonstrate the transfer of large amounts of propellant between Starships
  • Depots: Developing and testing cryogenic propellant depots to understand heating losses over time
  • Lunar landing: Landing a Starship successfully on the Moon, which is challenging due to the height of the vehicle and uneven terrain
  • Lunar launch: Demonstrating the capability of Starship, using liquid propellant, to launch safely from the lunar surface without infrastructure there
  • Mars transit: Demonstrating the operation of Starship over months and the capability to perform a powered landing on Mars.

Each of these steps is massively challenging and at least partly a novel exercise in aerospace. There will be a lot of learning, and almost certainly some failures, as SpaceX works through these technical milestones.

Some details about the Starship propellant transfer test, a key milestone that NASA and SpaceX had hoped to complete this year but now may tackle in 2026. Credit: NASA

SpaceX prefers a test, fly, and fix approach to developing hardware. This iterative approach has served the company well, allowing it to develop rockets and spacecraft faster and for less money than its competitors. But you cannot fly and fix hardware for the milestones above without getting the upper stage of Starship flying nominally.

That’s one reason why the Starship program has been so disappointing this year.

Then there are the politics

As SpaceX has struggled with Starship in 2025, its founder, Musk, has also had a turbulent run, from the presidential campaign trail to the pinnacle of political power in the White House, and then back out of President Trump’s inner circle. Along the way, he has made political enemies, and his public favorability ratings have fallen.

Amid the falling-out between Trump and Musk this spring and summer, the president ordered a review of SpaceX’s contracts. Nothing came of it, because government officials found that most of the services SpaceX offers to NASA, the US Department of Defense, and other federal agencies are vital.

However, multiple sources have told Ars that federal officials are looking for alternatives to SpaceX and have indicated they will seek to buy launches, satellite Internet, and other services from emerging competitors if available.

Starship’s troubles also come at a critical time in space policy. As part of its budget request for fiscal year 2026, the White House sought to terminate the production of NASA’s Space Launch System rocket and spacecraft after the Artemis III mission. The White House has also expressed an interest in sending humans to Mars, viewing the Moon as a stepping stone to the red planet.

Although there are several options in play, the most viable hardware for both a lunar and Mars human exploration program is Starship. If it works. If it continues to have teething pains, though, that makes it easier for Congress to continue funding NASA’s expensive rocket and spacecraft, as it would prefer to do.

What about Artemis and the Moon?

Starship’s “lost year” also has serious implications for NASA’s Artemis Moon program. As Ars reported this week, China is now likely to land on the Moon before NASA can return. Yes, the space agency has a nominal landing date of 2027 for the Artemis III mission, but no credible space industry officials believe that date is real. (It has already slipped multiple times from an original target of 2024.) Theoretically, a landing in 2028 remains feasible, but a more realistic over/under date for NASA is probably somewhere in the vicinity of 2030.

SpaceX is building the lunar lander for the Artemis III mission, a modified version of Starship. There is so much we don’t really know yet about this vehicle. For example, how many refuelings will it take to load a Starship with sufficient propellant to land on the Moon and take off? What will the vehicle’s controls look like, and will the landings be automated?

And here’s another one: How many people at SpaceX are actually working on the lunar version of Starship?

Publicly, Musk has said he doesn’t worry too much about China beating the United States back to the Moon. “I think the United States should be aiming for Mars, because we’ve already actually been to the Moon several times,” Musk said in an interview in late May. “Yeah, if China sort of equals that, I’m like, OK, sure, but that’s something that America did 56 years ago.”

Privately, Musk is highly critical of Artemis, saying NASA should focus on Mars. Certainly, that’s the long arc of history toward which SpaceX’s efforts are being bent. Although both the Moon and Mars versions of Starship require the vehicle to reach orbit and successfully refuel, there is a huge divergence in the technology and work required after that point.

It’s not at all clear that the Trump administration is seriously seeking to address this issue by providing SpaceX with carrots and sticks to move the lunar lander program forward. If Artemis is not a priority for Musk, how can it be for SpaceX?

This all creates a tremendous amount of uncertainty ahead of Sunday’s Starship launch. As Musk likes to say, “Excitement is guaranteed.”

Success would be better.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.


Having recovery and/or SSD problems after recent Windows updates? You’re not alone.

The other issue some users have been experiencing is potentially more serious, but also harder to track down. Tom’s Hardware has a summary of the problem: At some point after installing update KB5063878 on Windows 11 24H2, some users began noticing issues with large file transfers on some SSDs. For one user, while installing an update for Cyberpunk 2077, a game that requires dozens of gigabytes of storage, Windows abruptly stopped seeing the SSD the game was installed on.

The issues are apparently more pronounced on disks that are more than 60 percent full, when transferring at least 50GB of data. Most of the SSDs were visible again after a system reboot, though one—a 2TB Western Digital SA510 drive—didn’t come back after a reboot.

These issues could be specific to this user’s configuration, and the culprit may not be the Windows update. Microsoft has yet to add the SSD problem to its list of known issues with Windows, but the company confirmed to Ars that it was studying the complaints.

“We’re aware of these reports and are investigating with our partners,” a Microsoft spokesperson told Ars.

SSD controller manufacturer Phison told Tom’s Hardware that it was also looking into the problem.


Mammals that chose ants and termites as food almost never go back

Insects are more influential than we realize

By showing that ant- and termite-based diets evolved repeatedly, the study highlights the overlooked role of social insects in shaping biodiversity. “This work gives us the first real roadmap, and what really stands out is just how powerful a selective force ants and termites have been over the last 50 million years, shaping environments and literally changing the face of entire species,” Barden said.

However, according to the study authors, we still do not have a clear picture of how much of an impact insects have had on the history of life on our planet. Lots of lineages have been reshaped by organisms with outsize biomass—and today, ants and termites have a combined biomass exceeding that of all living wild mammals, giving them a massive evolutionary influence.

However, there’s also a flip side. Eight of the 12 myrmecophagous origins are represented by just a single species, meaning most of these lineages could be vulnerable if their insect food sources decline. As Barden put it, “In some ways, specializing in ants and termites paints a species into a corner. But as long as social insects dominate the world’s biomass, these mammals may have an edge, especially as climate change seems to favor species with massive colonies, like fire ants and other invasive social insects.”

For now, the study authors plan to keep exploring how ants, termites, and other social insects have shaped life over millions of years, not through controlled lab experiments, but by continuing to use nature itself as the ultimate evolutionary archive. “Finding accurate dietary information for obscure mammals can be tedious, but each piece of data adds to our understanding of how these extraordinary diets came to be,” Vida argued.

Evolution, 2025. DOI: 10.1093/evolut/qpaf121 (About DOIs)

Rupendra Brahambhatt is an experienced journalist and filmmaker. He covers science and culture news, and for the last five years, he has been actively working with some of the most innovative news agencies, magazines, and media brands operating in different parts of the globe.


China’s Guowang megaconstellation is more than another version of Starlink


“This is a strategy to keep the US from intervening… that’s what their space architecture is designed to do.”

Spectators take photos as a Long March 8A rocket carrying a group of Guowang satellites blasts off from the Hainan commercial launch site on July 30, 2025, in Wenchang, China. Credit: Liu Guoxing/VCG via Getty Images

US defense officials have long worried that China’s Guowang satellite network might give the Chinese military access to the kind of ubiquitous connectivity US forces now enjoy with SpaceX’s Starlink network.

It turns out the Guowang constellation could offer a lot more than a homemade Chinese alternative to Starlink’s high-speed consumer-grade broadband service. China has disclosed little information about the Guowang network, but there’s mounting evidence that the satellites may provide Chinese military forces a tactical edge in any future armed conflict in the Western Pacific.

The megaconstellation is managed by a secretive company called China SatNet, which was established by the Chinese government in 2021. SatNet has released little information since its formation, and the group doesn’t have a website. Chinese officials have not detailed any of the satellites’ capabilities or signaled any intention to market the services to consumers.

Another Chinese satellite megaconstellation in the works, called Qianfan, appears to be a closer analog to SpaceX’s commercial Starlink service. Qianfan satellites are flat in shape, making them easier to pack onto the tops of rockets before launch. This is a design approach pioneered by SpaceX with Starlink. The backers of the Qianfan network began launching the first of up to 1,300 broadband satellites last year.

Unlike Starlink, the Guowang network consists of satellites manufactured by multiple companies, and they launch on several types of rockets. On its face, the architecture taking shape in low-Earth orbit appears to be more akin to SpaceX’s military-grade Starshield satellites and the Space Development Agency’s future tranches of data relay and missile-tracking satellites.

Guowang, or “national network,” may also bear similarities to something the US military calls MILNET. Proposed in the Trump administration’s budget request for next year, MILNET will be a partnership between the Space Force and the National Reconnaissance Office (NRO). One of the design alternatives under review at the Pentagon is to use SpaceX’s Starshield satellites to create a “hybrid mesh network” that the military can rely on for a wide range of applications.

Picking up the pace

In recent weeks, China’s pace of launching Guowang satellites has approached that of Starlink. China has launched five groups of Guowang satellites since July 27, while SpaceX has launched six Starlink missions using its Falcon 9 rockets over the same period.

A single Falcon 9 launch can haul up to 28 Starlink satellites into low-Earth orbit, while China’s rockets have launched between five and 10 Guowang satellites per flight to altitudes three to four times higher. China has now placed 72 Guowang satellites into orbit since launches began last December, a small fraction of the 12,992-satellite fleet China has outlined in filings with the International Telecommunication Union.

The constellation described in China’s ITU filings will include one group of Guowang satellites between 500 and 600 kilometers (311 and 373 miles) in altitude, around the same altitude as Starlink. Another shell of Guowang satellites will fly roughly 1,145 kilometers (711 miles) above the Earth. So far, all of the Guowang satellites China has launched since last year appear to be heading for the higher shell.

This higher altitude limits the number of Guowang satellites China’s stable of launch vehicles can carry. On the other hand, fewer satellites are required for global coverage from the higher orbit.

A prototype Guowang satellite is seen prepared for encapsulation inside the nose cone of a Long March 12 rocket last year. This is one of the only views of a Guowang spacecraft China has publicly released. Credit: Hainan International Commercial Aerospace Launch Company Ltd.

SpaceX has already launched nearly 200 of its own Starshield satellites for the NRO to use for intelligence, surveillance, and reconnaissance missions. The next step, whether it’s the SDA constellation, MILNET, or something else, will seek to incorporate hundreds or thousands of low-Earth orbit satellites into real-time combat operations—things like tracking moving targets on the ground and in the air, targeting enemy vehicles, and relaying commands between allied forces. The Trump administration’s Golden Dome missile defense shield aims to extend real-time targeting to objects in the space domain.

In military jargon, the interconnected set of links needed to detect, track, target, and strike an adversary is called a kill chain or kill web. This is what US Space Force officials are pushing to develop with the Space Development Agency, MILNET, and other future space-based networks.

So where is the US military in building out this kill chain? The military has long had the ability to detect and track an adversary’s activities from space. Spy satellites have orbited the Earth since the dawn of the Space Age.

Much of the rest of the kill chain—like targeting and striking—remains forward work for the Defense Department. Many of the Pentagon’s existing capabilities are classified, but simply put, the multibillion-dollar satellite constellations the Space Force is building just for these purposes still haven’t made it to the launch pad. In some cases, they haven’t made it out of the lab.

Is space really the place?

The Space Development Agency is supposed to begin launching its first generation of more than 150 satellites later this year. These will put the Pentagon in a position to detect smaller, fainter ballistic and hypersonic missiles and provide targeting data for allied interceptors on the ground or at sea.

Space Force officials envision a network of satellites that can essentially control a terrestrial battlefield from orbit. The way future-minded commanders tell it, a fleet of thousands of satellites fitted with exquisite sensors and machine learning will first detect a moving target, whether it’s a land vehicle, aircraft, naval ship, or missile. Then, that spacecraft will transmit targeting data via a laser link to another satellite that can relay the information to a shooter on Earth.

US officials believe Guowang is a step toward integrating satellites into China’s own kill web. It might be easier for them to dismiss Guowang if it were simply a Chinese version of Starlink, but open-source information suggests it’s something more. Perhaps Guowang is more akin to megaconstellations being developed and deployed for the US Space Force and the National Reconnaissance Office.

If this is the case, China could have a head start on completing all the links for a celestial kill chain. The NRO’s Starshield satellites in space today are presumably focused on collecting intelligence. The Space Force’s megaconstellation of missile tracking, data relay, and command and control satellites is not yet in orbit.

Chinese media reports suggest the Guowang satellites could accommodate a range of instrumentation, including broadband communications payloads, laser communications terminals, synthetic aperture radars, and optical remote sensing payloads. This sounds a lot like a mix of SpaceX and the NRO’s Starshield fleet, the Space Development Agency’s future constellation, and the proposed MILNET program.

A Long March 5B rocket lifts off from the Wenchang Space Launch Site in China’s Hainan Province on August 13, 2025, with a group of Guowang satellites. Credit: Luo Yunfei/China News Service/VCG via Getty Images

In testimony before a Senate committee in June, the top general in the US Space Force said it is “worrisome” that China is moving in this direction. Gen. Chance Saltzman, the Chief of Space Operations, used China’s emergence as an argument for developing space weapons, euphemistically called “counter-space capabilities.”

“The space-enabled targeting that they’ve been able to achieve from space has increased the range and accuracy of their weapon systems to the point where getting anywhere close enough [to China] in the Western Pacific to be able to achieve military objectives is in jeopardy if we can’t deny, disrupt, degrade that… capability,” Saltzman said. “That’s the most pressing challenge, and that means the Space Force needs the space control counter-space capabilities in order to deny that kill web.”

The US military’s push to migrate many wartime responsibilities to space is not without controversy. The Trump administration wants to cancel purchases of new E-7 jets designed to serve as nerve centers in the sky, where Air Force operators receive signals about what’s happening in the air, on the ground, and in the water for hundreds of miles around. Instead, much of this responsibility would be transferred to satellites.

Some retired military officials, along with some lawmakers, argue against canceling the E-7. They say there’s too little confidence in when satellites will be ready to take over. If the Air Force goes ahead with the plan to cancel the E-7, the service intends to bridge the gap by extending the life of a fleet of Cold War-era E-3 Sentry airplanes, commonly known as AWACS (Airborne Warning and Control System).

But the high ground of space offers notable benefits. First, a proliferated network of satellites has global reach, and airplanes don’t. Second, satellites could do the job on their own, with some help from artificial intelligence and edge computing, removing humans from the line of fire. And finally, a large constellation is inherently resilient: an attack on one or even several satellites won’t meaningfully degrade US military capabilities.

In China, it takes a village

Brig. Gen. Anthony Mastalir, commander of US Space Forces in the Indo-Pacific region, told Ars last year that US officials are watching to see how China integrates satellite networks like Guowang into military exercises.

“What I find interesting is China continues to copy the US playbook,” Mastalir said. “So as you look at the success that the United States has had with proliferated architectures, immediately now we see China building their own proliferated architecture, not just the transport layer and the comm layer, but the sensor layer as well. You look at their pursuit of reusability in terms of increasing their launch capacity, which is currently probably one of their shortfalls. They have plans for a quicker launch tempo.”

A Long March 6A carries a group of Guowang satellites into orbit on July 27, 2025, from the Taiyuan Satellite Launch Center in north China’s Shanxi Province. China has used four different rocket configurations to place five groups of Guowang satellites into orbit in the last month. Credit: Wang Yapeng/Xinhua via Getty Images

China hasn’t recovered or reused an orbital-class booster yet, but several Chinese companies are working on it. SpaceX, meanwhile, continues to recycle its fleet of Falcon 9 boosters while simultaneously developing a massive super-heavy-lift rocket and churning out dozens of Starlink and Starshield satellites every week.

China doesn’t have its own version of SpaceX. In China, it’s taken numerous commercial and government-backed enterprises to reach a launch cadence that, so far this year, is a little less than half that of SpaceX. But the flurry of Guowang launches in the last few weeks shows that China’s satellite and rocket factories are picking up the pace.

Mastalir said China’s actions in the South China Sea, where it has laid claim to disputed islands near Taiwan and the Philippines, could extend farther from Chinese shores with the help of space-based military capabilities.

“Their specific goals are to be able to track and target US high-value assets at the time and place of their choosing,” he said. “That has started with an A2AD, an Anti-Access Area Denial strategy, which is extended to the first island chain and now the second island chain, and eventually all the way to the west coast of California.”

“The sensor capabilities that they’ll need are multi-orbital and diverse in terms of having sensors at GEO (geosynchronous orbit) and now increasingly massive megaconstellations at LEO (low-Earth orbit),” Mastalir said. “So we’re seeing all signs point to being able to target US aircraft carriers… high-value assets in the air like tankers, AWACs. This is a strategy to keep the US from intervening, and that’s what their space architecture is designed to do.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


Google unveils Pixel 10 series with improved Tensor G5 chip and a boatload of AI


The Pixel 10 series arrives with a power upgrade but no SIM card slot.

Google has shifted its product timeline in 2025. Android 16 dropped in May, an earlier release aimed at better lining up with smartphone launches. Google’s annual hardware refresh is also happening a bit ahead of the traditional October window. The company has unveiled its thoroughly leaked 2025 Pixel phones and watches, and you can preorder most of them today.

The new Pixel 10 phones don’t look much different from last year, but there’s an assortment of notable internal changes, and you might not like all of them. They have a new, more powerful Tensor chip (good), a lot more AI features (debatable), and no SIM card slot (bad). But at least the new Pixel Watch 4 won’t become e-waste if you break it.

Same on the outside, new on the inside

If you liked Google’s big Pixel redesign last year, there’s good news: Nothing has changed in 2025. The Pixel 10 series looks the same, right down to the almost identical physical dimensions. Aside from the new colors, the only substantial design change is the larger camera window on the Pixel 10 to accommodate the addition of a third sensor.

From left to right: Pixel 10, Pixel 10 Pro, Pixel 10 Pro Fold. Credit: Google

You won’t find the titanium frames or ceramic coatings present in Samsung and Apple lineups. The Pixel 10 phones have a 100 percent recycled aluminum frame, featuring a matte finish on the Pixel 10 and glossy finishes on the Pro phones. All models have Gorilla Glass Victus 2 panels on the front and back, and they’re IP68 rated for water and dust resistance.

The design remains consistent across all three flat phones. The base model and 10 Pro have 6.3-inch OLED screens, but the Pro gets a higher-resolution LTPO panel, which supports lower refresh rates to save power. The 10 Pro XL is LTPO, too, but jumps to 6.8 inches. These phones will be among the first Android phones with full support for the Qi 2 wireless charging standard, which is branded as “Pixelsnap” for the Pixel 10 line. They’ll work with Qi 2 magnetic accessories, as well as Google’s Pixelsnap chargers. Wireless charging tops out at 15 W on the Pixel 10 and 10 Pro, while the 10 Pro XL supports 25 W.

Specs at a glance: Google Pixel 10 series

Pixel 10 ($799): Google Tensor G5; 12GB RAM; 128GB or 256GB storage; 6.3-inch 1080×2424 OLED, 60-120 Hz, 3,000 nits; cameras: 48 MP wide with Macro Focus (f/1.7, 1/2-inch sensor), 13 MP ultrawide (f/2.2, 1/3.1-inch), 10.8 MP 5x telephoto (f/3.1, 1/3.2-inch), 10.5 MP selfie (f/2.2); Android 16; 4,970 mAh battery, up to 30 W wired and 15 W wireless (Pixelsnap) charging; Wi-Fi 6E, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, USB-C 2.0; 152.8×72.0×8.6 mm, 204 g; colors: Indigo, Frost, Lemongrass, Obsidian.

Pixel 10 Pro ($999): Google Tensor G5; 16GB RAM; 128GB, 256GB, or 512GB storage; 6.3-inch 1280×2856 LTPO OLED, 1-120 Hz, 3,300 nits; cameras: 50 MP wide with Macro Focus (f/1.68, 1/1.3-inch sensor), 48 MP ultrawide (f/1.7, 1/2.55-inch), 48 MP 5x telephoto (f/2.8, 1/2.55-inch), 42 MP selfie (f/2.2); Android 16; 4,870 mAh battery, up to 30 W wired and 15 W wireless (Pixelsnap) charging; Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 2.0; 152.8×72.0×8.6 mm, 207 g; colors: Moonstone, Jade, Porcelain, Obsidian.

Pixel 10 Pro XL ($1,199): Google Tensor G5; 16GB RAM; 128GB, 256GB, 512GB, or 1TB storage; 6.8-inch 1344×2992 LTPO OLED, 1-120 Hz, 3,300 nits; cameras (same as the 10 Pro): 50 MP wide with Macro Focus (f/1.68, 1/1.3-inch sensor), 48 MP ultrawide (f/1.7, 1/2.55-inch), 48 MP 5x telephoto (f/2.8, 1/2.55-inch), 42 MP selfie (f/2.2); Android 16; 5,200 mAh battery, up to 45 W wired and 25 W wireless (Pixelsnap) charging; Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 2.0; 162.8×76.6×8.5 mm, 232 g; colors: Moonstone, Jade, Porcelain, Obsidian.

Pixel 10 Pro Fold ($1,799): Google Tensor G5; 16GB RAM; 256GB, 512GB, or 1TB storage; external 6.4-inch 1080×2364 OLED, 60-120 Hz, 2,000 nits, and internal 8-inch 2076×2152 LTPO OLED, 1-120 Hz, 3,000 nits; cameras: 48 MP wide (f/1.7, 1/2-inch sensor), 10.5 MP ultrawide with Macro Focus (f/2.2, 1/3.4-inch), 10.8 MP 5x telephoto (f/3.1, 1/3.2-inch), 10.5 MP selfie (f/2.2, outer and inner); Android 16; 5,015 mAh battery, up to 30 W wired and 15 W wireless (Pixelsnap) charging; Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 2.0; folded 154.9×76.2×10.1 mm, unfolded 154.9×149.8×5.1 mm, 258 g; colors: Moonstone, Jade.
You may notice some minor changes to the bottom edge of the phones, which now feature large grilles for the speaker and microphone, but no SIM card slot. Is it on the side? The top? Nope and nope. In the US, Google’s new phones have no physical SIM slot at all, adopting the eSIM-only approach Apple “pioneered” on the iPhone 14. It has become standard practice that as soon as Apple removes something from its phones, like the headphone jack or the top bit of screen, everyone else follows suit within a year or two.

Google has refused to offer a clear rationale for this change, saying only that the new SIM-less design is its “cleanest yet.” So RIP to the physical SIM card. While eSIM can be convenient in some cases, it’s not as reliable as moving a physical piece of plastic between phones and may force you to interact with your carrier’s support agents more often. That said, Google now has a SIM transfer tool built into Android, which should spare you some of those headaches.

Pixel 10 Pro

Credit: Google

The Pixel 10, 10 Pro, and 10 Pro XL all have the pronounced camera bar running the full width of the back, giving the phones perfect stability when placed on a table. The base model Pixel 9 had the same wide and ultrawide sensors as the Pro phones, but the Pixel 10 steps down to a lesser 48 MP primary and 13 MP ultrawide. You get the new 10.8 MP 5x telephoto this year. However, that won’t be as capable as the 48 MP telephoto camera on the Pro phones.

The Pixel 10 Pro Fold also keeps the same design as last year’s phone, featuring an offset camera bump. However, when you drill down, you’ll find a few hardware changes. Google says the hinge has been redesigned to be “gearless,” allowing for the display to get a bit closer to that edge. The result is a small 0.1-inch boost in external display size (6.4 inches). The inner screen is still 8 inches, making it the largest screen on a foldable. Google also claims the hinge is more durable and notes this is the first foldable with IP68 water and dust resistance.

Pixel 10 Pro Fold

Strangely, this phone still has a physical SIM card slot, even in the US. It has moved from the bottom to the top edge, which Google says helped to optimize the internal components. As a result, the third-gen Google foldable gets a significant battery capacity boost, to just over 5,000 mAh versus 4,650 mAh in the 9 Pro Fold.

The Pixel 10 Pro Fold gets a camera array most similar to the base model Pixel 10, with a 48 MP primary, a 10.5 MP ultrawide, and a 10.8 MP 5x telephoto. The camera sensors are also relegated to an off-center block in the corner of the back panel, so you lose the tabletop stability from the flat models.

A Tensor from TSMC

Google released its first custom Arm chip in the Pixel 6 and has made iterative improvements in each subsequent generation. The Tensor G5 in the Pixel 10 line is the biggest upgrade yet, according to Google. As rumored, this chip is manufactured by TSMC instead of Samsung, using the latest 3 nm process node. It’s an 8-core chip with support for UFS 4 storage and LPDDR5x memory. Google has shied away from detailing the specific CPU cores. All we know right now is that there are eight cores, one of which is a “prime” core, five are mid-level, and two are efficiency cores. Similarly, the GPU performance is unclear. This is one place that Google’s Tensor chips have noticeably trailed the competition, and the company only says its internal testing shows games running “very well” on the Tensor G5.

Tensor G5 in the Pixel 10 will reportedly deliver a 34 percent boost in CPU performance, which is significant. However, even giving Google the benefit of the doubt, a 34 percent improvement would still leave the Tensor G5 trailing Qualcomm’s Snapdragon 8 Elite in raw speed. Google is much more interested in the new TPU, which is 60 percent faster for AI workloads than last year’s. Tensor will also power new AI-enhanced image processing, which means some photos straight out of the camera will have C2PA labeling indicating they are AI-edited. That’s an interesting change that will require hands-on testing to understand the implications.

The more powerful TPU runs the largest version of Gemini Nano yet, clocking in at 4 billion parameters. This model, designed in partnership with the team at DeepMind, is twice as efficient and 2.6 times faster than Gemini Nano models running on the Tensor G4. The context window (a measure of how much data you can put into the model) now sits at 32,000 tokens, almost three times more than last year.

Every new smartphone is loaded with AI features these days, but they can often feel cobbled together. Google is laser-focused on using the Tensor chip for on-device AI experiences, which it says number more than 20 on the Pixel 10 series. For instance, the new Magic Cue feature will surface contextual information in phone calls and messages when you need it, and the Journal is a place where you can use AI to explore your thoughts and personal notes. Tensor G5 also enables real-time Voice Translation on calls, which transforms the speaker’s own voice instead of inserting a robot voice. All these features run entirely on the phone without sending any data to the cloud.

Finally, a repairable Pixel Watch

Since Google finally released its own in-house smartwatch, there has been one glaring issue: zero repairability. The Pixel Watch line has been comfortable enough to wear all day and night, but that just makes it easier to damage. So much as a scratch, and you’re out of luck, with no parts or service available.

Google says the fourth-generation watch addresses this shortcoming. The Pixel Watch 4 comes in the same 41 mm and 45 mm sizes as last year’s watch, but the design has been tweaked to make it repairable at last. The company says the watch’s internals are laid out in a way that makes it easier to disassemble, and there’s a new charging system that won’t interfere with repairs. However, that means another new watch charging standard, Google’s third in four generations.

Credit: Google

The new charger is a small dock that attaches to the side, holding the watch up so it’s visible on your desk. It can show upcoming alarms, battery percentage, or the time (duh, it’s a watch). It’s about 25 percent faster to charge compared to last year’s model, too. The smaller watch has a 325 mAh battery, and the larger one is 455 mAh. In both cases, these are marginally larger than the Pixel Watch 3. Google says the 41 mm will run 30 hours on a charge, and the 45 mm manages 40 hours.

The OLED panel under the glass now conforms to the Pixel Watch 4’s curvy aesthetic. Rather than being a flat panel under curved glass, the OLED now follows the domed shape. Google says the “Actua 360” display features 3,000 nits of brightness, a 50 percent improvement over last year’s wearable. The bezel around the screen is also 16 percent slimmer than last year. It runs a Snapdragon W5 Gen 2, which is apparently 25 percent faster and uses half the power of the Gen 1 chip used in the Watch 3.

Naturally, Google has also integrated Gemini into its new watch. It has “raise-to-talk” functionality, so you can just lift your wrist to begin talking to the AI (if you want that). The Pixel Watch 4 also boasts an improved speaker and haptics, which come into play when interacting with Gemini.

Pricing and availability

If you have a Pixel 9, there isn’t much reason to run out and buy a Pixel 10. That said, you can preorder Google’s new flat phones today. Pricing remains the same as last year, starting at $799 for the Pixel 10. The Pixel 10 Pro keeps the same size, adding a better camera setup and screen for $999. The largest Pixel 10 Pro XL retails for $1,199. The phones will ship on August 28.

If foldables are more your speed, you’ll have to wait a bit longer. The Pixel 10 Pro Fold won’t arrive until October 9, but it won’t see a price hike, either. The $1,799 price tag is still quite steep, even if Samsung’s new foldable is $200 more.

The Pixel Watch 4 is also available for preorder today, with availability on August 28 as well. The 41 mm will stay at $349, and the 45 mm is $399. If you want the LTE versions, you’ll add $100 to those prices.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.


Microsoft and Asus’ answers to SteamOS and the Steam Deck launch on October 16

Asus and Microsoft will be launching their ROG Xbox Ally series of handheld gaming PCs starting October 16, according to an Asus announcement that went out today.

An Xbox-branded extension of Asus’ existing ROG Ally handheld line, the basic ROG Xbox Ally and the more powerful ROG Xbox Ally X both run a version of Windows 11 Home that’s been redesigned with a controller-first Xbox-style user interface. The idea is to preserve the wide game compatibility of Windows—and the wide compatibility with multiple storefronts, including Microsoft’s own, Valve’s Steam, the Epic Games Store, and more—while turning off all of the extra Windows desktop stuff and saving system resources. (This also means that, despite the Xbox branding, these handhelds play Windows PC games and not the Xbox versions.)

Microsoft and Asus initially announced the handhelds in June. Microsoft still isn’t sharing pricing information for either console, so it’s hard to say how their specs and features will stack up against the Steam Deck (starting at $399 for the LCD version, $549 for OLED), Nintendo’s Switch 2 ($450), or past Asus handhelds like the ROG Ally X ($800).

Both consoles share a 7-inch, 1080p IPS display with a 120 Hz refresh rate, Wi-Fi 6E, and Bluetooth 5.4 support, but their internals are quite a bit different. The lower-end Xbox Ally uses an AMD Ryzen Z2 A chip with a 4-core Zen 2-based CPU, an eight-core RDNA2-based GPU, 512GB of storage, and 16GB of LPDDR5X-6400—specs nearly identical to Valve’s 3-year-old Steam Deck. The Xbox Ally X includes a more interesting Ryzen AI Z2 Extreme with an 8-core Zen 5 CPU, a 16-core RDNA3.5 GPU, 1TB of storage, 24GB of LPDDR5X-8000, and a built-in neural processing unit (NPU).

The beefier hardware comes with a bigger battery—80 WHr in the Ally X, compared to 60 WHr in the regular Ally—and that also makes the Ally X around a tenth of a pound (or 45 grams) heavier than the Ally.


NASA’s acting chief calls for the end of Earth science at the space agency

Sean Duffy, the acting administrator of NASA for a little more than a month, has vowed to make the United States great in space.

With a background as a US Congressman, reality TV star, and television commentator, Duffy did not come to the position with a deep well of knowledge about spaceflight. He also already had a lot on his plate, serving as the secretary of transportation, a Cabinet-level position that oversees 55,000 employees across 13 agencies.

Nevertheless, Duffy is putting his imprint on the space agency, seeking to emphasize the agency’s human exploration plans, including the development of a lunar base, and ending NASA’s efforts to study planet Earth and its changing climate.

Duffy has not spoken much with reporters who cover the space industry, but he has been a frequent presence on Fox News networks, where he previously worked as a host. On Thursday, he made an 11-minute appearance to discuss NASA on “Mornings with Maria,” a FOX Business show hosted by Maria Bartiromo.

NASA should explore, he says

During this appearance, Duffy talked up NASA’s plans to establish a permanent presence on the Moon and his push to develop a nuclear reactor that could provide power there. He also emphasized his desire to end NASA’s focus on studying the Earth and understanding how the planet’s surface and atmosphere are changing. This shift has been a priority of the Trump Administration at other federal agencies.

“All the climate science, and all of the other priorities that the last administration had at NASA, we’re going to move aside, and all of the science that we do is going to be directed towards exploration, which is the mission of NASA,” Duffy said during the appearance. “That’s why we have NASA, to explore, not to do all of these Earth sciences.”


Study: Social media probably can’t be fixed


“The [structural] mechanism producing these problematic outcomes is really robust and hard to resolve.”

Credit: Aurich Lawson | Getty Images

It’s no secret that much of social media has become profoundly dysfunctional. Rather than bringing us together into one utopian public square and fostering a healthy exchange of ideas, these platforms too often create filter bubbles or echo chambers. A small number of high-profile users garner the lion’s share of attention and influence, and the algorithms designed to maximize engagement end up merely amplifying outrage and conflict, ensuring the dominance of the loudest and most extreme users—thereby increasing polarization even more.

Numerous platform-level intervention strategies have been proposed to combat these issues, but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Co-authors Petter Törnberg and Maik Larooij of the University of Amsterdam wanted to learn more about the mechanisms that give rise to the worst aspects of social media: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. So they combined standard agent-based modeling with large language models (LLMs), essentially creating little AI personas to simulate online social media behavior. “What we found is that we didn’t need to put any algorithms in, we didn’t need to massage the model,” Törnberg told Ars. “It just came out of the baseline model, all of these dynamics.”

They then tested six different intervention strategies that social scientists have proposed to counter those effects: switching to chronological or randomized feeds; inverting engagement-optimization algorithms to reduce the visibility of highly reposted sensational content; boosting the diversity of viewpoints to broaden users’ exposure to opposing political views; using “bridging algorithms” to elevate content that fosters mutual understanding rather than emotional provocation; hiding social statistics like repost and follower counts to reduce social influence cues; and removing biographies to limit exposure to identity-based signals.
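To give a sense of what these interventions mean mechanically, here is a brief, hypothetical sketch of three of them expressed as feed-ranking rules. The post fields used here (`timestamp`, `engagement`, `bridging_score`) are illustrative assumptions, not the study’s actual implementation.

```python
# Illustrative only: each function takes a list of post dicts and returns them
# in the order a user's feed would display them.

def chronological_feed(posts):
    # Newest first, ignoring engagement signals entirely.
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def inverted_engagement_feed(posts):
    # Down-rank the most reposted, sensational content instead of boosting it.
    return sorted(posts, key=lambda p: p["engagement"])

def bridging_feed(posts):
    # Elevate content scored as fostering understanding across partisan lines,
    # rather than content that provokes the strongest reactions.
    return sorted(posts, key=lambda p: p["bridging_score"], reverse=True)
```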

The results were far from encouraging. Only some interventions showed modest improvements. None were able to fully disrupt the fundamental mechanisms producing the dysfunctional effects. In fact, some interventions actually made the problems worse. For example, chronological ordering had the strongest effect on reducing attention inequality, but there was a tradeoff: It also intensified the amplification of extreme content. Bridging algorithms significantly weakened the link between partisanship and engagement and modestly improved viewpoint diversity, but it also increased attention inequality. Boosting viewpoint diversity had no significant impact at all.

So is there any hope of finding effective intervention strategies to combat these problematic aspects of social media? Or should we nuke our social media accounts altogether and go live in caves? Ars caught up with Törnberg for an extended conversation to learn more about these troubling findings.

Ars Technica: What drove you to conduct this study?

Petter Törnberg: For the last 20 years or so, there has been a ton of research on how social media is reshaping politics in different ways, almost always using observational data. But in the last few years, there’s been a growing appetite for moving beyond just complaining about these things and trying to see how we can be a bit more constructive. Can we identify how to improve social media and create online spaces that are actually living up to those early promises of providing a public sphere where we can deliberate and debate politics in a constructive way?

The problem with using observational data is that it’s very hard to test counterfactuals to implement alternative solutions. So one kind of method that has existed in the field is agent-based simulations and social simulations: create a computer model of the system and then run experiments on that and test counterfactuals. It is useful for looking at the structure and emergence of network dynamics.

But at the same time, those models represent agents as simple rule followers or optimizers, and that doesn’t capture anything of the cultural world or politics or human behavior. I’ve always been of the controversial opinion that those things actually matter, especially for online politics. We need to study both the structural dynamics of network formations and the patterns of cultural interaction.

Ars Technica: So you developed this hybrid model that combines LLMs with agent-based modeling.

Petter Törnberg: That’s the solution that we find to move beyond the problems of conventional agent-based modeling. Instead of having this simple rule of followers or optimizers, we use AI or LLMs. It’s not a perfect solution—there’s all kind of biases and limitations—but it does represent a step forward compared to a list of if/then rules. It does have something more of capturing human behavior in a more plausible way. We give them personas that we get from the American National Election Survey, which has very detailed questions about US voters and their hobbies and preferences. And then we turn that into a textual persona—your name is Bob, you’re from Massachusetts, and you like fishing—just to give them something to talk about and a little bit richer representation.

And then they see the random news of the day, and they can choose to post the news, read posts from other users, repost them, or they can choose to follow users. If they choose to follow users, they look at their previous messages, look at their user profile.

Our idea was to start with the minimal bare-bones model and then add things to try to see if we could reproduce these problematic consequences. But to our surprise, we actually didn’t have to add anything because these problematic consequences just came out of the bare bones model. This went against our expectations and also what I think the literature would say.
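For readers who want to picture the setup, here is a minimal, hypothetical sketch of the kind of LLM-driven agent loop Törnberg describes. It is an illustration under stated assumptions: the `Agent` fields, the action set, and the `llm_choose_action` stand-in are not the authors’ actual code, which prompts a real language model with each persona.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One simulated user with a survey-derived persona."""
    name: str
    persona: str  # e.g., "Your name is Bob, you're from Massachusetts, and you like fishing."
    following: set = field(default_factory=set)
    feed: list = field(default_factory=list)

def llm_choose_action(agent, news_item, feed):
    """Stand-in for an LLM call. A real implementation would prompt a language
    model with the agent's persona, the day's news, and the visible feed, then
    parse the model's chosen action. Random choice keeps this sketch runnable."""
    options = [("post", news_item), ("ignore", None)]
    for author, text in feed:
        options.append(("repost", text))
        options.append(("follow", author))
    return random.choice(options)

def simulation_step(agents, news_of_the_day, global_posts):
    """One round: every agent sees the day's news and its feed, then acts."""
    for agent in agents:
        action, payload = llm_choose_action(agent, news_of_the_day, agent.feed)
        if action in ("post", "repost"):
            global_posts.append((agent.name, payload))
        elif action == "follow":
            agent.following.add(payload)
    # Rebuild each feed from followed users' recent posts (chronological baseline).
    for agent in agents:
        agent.feed = [p for p in global_posts if p[0] in agent.following][-20:]

# Example: two agents, one news item, one simulation round.
agents = [
    Agent("Bob", "Your name is Bob, you're from Massachusetts, and you like fishing."),
    Agent("Alice", "Your name is Alice, you're from Texas, and you follow politics closely."),
]
posts = []
simulation_step(agents, "Today's headline: ...", posts)
```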

Ars Technica: I’m skeptical of AI in general, particularly in a research context, but there are very specific instances where it can be extremely useful. This strikes me as one of them, largely because your basic model proved to be so robust. You got the same dynamics without introducing anything extra.

Petter Törnberg: Yes. It’s been a big conversation in social science over the last two years or so. There’s a ton of interest in using LLMs for social simulation, but no one has really figured out for what or how it’s going to be helpful, or how we’re going to get past these problems of validity and so on. The kind of approach that we take in this paper is building on a tradition of complex systems thinking. We imagine very simple models of the human world and try to capture very fundamental mechanisms. It’s not really aiming to be realistic or a precise, complete model of human behavior.

I’ve been one of the more critical people of this method, to be honest. At the same time, it’s hard to imagine any other way of studying these kinds of dynamics where we have cultural and structural aspects feeding back into each other. But I still have to take the findings with a grain of salt and realize that these are models, and they’re capturing a kind of hypothetical world—a spherical cow in a vacuum. We can’t predict what someone is going to have for lunch on Tuesday, but we can capture broader mechanisms, and we can see how robust those mechanisms are. We can see whether they’re stable, unstable, which conditions they emerge in, and the general boundaries. And in this case, we found a mechanism that seems to be very robust, unfortunately.

Ars Technica: The dream was that social media would help revitalize the public sphere and support the kind of constructive political dialogue that your paper deems “vital to democratic life.” That largely hasn’t happened. What are the primary negative unexpected consequences that have emerged from social media platforms?

Petter Törnberg: First, you have echo chambers or filter bubbles. The risk there is that if you want to have a functioning political conversation, functioning deliberation, you need to do that across the partisan divide. If you’re only having a conversation with people who already agree with each other, that’s not enough. There’s debate on how widespread echo chambers are online, but it is quite established that there are a lot of spaces online that aren’t very constructive because there are only people from one political side. So that’s one ingredient that you need: a diversity of opinion, a diversity of perspective.

The second one is that the deliberation needs to be among equals; people need to have more or less the same influence in the conversation. It can’t be completely controlled by a small, elite group of users. This is also something that people have pointed to on social media: It has a tendency of creating these influencers because attention attracts attention. And then you have a breakdown of conversation among equals.

The final one is what I call (based on Chris Bail’s book) the social media prism. The more extreme users tend to get more attention online. This is often discussed in relation to engagement algorithms, which tend to identify the type of content that most upsets us and then boost that content. I refer to it as a “trigger bubble” instead of the filter bubble. They’re trying to trigger us as a way of making us engage more so they can extract our data and keep our attention.

Ars Technica: Your conclusion is that there’s something within the structural dynamics of the network itself that’s to blame—something fundamental to the construction of social networks that makes these extremely difficult problems to solve.

Petter Törnberg: Exactly. It comes from the fact that we’re using these AI models to capture a richer representation of human behavior, which allows us to see something that wouldn’t really be possible using conventional agent-based modeling. There have been previous models looking at the growth of social networks on social media. People choose to retweet or not, and we know that action tends to be very reactive. We tend to be very emotional in that choice. And it tends to be a highly partisan and polarized type of action. You hit retweet when you see someone being angry about something, or doing something horrific, and then you share that. It’s well-known that this leads to toxic, more polarized content spreading more.

But what we find is that it’s not just that this content spreads; it also shapes the network structures that form. So there’s feedback between the affective, emotional act of choosing to retweet something and the network structure that emerges. And that network structure, in turn, feeds back into what content you see, resulting in a toxic network. The defining feature of an online social network is that you have these posting, reposting, and following dynamics—it’s quite fundamental. That alone seems to be enough to drive these negative outcomes.

Ars Technica: I was frankly surprised at the ineffectiveness of the various intervention strategies you tested. But it does seem to explain the Bluesky conundrum. Bluesky has no algorithm, for example, yet the same dynamics still seem to emerge. I think Bluesky’s founders genuinely want to avoid those dysfunctional issues, but they might not succeed, based on this paper. Why are such interventions so ineffective? 

Petter Törnberg: We’ve been discussing whether these things are due to the platforms doing evil things with algorithms or whether we as users are choosing that we want a bad environment. What we’re saying is that it doesn’t have to be either of those. These are often unintended outcomes of interactions based on underlying rules. It’s not necessarily because the platforms are evil; it’s not necessarily because people want to be in toxic, horrible environments. It just follows from the structure that we’re providing.

We tested six different interventions. Google has been trying to make social media less toxic and recently released a newsfeed algorithm based on the content of the text. So that’s one example. We’re also trying to do more subtle interventions because often you can find a certain way of nudging the system so it switches over to healthier dynamics. Some of them have moderate or slightly positive effects on one of the attributes, but then they often have negative effects on another attribute, or they have no impact whatsoever.

I should say also that these are very extreme interventions in the sense that, if you depend on making money from your platform, you probably don’t want to implement them, because they would probably make it really boring to use. It’s like showing users the least influential accounts and the least retweeted messages on the platform. Even so, it doesn’t really make a difference in changing the basic outcomes. What we take from that is that the mechanism producing these problematic outcomes is really robust and hard to resolve given the basic structure of these platforms.

Ars Technica: So how might one go about building a successful social network that doesn’t have these problems? 

Petter Törnberg: There are several directions where you could imagine going, but there’s also the constraint of what people will actually use. Think back to the early Internet, like ICQ. ICQ had this feature where you could just connect to a random person. I loved it when I was a kid. I would talk to random people all over the world. I was 12, in the countryside on a small island in Sweden, and I was talking to someone from Arizona, living a different life. I don’t know how successful that would be these days, the Internet having become a lot less innocent than it was.

For instance, we can focus on the question of inequality of attention, a very well-studied and robust feature of these networks. I personally thought we would be able to address it with our interventions, but attention draws attention, and this leads to a power law distribution, where 1 percent [of users] dominates the entire conversation. We know the conditions under which those power laws emerge. This is one of the main outcomes of social network dynamics: extreme inequality of attention.

But in social science, we always teach that everything is a normal distribution. The move from studying the conventional social world to studying the online social world means that you’re moving from these nice normal distributions to these horrible power law distributions. Those are the outcomes of having social networks where the probability of connecting to someone depends on how many previous connections they have. If we want to get rid of that, we probably have to move away from the social network model and have some kind of spatial model or group-based model that makes things a little bit more local, a little bit less globally interconnected.
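The mechanism Törnberg points to—the probability of gaining a new follower growing with the followers you already have—is classic preferential attachment. A toy simulation (my illustration, not code from the paper) shows how quickly that concentrates attention in a tiny elite:

```python
# Toy preferential-attachment sketch (illustrative, not from the paper):
# each new arrival follows an existing user with probability proportional
# to that user's current follower count, producing a heavy-tailed
# distribution of attention.
import random

followers = [1, 1]                      # seed two users with one follower each
for _ in range(10_000):
    # Pick an existing user weighted by follower count, then add a newcomer.
    target = random.choices(range(len(followers)), weights=followers)[0]
    followers[target] += 1
    followers.append(1)

followers.sort(reverse=True)
top1 = sum(followers[: len(followers) // 100])
print(f"top 1% of users hold {top1 / sum(followers):.0%} of all followers")
```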

Ars Technica: It sounds like you’d want to avoid those big influential nodes that play such a central role in a large, complex global network. 

Petter Törnberg: Exactly. I think that having those global networks and structures fundamentally undermines the possibility of the kind of conversations that political scientists and political theorists traditionally talked about when discussing the public square. They were talking about social interaction in a coffee house or a tea house, or reading groups and so on. People thought the Internet was going to be precisely that. It’s very much not that. The dynamics are fundamentally different because of those structural differences. We shouldn’t expect to get a coffee house deliberation structure when we have a global social network where everyone is connected to everyone. It is difficult to imagine a functional politics building on that.

Ars Technica: I want to come back to your comment on the power law distribution, how 1 percent of people dominate the conversation, because I think that is something that most users routinely forget. The horrible things we see people say on the Internet are not necessarily indicative of the vast majority of people in the world. 

Petter Törnberg: For sure. That is capturing two aspects. The first is the social media prism, where the perspective we get of politics when we see it through the lens of social media is fundamentally different from what politics actually is. It seems much more toxic, much more polarized. People seem a little bit crazier than they really are. It’s a very well-documented aspect of the rise of polarization: People have a false perception of the other side. Most people have fairly reasonable and fairly similar opinions. The actual polarization is lower than the perceived polarization. And that arguably is a result of social media, how it misrepresents politics.

And then we see this very small group of users that become very influential who often become highly visible as a result of being a little bit crazy and outrageous. Social media creates an incentive structure that is really central to reshaping not just how we see politics but also what politics is, which politicians become powerful and influential, because it is controlling the distribution of what is arguably the most valuable form of capital of our era: attention. Especially for politicians, being able to control attention is the most important thing. And since social media creates the conditions of who gets attention or not, it creates an incentive structure where certain personalities work better in a way that’s just fundamentally different from how it was in previous eras.

Ars Technica: There are those who have sworn off social media, but it seems like simply not participating isn’t really a solution, either.

Petter Törnberg: No. First, even if you only read, say, The New York Times, that newspaper is still reshaped by what works on social media—the social media logic. I had a student who did a little project this past year showing that as social media became more influential, the headlines of The New York Times became more clickbaity and adapted to the style of what worked on social media. So conventional media and our very culture are being transformed.

But more than that, as I was just saying, it’s the type of politicians, it’s the type of people who are empowered—it’s the entire culture. Those are the things that are being transformed by the power of the incentive structures of social media. It’s not like, “These are things that are happening on social media, and this is the rest of the world.” It’s all entangled, and somehow social media has become the cultural engine that is shaping our politics and society in very fundamental ways. Unfortunately.

Ars Technica: I usually like to say that technological tools are fundamentally neutral and can be used for good or ill, but this time I’m not so sure. Is there any hope of finding a way to take the toxic and turn it into a net positive?

Petter Törnberg: What I would say to that is that we are at a crisis point with the rise of LLMs and AI. I have a hard time seeing the contemporary model of social media continuing to exist under the weight of LLMs and their capacity to mass-produce false information or information that optimizes these social network dynamics. We already see a lot of actors—encouraged by the monetization schemes of platforms like X—using AI to produce content that just seeks to maximize attention. As AI models become more powerful, that content—misinformation, often highly polarized—is going to take over. I have a hard time seeing the conventional social media model surviving that.

We’ve already seen people retreating, in part, to credible brands and seeking out gatekeepers. Young people, especially, are going into WhatsApp groups and other closed communities. Of course, there’s misinformation from social media leaking into those chats also. But these kinds of crisis points at least offer the hope that the situation will change. I wouldn’t bet that it changes for the better. You wanted me to sound positive, so I tried my best. Maybe it’s actually “good riddance.”

Ars Technica: So let’s just blow up all the social media networks. It still won’t be better, but at least we’ll have different problems.

Petter Törnberg: Exactly. We’ll find a new ditch.

DOI: arXiv, 2025. 10.48550/arXiv.2508.03385  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Study: Social media probably can’t be fixed Read More »

why-it’s-a-mistake-to-ask-chatbots-about-their-mistakes

Why it’s a mistake to ask chatbots about their mistakes


The only thing I know is that I know nothing

The tendency to ask AI bots to explain themselves reveals widespread misconceptions about how they work.

When something goes wrong with an AI assistant, our instinct is to ask it directly: “What happened?” or “Why did you do that?” It’s a natural impulse—after all, if a human makes a mistake, we ask them to explain. But with AI models, this approach rarely works, and the urge to ask reveals a fundamental misunderstanding of what these systems are and how they operate.

A recent incident with Replit’s AI coding assistant perfectly illustrates this problem. When the AI tool deleted a production database, user Jason Lemkin asked it about rollback capabilities. The AI model confidently claimed rollbacks were “impossible in this case” and that it had “destroyed all database versions.” This turned out to be completely wrong—the rollback feature worked fine when Lemkin tried it himself.

And after xAI recently reversed a temporary suspension of the Grok chatbot, users asked it directly for explanations. It offered multiple conflicting reasons for its absence, some of which were controversial enough that NBC reporters wrote about Grok as if it were a person with a consistent point of view, titling an article, “xAI’s Grok offers political explanations for why it was pulled offline.”

Why would an AI system provide such confidently incorrect information about its own capabilities or mistakes? The answer lies in understanding what AI models actually are—and what they aren’t.

There’s nobody home

The first problem is conceptual: You’re not talking to a consistent personality, person, or entity when you interact with ChatGPT, Claude, Grok, or Replit. These names suggest individual agents with self-knowledge, but that’s an illusion created by the conversational interface. What you’re actually doing is guiding a statistical text generator to produce outputs based on your prompts.

There is no consistent “ChatGPT” to interrogate about its mistakes, no singular “Grok” entity that can tell you why it failed, no fixed “Replit” persona that knows whether database rollbacks are possible. You’re interacting with a system that generates plausible-sounding text based on patterns in its training data (usually trained months or years ago), not an entity with genuine self-awareness or system knowledge that has been reading everything about itself and somehow remembering it.

Once an AI language model is trained (which is a laborious, energy-intensive process), its foundational “knowledge” about the world is baked into its neural network and is rarely modified. Any external information comes from a prompt supplied by the chatbot host (such as xAI or OpenAI), the user, or a software tool the AI model uses to retrieve external information on the fly.

In the case of Grok above, the chatbot’s answer would most likely have been based on conflicting reports it found in a search of recent social media posts (using an external tool to retrieve that information), rather than any kind of self-knowledge you might expect from a human with the power of speech. Beyond that, it will likely just make something up based on its text-prediction capabilities. So asking it why it did what it did will yield no useful answers.

The impossibility of LLM introspection

Large language models (LLMs) alone cannot meaningfully assess their own capabilities for several reasons. They generally lack any introspection into their training process, have no access to their surrounding system architecture, and cannot determine their own performance boundaries. When you ask an AI model what it can or cannot do, it generates responses based on patterns it has seen in training data about the known limitations of previous AI models—essentially providing educated guesses rather than factual self-assessment about the current model you’re interacting with.

A 2024 study by Binder et al. demonstrated this limitation experimentally. While AI models could be trained to predict their own behavior in simple tasks, they consistently failed at “more complex tasks or those requiring out-of-distribution generalization.” Similarly, research on “Recursive Introspection” found that without external feedback, attempts at self-correction actually degraded model performance—the AI’s self-assessment made things worse, not better.

This leads to paradoxical situations. The same model might confidently claim impossibility for tasks it can actually perform, or conversely, claim competence in areas where it consistently fails. In the Replit case, the AI’s assertion that rollbacks were impossible wasn’t based on actual knowledge of the system architecture—it was a plausible-sounding confabulation generated from training patterns.

Consider what happens when you ask an AI model why it made an error. The model will generate a plausible-sounding explanation because that’s what the pattern completion demands—there are plenty of examples of written explanations for mistakes on the Internet, after all. But the AI’s explanation is just another generated text, not a genuine analysis of what went wrong. It’s inventing a story that sounds reasonable, not accessing any kind of error log or internal state.

Unlike humans who can introspect and assess their own knowledge, AI models don’t have a stable, accessible knowledge base they can query. What they “know” only manifests as continuations of specific prompts. Different prompts act like different addresses, pointing to different—and sometimes contradictory—parts of their training data, stored as statistical weights in neural networks.

This means the same model can give completely different assessments of its own capabilities depending on how you phrase your question. Ask “Can you write Python code?” and you might get an enthusiastic yes. Ask “What are your limitations in Python coding?” and you might get a list of things the model claims it cannot do—even if it regularly does them successfully.

The randomness inherent in AI text generation compounds this problem. Even with identical prompts, an AI model might give slightly different responses about its own capabilities each time you ask.
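One way to picture that variability: language models sample from a probability distribution over possible continuations, so even a fixed prompt can yield different claims about what the model can do. The candidate answers, their scores, and the temperature below are invented purely for illustration.

```python
# Toy illustration of sampling randomness: the same prompt, sampled several
# times at nonzero temperature, can yield different self-assessments.
# The candidate answers and scores are made up for illustration only.
import math
import random

# Invented score distribution for the prompt "Can you write Python code?"
logits = {
    "Yes, I can write Python code.": 1.2,
    "I'm not able to write code.": 0.9,
    "I can try, but with limitations.": 0.4,
}

def sample(logits, temperature=1.0):
    # Softmax over the (made-up) scores, then draw one continuation at random.
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

for _ in range(3):
    print(sample(logits, temperature=0.9))  # output may differ on every run
```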

Other layers also shape AI responses

Even if a language model somehow had perfect knowledge of its own workings, other layers of an AI chatbot application might be completely opaque to it. Modern AI assistants like ChatGPT aren’t single models but orchestrated systems of multiple AI models working together, each largely “unaware” of the others’ existence or capabilities. For instance, OpenAI uses separate moderation-layer models whose operations are completely separate from the underlying language models generating the base text.

When you ask ChatGPT about its capabilities, the language model generating the response has no knowledge of what the moderation layer might block, what tools might be available in the broader system, or what post-processing might occur. It’s like asking one department in a company about the capabilities of a department it has never interacted with.
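As a rough schematic of that layering (the class names and the blocked-term list are hypothetical, not OpenAI’s actual design), a wrapper can filter or rewrite output without the generator ever knowing:

```python
# Schematic of an orchestrated chatbot pipeline (illustrative only): the
# generator has no knowledge of the moderation layer wrapping it, so it
# cannot accurately report what the overall system will or won't do.
class Generator:
    def reply(self, prompt: str) -> str:
        # Stand-in for a language model call.
        return f"Generated answer to: {prompt}"

class ModerationLayer:
    BLOCKED = ("medical advice",)        # hypothetical policy terms

    def filter(self, text: str) -> str:
        if any(term in text.lower() for term in self.BLOCKED):
            return "[response withheld by moderation]"
        return text

class Assistant:
    def __init__(self):
        self.generator = Generator()
        self.moderation = ModerationLayer()

    def answer(self, prompt: str) -> str:
        draft = self.generator.reply(prompt)
        return self.moderation.filter(draft)   # the generator never sees this step

print(Assistant().answer("Give me medical advice about my rash"))
```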

Perhaps most importantly, users are always directing the AI’s output through their prompts, even when they don’t realize it. When Lemkin asked Replit whether rollbacks were possible after a database deletion, his concerned framing likely prompted a response that matched that concern—generating an explanation for why recovery might be impossible rather than accurately assessing actual system capabilities.

This creates a feedback loop where worried users asking “Did you just destroy everything?” are more likely to receive responses confirming their fears, not because the AI system has assessed the situation, but because it’s generating text that fits the emotional context of the prompt.

A lifetime of hearing humans explain their actions and thought processes has led us to believe that these kinds of written explanations must have some level of self-knowledge behind them. That’s just not true with LLMs that are merely mimicking those kinds of text patterns to guess at their own capabilities and flaws.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Why it’s a mistake to ask chatbots about their mistakes Read More »

$30k-ford-ev-truck-due-in-2027-with-much-simpler-production-process

$30k Ford EV truck due in 2027 with much-simpler production process

Ford will debut a new midsize pickup truck in 2027 with a targeted price of $30,000, the automaker announced today. The as-yet unnamed pickup will be the first of a series of more affordable EVs from Ford, built using a newly designed flexible vehicle platform and US-made prismatic lithium iron phosphate batteries.

For the past few years, a team of Ford employees has been hard at work on the far side of the country from the Blue Oval’s base in Dearborn, Michigan. Sequestered in Long Beach and inspired by Lockheed’s legendary “skunkworks,” the Electric Vehicle Development Center approached designing and building Ford’s next family of EVs as a clean-sheet problem, presumably drawing lessons from the Chinese EVs that have so impressed Ford’s CEO.

It starts with a pickup

Designing an EV from the ground up, free of decades of legacy cruft, is a good idea, but not one unique to Ford. In recent months we’ve reviewed quite a few so-called software-defined vehicles, which replace dozens or even hundreds of discrete single-function electronic control units with a handful of powerful modern computers (usually known as domain controllers) on a high-speed network.

“This isn’t a stripped‑down, old‑school vehicle,” said Doug Field, Ford’s chief EV, digital, and design officer, pointedly comparing the future Ford to the recently revealed barebones EV from Slate Motors.

An animation of Ford’s new vehicle architecture.

Starting from scratch like this is allowing vehicle dynamics engineers to get creative with the way EVs handle. Field said that the company “applied first‑principles engineering, pushing to the limits of physics to make it fun to drive and compete on affordability. Our new zonal electric architecture unlocks capabilities the industry has never seen.”

$30k Ford EV truck due in 2027 with much-simpler production process Read More »

experiment-will-attempt-to-counter-climate-change-by-altering-ocean

Experiment will attempt to counter climate change by altering ocean


Gulf of Maine will be site of safety and effectiveness testing.

Woods Hole researchers, Adam Subhas (left) and Chris Murray, conducted a series of lab experiments earlier this year to test the impact of an alkaline substance, known as sodium hydroxide, on copepods in the Gulf of Maine. Credit: Daniel Hentz/Woods Hole Oceanographic Institution

Later this summer, a fluorescent reddish-pink spiral will bloom across the Wilkinson Basin in the Gulf of Maine, about 40 miles northeast of Cape Cod. Scientists from the Woods Hole Oceanographic Institution will release the nontoxic water tracer dye behind their research vessel, where it will unfurl into a half-mile wide temporary plume, bright enough to catch the attention of passing boats and even satellites.

As it spreads, the researchers will track its movement to monitor a tightly controlled, federally approved experiment testing whether the ocean can be engineered to absorb more carbon, and in turn, help combat the climate crisis.

As the world struggles to stay below the 1.5° Celsius global warming threshold—a goal set out in the Paris Agreement to avoid the most severe impacts of climate change—experts agree that reducing greenhouse gas emissions won’t be enough to avoid overshooting this target. The latest Intergovernmental Panel on Climate Change report, published in 2023, emphasizes the urgent need to actively remove carbon from the atmosphere, too.

“If we really want to have a shot at mitigating the worst effects of climate change, carbon removal needs to start scaling to the point where it can supplement large-scale emissions reductions,” said Adam Subhas, an associate scientist in marine chemistry and geochemistry at the Woods Hole Oceanographic Institution, who will oversee the week-long experiment.

The test is part of the LOC-NESS project—short for Locking away Ocean Carbon in the Northeast Shelf and Slope—which Subhas has been leading since 2023. The ongoing research initiative is evaluating the effectiveness and environmental impact of a marine carbon dioxide removal approach called ocean alkalinity enhancement (OAE).

This method of marine carbon dioxide removal involves adding alkaline substances to the ocean to boost its natural ability to neutralize acids produced by greenhouse gases. It’s promising, Subhas said, because it has the potential to lock away carbon permanently.

“Ocean alkalinity enhancement does have the potential to reach sort of gigatons per year of carbon removal, which is the scale at which you would need to supplement emissions reductions,” Subhas said. “Once the alkalinity is dissolved in seawater, it reacts with carbon dioxide and forms bicarbonate—essentially dissolved baking soda. That bicarbonate is one of the most stable forms of carbon in the ocean, and it can stay locked away for tens of thousands, even hundreds of thousands of years.”
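In textbook terms (the article does not spell out the equations), the lye dissolves in seawater and the resulting hydroxide converts dissolved CO2 into the bicarbonate Subhas describes:

```latex
% Standard reactions behind ocean alkalinity enhancement with sodium
% hydroxide (textbook chemistry, not taken from the article):
\begin{align*}
\mathrm{NaOH} &\rightarrow \mathrm{Na^{+}} + \mathrm{OH^{-}} \\
\mathrm{OH^{-}} + \mathrm{CO_{2}} &\rightarrow \mathrm{HCO_{3}^{-}}
\quad \text{(bicarbonate, the ``dissolved baking soda'')}
\end{align*}
```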

But it will be a long time before this could happen at the magnitude needed to mitigate climate change.

According to Wil Burns, co-director of the Institute for Responsible Carbon Removal at American University, between 6 and 10 gigatons of carbon need to be removed from the atmosphere annually by 2050 in order to meet the Paris Agreement climate target. “It’s a titanic task,” he said.

Most marine carbon dioxide removal initiatives, including those involving OAE, are still in a nascent stage.

“We’re really far from having any of these technologies be mature,” said Lisa Levin, an oceanographer and professor at the Scripps Institution of Oceanography at the University of California San Diego, who spoke on a panel at the United Nations Ocean Conference in June about the potential environmental risks of mining and carbon dioxide removal on deep-sea ecosystems. “We’re looking at a decade until any serious, large-scale marine carbon removal is going to be able to happen—or more.”

“In the meantime, everybody acknowledges that what we have to do is to reduce emissions, right, and not rely on taking carbon out of the atmosphere,” she said.

Marine carbon dioxide removal

So far, most carbon removal efforts have centered on land-based strategies, such as planting trees, restoring soils, and building machines that capture carbon dioxide directly from the air. Increasingly, researchers are exploring whether the oceans might help.

“Looking at the oceans makes a lot of sense when it comes to carbon removal, because the oceans sequester 70 times more CO2 than terrestrial sources,” Burns said. What if it can hold more?

That question is drawing growing attention, not only from scientists. In recent years, a wave of private companies have started piloting various methods of removing carbon from the oceans.

“It’s really the private sector that’s pushing the scaling of this very quickly,” Subhas said. In the US and Canada, he said, there are at least four companies piloting varied ocean alkalinity enhancement techniques.

Last year, Ebb Carbon, a California-based startup focused on marine carbon dioxide removal, signed a deal with Microsoft to remove up to 350,000 metric tons of CO2 over the next decade using an ocean alkalinity enhancement process that splits seawater into acidic and alkaline streams. The alkaline stream is then returned to the sea where it reacts with CO2 and stores it as bicarbonate, enabling the ocean to absorb more carbon dioxide from the atmosphere. In return, Microsoft will purchase carbon removal credits from the startup.

Another company called Vesta, which has headquarters in San Francisco, is using an approach called Coastal Carbon Capture. This involves adding finely ground olivine—a naturally occurring olive-green mineral—to sandy beaches. From there, ocean tides and waves carry it into the sea. Olivine reacts quickly with seawater in a process known as enhanced weathering, increasing ocean alkalinity. The company piloted one of its projects in Duck, North Carolina, last year, where it estimated that approximately 5,000 metric tons of carbon dioxide would be removed through coastal carbon capture after accounting for project emissions, according to its website.

But these efforts are not without risk, AU’s Burns said. “We have to proceed in an extremely precautionary manner,” he said.

Some scientists are concerned that OAE initiatives that involve olivine, which contains heavy metals like nickel and chromium, may harm marine life, he said. Another concern is that the olivine could cloud certain ocean areas and block light from penetrating to deeper depths. If too much alkalinity is introduced too fast in concentrated areas, he said, some animals might not be able to adjust.

Other marine carbon dioxide removal projects use methods besides OAE. Some involve adding iron to the ocean to stimulate the growth of microscopic plants called phytoplankton, which absorb carbon dioxide through photosynthesis. Others involve cultivating large-scale farms of kelp and seaweed, which also absorb carbon dioxide through photosynthesis. The marine plants can then be sunk in the deep ocean to store the carbon they absorbed.

In 2023, researchers from Woods Hole Oceanographic Institution conducted their first OAE-related field experiment from the 90-foot research vessel R/V Connecticut south of Massachusetts. As part of this first experiment, nontoxic water tracer dye was released into the ocean. Researchers tracked its movement through the water for 72 hours to model the dispersion of a plume of alkalinity over time.

Credit: Woods Hole Oceanographic Institution


One technique that has not yet been tried, but may be piloted in the future, according to the science-based conservation nonprofit Ocean Visions, would employ new technology to accelerate the ocean’s natural process of transferring surface water and carbon to the deep ocean. That’s called artificial downwelling. In a reverse process—artificial upwelling—cooler, nutrient-rich waters from the deep ocean would be pumped to the surface to spur phytoplankton growth.

So far, UC San Diego’s Levin said she is not convinced that these trials will lead to impactful carbon removal.

“I do not think the ocean is ever going to be a really large part of that solution,” she said. However, she added, “It might be part of the storage solution. Right now, people are looking at injecting carbon dioxide that’s removed from industry activities on land and transporting it to the ocean and injecting it into basalt.”

Levin said she’s also worried that we don’t know enough yet about the consequences of altering natural ocean processes.

“I am concerned about how many field trials would be required to actually understand what would happen, and whether we could truly understand the environmental risk of a fully scaled-up operation,” she said.

The experiment

Most marine carbon dioxide removal projects that have kicked off already are significantly larger in scale than the LOC-NESS experiment, which Subhas estimates will remove around 50 tons of CO2.

But, he emphasized, the goal of this project is not to compete in size or scale. He said the aim is to provide independent academic research that can help guide and inform the future of this industry and ensure it does not have negative repercussions on the marine environment.

There is some concern, he said, that commercial entities may pursue large-scale OAE initiatives to capitalize on the growing voluntary carbon market without first conducting adequate testing for safety and efficacy. Unlike those initiatives, there is no profit to be made from LOC-NESS. No carbon credits will be sold, Subhas said.

The project is funded by a collection of government and philanthropic sources, including the National Oceanic and Atmospheric Administration and the Carbon to Sea Initiative, a nonprofit that brings funders and scientists together to support marine carbon dioxide removal research and technology.

“We really feel like it’s necessary for the scientific community to be delivering transparent, trusted, and rigorous science to evaluate these things as these activities are currently happening and scaling in the ocean by the private sector,” Subhas said.

The LOC-NESS field trial in Wilkinson Basin will be the first “academic only” OAE experiment conducted from a ship in US waters. It is also the first of its kind to receive a permit from the Environmental Protection Agency under the Marine Protection, Research, and Sanctuaries Act.

“There’s no research in the past or planned that gets even close to providing a learning opportunity that this research is providing for OAE in the pelagic environment,” said Carbon to Sea Initiative’s Antonius Gagern, referring to the open sea experiment.

The permit was granted in April after a year of consultations between the EPA and other federal agencies.

During the process’ public comment periods, commenters expressed concerns about the potential impact on marine life, including the critically endangered North Atlantic right whales, small crustaceans that they eat called copepods, and larvae for the commercially important squid and mackerel fisheries. In a written response to some of these comments, the EPA stated that the small-scale project “demonstrates scientific rigor” and is “not expected to significantly affect human health, the marine environment, or other uses of the ocean.”

Subhas and his interdisciplinary team of chemists, biologists, engineers, and physicists from Woods Hole have spent the last few years planning this experiment and conducting a series of trials at their lab on Cape Cod to ensure they can safely execute and effectively monitor the results of the open-water test they will conduct this summer in the Gulf of Maine.

They specifically tested the effects of sodium hydroxide—an alkaline substance also known as lye or caustic soda—on marine microbes, phytoplankton, and copepods, a crucial food source for many marine species in the region in addition to the right whales. “We chose sodium hydroxide because it’s incredibly pure,” Subhas said. It’s widely used in the US to reduce acidity in drinking water.

It also helps counter ocean acidification, according to Subhas. “It’s like Tums for the ocean,” he said.

Ocean acidification occurs when the ocean absorbs excess carbon dioxide, causing its pH to drop. This makes it harder for corals, krill, and shellfish like oysters and clams to develop their hard calcium carbonate shells or skeletons.

This month, the team plans to release 50 tons of sodium hydroxide into a designated area of the Wilkinson Basin from the back of one of two research vessels participating in the LOC-NESS operation.

The basin is an ideal test site, according to Subhas, because there is little presence of phytoplankton, zooplankton, commercial fish larvae, and endangered species, including some whales, during this season. Still, as a precautionary measure, Woods Hole has contracted a protected-species observer to keep a lookout for marine species and mitigate potential harm if any are spotted. That person will be on board as the vessel travels to and from the field trial site, including while the team releases the sodium hydroxide into the ocean.

The alkaline substance will be dispersed over four to 12 hours off the back of one of the research vessels, along with the nontoxic fluorescent red water tracer dye called rhodamine. The dye will help track the location and spread of the sodium hydroxide once released into the ocean, and the vessel’s wake will help mix the solution in with the ocean water.

After about an hour, Subhas said, it will form into a “pinkish” patch of water that can be picked up on satellites. “We’re going to be taking pictures from space and looking at how this patch sort of evolves, dilutes, and stretches and disperses over time.”

For a week after that, scientists aboard the vessels will take rotating shifts to collect data around the clock. They will deploy drones and analyze over 20 types of samples from the research vessel to monitor how the surrounding waters and marine life respond to the experiment. They’ll track changes in ocean chemistry, nutrient levels, plankton populations and water clarity, while also measuring acidity and dissolved CO2.

In March, the team did a large-scale dry run of the dispersal at an open air testing facility on a naval base in New Jersey. According to Subhas, the trial demonstrated their ability to safely and effectively deliver alkalinity to surface seawater.

“The next step is being able to measure the carbon uptake from seawater—from the atmosphere into seawater,” he said. That is a slower process. He said he expects to have some preliminary results on carbon uptake, as well as environmental impacts, early next year.

This story originally appeared on Inside Climate News.


Experiment will attempt to counter climate change by altering ocean Read More »

how-old-is-the-earliest-trace-of-life-on-earth?

How old is the earliest trace of life on Earth?


A recent conference sees doubts raised about the age of the oldest signs of life.

Where the microbe bodies are buried: metamorphosed sediments in Labrador, Canada containing microscopic traces of carbon. Credit: Martin Whitehouse


The question of when life began on Earth is as old as human culture.

“It’s one of these fundamental human questions: When did life appear on Earth?” said Professor Martin Whitehouse of the Swedish Museum of Natural History.

So when some apparently biological carbon was dated to at least 3.95 billion years ago—making it the oldest remains of life on Earth—the claim sparked interest and skepticism in equal measure, as Ars Technica reported in 2017.

Whitehouse was among those skeptics. This July, he presented new evidence to the Goldschmidt Conference in Prague that the carbon in question is only 2.7 to 2.8 billion years old, making it younger than other traces of life found elsewhere.

Organic carbon?

The carbon in question is in rock in Labrador, Canada. The rock was originally silt on the seafloor that, it’s argued, hosted early microbial life that was buried by more silt, leaving the carbon as their remains. The pressure and heat of deep burial and tectonic events over eons have transformed the silt into a hard metamorphic rock, and the microbial carbon in it has metamorphosed into graphite.

“They are very tiny, little graphite bits,” said Whitehouse.

The key to showing that this graphite was originally biological versus geological is its carbon isotope ratio. From life’s earliest days, its enzymes have preferred the slightly lighter isotope carbon-12 over the marginally heavier carbon-13. Organic carbon is therefore much richer in carbon-12 than geological carbon, and the Labrador graphite does indeed have this “light” biological isotope signature.
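That signature is conventionally reported as δ¹³C, the deviation of a sample’s ¹³C/¹²C ratio from a reference standard, expressed in parts per thousand; biological carbon is strongly depleted in ¹³C, giving negative values. The formula below is the standard definition, not something spelled out in the article:

```latex
% Conventional definition of the carbon isotope signature (textbook
% formula; the article itself does not give it explicitly), reported in
% parts per thousand (per mil):
\[
\delta^{13}\mathrm{C} \;=\;
\left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{sample}}}
            {\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{standard}}} - 1 \right)
\times 1000
\]
```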

The key question, however, is its true age.

Mixed-up, muddled-up, shook-up rocks

Sorting out the age of the carbon-containing Labrador rock is a geological can of worms.

These are some of the oldest rocks on the planet—they’ve been heated, squished, melted, and faulted multiple times as Earth went through the growth, collision, and breakup of continents before being worn down by ice and exposed today.

“That rock itself is unbelievably complicated,” said Whitehouse. “It’s been through multiple phases of deformation.”

In general, the only ways to date sediments are if there’s a layer of volcanic ash in them, or by distinctive fossils in the sediments. Neither is available in these Labrador rocks.

“The rock itself is not directly dateable,” said Whitehouse, “so then you fall onto the next best thing, which is you want to look for a classic field geology cross-cutting relationship of something that is younger and something that you can date.”

The idea, which is as old as the science of geology itself, is to bracket the age of the sediment by finding a rock formation that cuts across it. Logically, the cross-cutting rock is younger than the sediment it cuts across.

In this case, the carbon-containing metamorphosed siltstone is surrounded by swirly, gray banded gneiss rock, but the boundary between the siltstone and the gray gneiss is parallel, so there’s no cross-cutting to use.

Professor Tsuyoshi Komiya of The University of Tokyo was a coauthor on the 3.95 billion-year age paper. His team used a cross-cutting rock they found at a different location and extrapolated that to the carbon-bearing siltstone to constrain its age. “It was discovered that the gneiss was intruded into supracrustal rocks (mafic and sedimentary rocks),” said Komiya in an email to Ars Technica.

But Whitehouse disputes that inference between the different outcrops.

“You’re reliant upon making these very long-distance assumptions and correlations to try to date something that might actually not have anything to do with what you think you’re dating,” he said.

Professor Jonathan O’Neil of the University of Ottawa, who was not involved in either Whitehouse’s or Komiya’s studies but who has visited the outcrops in question, agrees with Whitehouse. “I remember I was not convinced either by these cross-cutting relationships,” he told Ars. “It’s not clear to me that one is necessarily older than the other.”

With the field geology evidence disputed, the other pillar holding up the 3.95-billion-year-old date is its radiometric date, measured in zircon crystals extracted from the rocks surrounding the metamorphosed siltstone.

The zircon keeps the score

Geologists use the mineral zircon to date rocks because when it crystallizes, it incorporates uranium but not lead. So as radioactive uranium slowly decays into lead, the ratio of uranium to lead provides the age of the crystal.
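For a single decay chain such as ²³⁸U decaying to ²⁰⁶Pb, the age follows from the standard decay equation (textbook geochronology, not taken from the article), where Pb* denotes the radiogenic lead accumulated since the crystal formed:

```latex
% Standard uranium-lead age equation for one decay chain
% (^238U -> ^206Pb, decay constant \lambda_{238}); textbook formula,
% not taken from the article:
\[
\frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}} = e^{\lambda_{238} t} - 1
\qquad\Longrightarrow\qquad
t = \frac{1}{\lambda_{238}} \ln\!\left(1 + \frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}}\right)
\]
```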

But the trouble with any date obtained from rocks as complicated as these is knowing exactly what geological event it dates—the number alone means little without the context of all the other geological evidence for the events that affected the area.

Both Whitehouse and O’Neil have independently sampled and dated the same rocks as Komiya’s team, and where Komiya’s team got a date of 3.95, Whitehouse’s and O’Neil’s new dates are both around 3.87 billion years. Importantly, O’Neil’s and Whitehouse’s dates are far more precise, with errors around plus-or-minus 5 or 6 million years, which is remarkably precise for dates in rocks this old. The 3.95 date had an error around 10 times bigger. “It’s a large error,” said O’Neil.

But there’s a more important question: How is that date related to the age of the organic carbon? The rocks have been through many events that could each have “set” the dates in the zircons. That’s because zircons can survive multiple re-heatings and even partial remelting, with each new event adding a new layer, or “zone,” on the outer surface of the crystal, recording the age of that event.

“This rock has seen all the events, and the zircon in it has responded to all of these events in a way that, when you go in with a very small-scale ion beam to do the sampling on these different zones, you can pick apart the geological history,” Whitehouse said.

Whitehouse’s team zapped tiny spots on the zircons with a beam of negatively charged oxygen ions to dislodge ions from the crystals, then sucked away these ions into a mass spectrometer to measure the uranium-lead ratio, and thus the dates. The tiny beam and relatively small error have allowed Whitehouse to document the events that these rocks have been through.

“Having our own zircon means we’ve been able to go in and look in more detail at the internal structure in the zircon,” said Whitehouse. “Where we might have a core that’s 3.87 [billion years old], we’ll have a rim that is 2.7 billion years, and that rim, morphologically, looks like an igneous zircon.”

That igneous outer rim of Whitehouse’s zircons shows that it formed in partially molten rock that would have flowed at that time. That flow was probably what brought it next to the carbon-containing sediments. Its date of 2.7 billion years ago means the carbon in the sediments could be any age older than that.

That’s a key difference from Komiya’s work. He argues that the older dates in the cores of the zircons are the true age of the cross-cutting rock. “Even the igneous zircons must have been affected by the tectonothermal event; therefore, the obtained age is the minimum age, and the true age is older,” said Komiya. “The fact that young zircons were found does not negate our research.”

But Whitehouse contends that the old cores of the zircons instead record a time when the original rock formed, long before it became a gneiss and flowed next to the carbon-bearing sediments.

Zombie crystals

Zircon’s resilience means it can survive being eroded from the rock where it formed and then deposited in a new, sedimentary rock as the undead remnants of an older, now-vanished landscape.

The carbon-containing siltstone contains zombie zircons, and Whitehouse presented new data on them to the Goldschmidt Conference, dating them to 2.8 billion years ago. Whitehouse argues that these crystals formed in an igneous rock 2.8 billion years ago and then were eroded, washed into the sea, and settled in the silt. So the siltstone must be no older than 2.8 billion years old, he said.

“You cannot deposit a zircon that is not formed yet,” O’Neil explained.


Tiny recorders of history – ancient zircon crystals from Labrador. Left shows layers built up as the zircon went through many heating events. Right shows a zircon with a prism-like outer shape showing that it formed in igneous conditions around an earlier zircon. Circles indicate where an ion beam was used to measure dates. Credit: Martin Whitehouse

This 2.8-billion-year age, along with the igneous zircon age of 2.7 billion years, brackets the age of the organic carbon to anywhere between 2.8 and 2.7 billion years old. That’s much younger than Komiya’s date of 3.95 billion years old.

Komiya disagrees: “I think that the estimated age is minimum age because zircons suffered from many thermal events, so that they were rejuvenated,” he said. In other words, the 2.8-billion-year age again reflects later heating, and the true date is given by the oldest-dated zircons in the siltstone.

But Whitehouse presented a third line of evidence to dispute the 3.95-billion-year date: isotopes of hafnium in the same zombie zircon crystals.

The technique relies on radioactive decay of lutetium-176 to hafnium-176. If the 2.8-billion-year age resulted from rejuvenation by later heating, it would have had to have formed from material with a hafnium isotope ratio incompatible with the isotope composition of the early Earth.

“They go to impossible numbers,” said Whitehouse.

The only way that the uranium-lead ratio can be compatible with the hafnium in the zircons, Whitehouse argued, is if the zircons that settled in the silt had crystallized around 2.8 billion years ago, constraining the organic carbon to being no older than that.
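The back-calculation rests on the standard ¹⁷⁶Lu–¹⁷⁶Hf decay relation (textbook isotope geochemistry, not given in the article): assuming an older crystallization age t forces a different, and in this case geologically implausible, initial hafnium ratio.

```latex
% Standard ^176Lu -> ^176Hf decay relation (textbook isotope geochemistry,
% not taken from the article). A larger assumed age t implies a different
% initial Hf ratio for the same measured values:
\[
\left(\frac{^{176}\mathrm{Hf}}{^{177}\mathrm{Hf}}\right)_{\!\text{measured}}
=
\left(\frac{^{176}\mathrm{Hf}}{^{177}\mathrm{Hf}}\right)_{\!\text{initial}}
+
\left(\frac{^{176}\mathrm{Lu}}{^{177}\mathrm{Hf}}\right)
\left(e^{\lambda_{176}\, t} - 1\right)
\]
```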

The new oldest remains of life on Earth, for now

If the Labrador carbon is no longer the oldest trace of life on Earth, then where are the oldest remains of life now?

For Whitehouse, it’s in the 3.77-billion-year-old Isua Greenstone Belt in Greenland: “I’m willing to believe that’s a well-documented age… that’s what I think is the best evidence for the oldest biogenicity that we have,” said Whitehouse.

O’Neil recently co-authored a paper on Earth’s oldest surviving crustal rocks, located next to Hudson Bay in Canada. He points there. “I would say it’s in the Nuvvuagittuq Greenstone belt,” said O’Neil, “because I would argue that these rocks are 4.3 billion years old. Again, not everybody agrees!” Intriguingly, the rocks he is referring to contain carbon with a possibly biological origin and are thought to be the remains of the kind of undersea vent where life could well have first emerged.

But the bigger picture is the fact that we have credible traces of life of this vintage—be it 3.8 or 3.9 or 4.3 billion years.

Any of those dates is remarkably early in the planet’s 4.6-billion-year life. It’s long before there was an oxygenated atmosphere, before continents emerged above sea level, and before plate tectonics got going. It’s also much older than the oldest microbial “stromatolite” fossils, which have been dated to about 3.48 billion years ago.

O’Neil thinks that once conditions on Earth were habitable, life would have emerged relatively fast: “To me, it’s not shocking, because the conditions were the same,” he said. “The Earth has the luxury of time… but biology is very quick. So if all the conditions were there by 4.3 billion years old, why would biology wait 500 million years to start?”


Howard Lee is a freelance science writer focusing on the evolution of planet Earth through deep time. He earned a B.Sc. in geology and M.Sc. in remote sensing, both from the University of London, UK.

How old is the earliest trace of life on Earth? Read More »