Author name: DJ Henderson


Rocket Report: Starship fails for a second time; what’s to blame for Vulcan delays?


“During Starship’s ascent burn, the vehicle experienced a rapid unscheduled disassembly.”

The first commercial flight of Ariane 6, operated by Arianespace, lifts off on Thursday. Credit: Arianespace

Welcome to Edition 7.34 of the Rocket Report! What a day in space Thursday was. During the morning hours, we saw the triumphant second flight of the Ariane 6 rocket, a pivotal moment for European sovereignty in space. Then Intuitive Machines had a partially successful landing on the Moon. And finally, on Thursday evening, SpaceX’s Starship failed during a test flight for the second consecutive time.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Firefly sets date for next Alpha launch. Having completed a static-fire test, Firefly Aerospace has set a target date of March 15 for the launch of its “Message in a Booster” mission. The Alpha rocket will launch Lockheed Martin’s LM 400 spacecraft from Vandenberg Space Force Base, with the 52-minute launch window opening at 6:25 am PT (14:25 UTC). Lockheed is self-funding the demonstration mission of its new satellite bus, the LM 400, which it says can serve civil, military, and commercial customers.

A slow build … This is Alpha’s second launch for Lockheed Martin, and the first under Firefly’s multi-launch agreement with the company, which includes up to 25 missions over the next five years. Alpha is capable of lifting 1 metric ton to low-Earth orbit, and this will be the rocket’s sixth launch since its debut in September 2021. The company has recorded one failure, two partial failures, and two successes during that time. It’s been a slow ramp-up for Alpha, with the rocket having launched just a single time in 2024, in July.

Isar Aerospace wins Asian launch contract. A Japanese microgravity services startup named ElevationSpace has become the first Asian customer for Germany’s Isar Aerospace, Space News reports. ElevationSpace said Monday it has booked a launch during the second half of 2026 with Isar Aerospace for AOBA, a 200-kilogram spacecraft designed to test a recoverable platform for space-based experiments and manufacturing. This is a hopeful sign that European startups will have commercial appeal beyond the continent.

Spectrum rocket nearing debut launch … The Japanese firm cited Isar Aerospace’s direct injection capability into low Earth orbit and flexible launch scheduling as key factors in its decision to sign the contract. Isar Aerospace said last month that Spectrum, designed to deliver up to 1,000 kilograms to low-Earth orbit, has completed static-fire testing and is prepared for its first flight from Andøya Spaceport in northern Norway, pending final regulatory approval.


A small launch site in French Guiana. The French space agency, CNES, has opened a public consultation period for the new multi-user micro-launcher facility at the Guiana Space Centre in French Guiana, European Spaceflight reports. Last month, the first of four public consultation sessions into the construction of the new Multi-Launcher Launch Complex at the Guiana Space Centre was held at Kourou Town Hall. In March 2021, CNES announced plans to transform the old Diamant launch site into a new multi-use facility for commercial micro-launcher providers, supporting rockets with payloads of up to 1,500 kilograms.

Lots of potential users … The final mission launched from the Guiana Space Centre’s Diamant facility lifted off in 1976, after which the site was abandoned and left to be reclaimed by the jungle. In 2019, the site was earmarked for revitalization to serve as a testing ground for the Callisto and Themis reusable rocket booster demonstrators. This testing was, however, always going to serve as a temporary justification for the launch facility’s rebirth. In July 2022, CNES pre-selected Avio, HyImpulse, Isar Aerospace, MaiaSpace, PLD Space, Rocket Factory Augsburg, and Latitude to use the facility. However, MaiaSpace has since been allocated the Guiana Space Centre’s old Soyuz launch pad for its partially reusable Maia rocket.

Firefly nets Earth science launch contract. Amid its successful lunar landing, forthcoming Alpha launch, and a new launch contract, Firefly is having one heck of a week. NASA revealed this week that it has selected Firefly Aerospace to launch a trio of Earth science smallsats that will study the formation of storms, Space News reports. The agency said March 4 that it awarded a task order through its Venture-Class Acquisition of Dedicated and Rideshare (VADR) contract to Firefly to launch the three-satellite Investigation of Convective Updrafts mission.

Hello, Virginia … NASA did not disclose the value of the task order, a practice it has followed on other VADR awards. The three satellites will launch on a Firefly Alpha rocket from Wallops Flight Facility in Virginia. NASA did not disclose a launch date in its announcement, but Firefly, in its own statement, said the launch would take place as soon as 2026. Firefly said it will launch the mission from Pad 0A at the Mid-Atlantic Regional Spaceport on Wallops Island, Virginia, which has been used by Northrop Grumman’s Antares rocket and will also be used by Alpha and the future MLV rocket.

Ariane 6 delivers for Europe when it is needed. Europe’s Ariane 6 rocket lifted off Thursday from French Guiana and deployed a high-resolution reconnaissance satellite into orbit for the French military, notching a success on its first operational flight. “This is an absolute pleasure for me today to announce that Ariane 6 has successfully placed into orbit the CSO-3 satellite,” said David Cavaillolès, who took over in January as CEO of Arianespace, the Ariane 6’s commercial operator. “Today, here in Kourou, we can say that thanks to Ariane 6, Europe and France have their own autonomous access to space back, and this is great news.”

Can no longer rely on US rockets … This was the second flight of Europe’s new Ariane 6 rocket, following a mostly successful debut launch last July. The first test flight of the unproven Ariane 6 carried a batch of small, relatively inexpensive satellites. An auxiliary propulsion unit (APU)—essentially a miniature second engine—on the upper stage shut down in the latter portion of the inaugural Ariane 6 flight, after the rocket reached orbit and released some of its payloads. Philippe Baptiste, France’s minister for research and higher education, says Ariane 6 is “proof of our space sovereignty,” as many European officials feel they can no longer rely on the United States.

US launch facilities are not prepared for a surge. Rocket firm executives warned this week that the nation’s primary launch facilities may soon be unable to handle the projected surge in rocket launches, potentially hampering America’s competitiveness in the rapidly expanding commercial space sector, Space News reports. “I don’t think that people realize how many rockets are going to be launching five or eight years from now,” Dave Limp, CEO of Blue Origin, said at the Air & Space Forces Association’s Warfare Conference in Aurora, Colorado.

Support needed for multiple daily launches … Limp’s concerns were echoed by executives from SpaceX and United Launch Alliance during a panel discussion, where all three agreed that the industry must collectively prepare for a future where multiple daily launches become the norm. Jon Edwards, SpaceX’s vice president of Falcon launch vehicles, highlighted that even at Cape Canaveral, the busiest US spaceport, current protocols don’t allow simultaneous launches by different providers.

Falcon 9 first stage fails to land safely. After what appeared to be a routine Starlink mission on Sunday, a Falcon 9 first stage landed on the Just Read the Instructions drone ship in the Atlantic Ocean. Shortly after the landing, however, a fire broke out in the aft end of the rocket. This damaged a landing leg and caused the rocket to topple over. Florida Today has video of the badly damaged rocket returning to Port Canaveral.

Space remains hard … The Starlink satellites safely reached orbit, so this did not impact the primary mission. However, Falcon 9 landings have become so seemingly routine that such a failure now stands out. This booster was relatively new, having launched three Starlink missions, GOES-U, and Maxar 3. It was only the first-stage booster’s fifth flight. To date, SpaceX has successfully flown a single booster 26 times.

India begins construction of a new launch site. The Indian space agency, ISRO, presently has two operational launch pads at the Satish Dhawan Space Centre in Sriharikota. The space agency launches Indian and foreign satellites aboard rockets like PSLV and GSLV from here. As it seeks to expand its launch activities, ISRO officially began constructing a new launch site at Kulasekaranpattinam, in Tamil Nadu, this week, The National reports.

Avoiding the dogs … The Kulasekaranpattinam launch site is strategically located near the equator. With open seas to the south of it, the site allows for direct southward launches over the Indian Ocean. This will minimize fuel consumption and maximize payload capacity for small satellite launch vehicles, particularly beneficial for cost-effective commercial satellite launches. The site also avoids the need for complex “dogleg” maneuvers around Sri Lanka.

SpaceX launches Starship on its eighth flight. SpaceX launched the eighth full-scale test flight of its enormous Starship rocket on Thursday evening after receiving regulatory approval from the Federal Aviation Administration. The test flight sought a repeat of what SpaceX hoped to achieve on the previous Starship launch in January, when the rocket broke apart and showered debris over the Atlantic Ocean and Turks and Caicos Islands.

Alas … Unfortunately for SpaceX, the Starship upper stage failed again, in a similar location, with similar impacts. About a minute before reaching the cutoff of the vehicle’s engines en route to space, the upper stage spun out of control and broke apart. “During Starship’s ascent burn, the vehicle experienced a rapid unscheduled disassembly and contact was lost,” SpaceX said in a statement about an hour later. “Our team immediately began coordination with safety officials to implement pre-planned contingency responses.” Ars will have full coverage of what is a serious setback for the company.

Amazon culture comes to Blue Origin. Jeff Bezos has moved to introduce a tough Amazon-like approach to his rocket maker Blue Origin, as the world’s third-richest person seeks to revive a company that has lagged behind Elon Musk’s SpaceX, the Financial Times reports. The space company’s founder and sole shareholder has pushed to shift its internal culture with management hires from Amazon, while implementing policies akin to those of the e-commerce giant, including longer working hours and more aggressive targets.

Work-life balance, what? … Key to Bezos’s effort is chief executive Dave Limp. The former Amazon devices chief was appointed in late 2023 and has been followed in quick succession by several veterans from the $2.2 trillion tech giant, including supply chain chief Tim Collins, chief information officer Josh Koppelman, and chief financial officer Allen Parker. The changes in leadership have been accompanied by significant layoffs. In February, roughly 10 percent of Blue Origin’s more than 10,000-strong workforce was dismissed. Employees are now expected to work longer hours, and badge scanners have been introduced to track employees’ time, much as Amazon does.

Space Force is to blame for Vulcan delays? The debut of United Launch Alliance’s Vulcan rocket was delayed more than four years, ultimately from 2019 to January 2024. The first flight went very well, but during the second certification mission in October 2024 there was an anomaly with one of the two solid rocket boosters powering the vehicle. Although the rocket reached its intended orbit, this issue necessitated an investigation. Vulcan has yet to fly again, and with the certification process still ongoing, it is now likely to launch no earlier than sometime this summer.

Spacecraft end up moving to the right … No one is more interested in seeing Vulcan fly than the US Space Force, which has dozens of missions lined up for the rocket. These missions were supposed to be launched between 2022 and 2026. To make up for lost time, the Space Force now hopes to launch 11 national security missions this year (this almost certainly won’t happen). In a curious comment to Space News, ULA chief executive Tory Bruno appeared to put some of the blame for the delays on the Space Force rather than on Vulcan’s tardiness: Bruno pointed out there is inherent unpredictability in national security launch schedules, noting that “about half of the spacecraft end up needing to move right, and they move right by a lot.” It is a weird comment to make about a rocket that is years late.

Next three launches

March 9: Falcon 9 | SPHEREx & PUNCH | Vandenberg Space Force Base, Calif. | 03:09 UTC

March 9: Falcon 9 | Starlink 12-21 | Cape Canaveral, Fla. | 04:10 UTC

March 10: Electron | The Lightning God Reigns | Māhia Peninsula, New Zealand | 00:00 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



The Starship program hits another speed bump with second consecutive failure

The flight plan going into Thursday’s mission called for sending Starship on a journey halfway around the world from Texas, culminating in a controlled reentry over the Indian Ocean before splashing down northwest of Australia.

The test flight was supposed to be a do-over of the previous Starship flight on January 16, when the rocket’s upper stage—itself known as Starship, or ship—succumbed to fires fueled by leaking propellants in its engine bay. Engineers determined the most likely cause of the propellant leak was a harmonic response several times stronger than predicted, suggesting the vibrations during the ship’s climb into space were in resonance with the vehicle’s natural frequency. This would have intensified the vibrations beyond the levels engineers expected.
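To illustrate why a harmonic response in resonance with the vehicle’s natural frequency is so damaging (this is textbook physics, not SpaceX’s published analysis), consider the steady-state amplitude of a damped structure driven at frequency \(\omega\), which grows sharply as \(\omega\) approaches the natural frequency \(\omega_0\):

$$A(\omega) = \frac{F_0/m}{\sqrt{\left(\omega_0^2 - \omega^2\right)^2 + \left(2\zeta\omega_0\omega\right)^2}}$$

Here \(F_0\) is the forcing amplitude, \(m\) the mass, and \(\zeta\) the damping ratio. With light damping, vibrations near \(\omega_0\) are amplified far beyond what an off-resonance prediction would suggest, which is consistent with a response several times stronger than expected.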

The Super Heavy booster returned to Starbase in Texas to be caught back at the launch pad. Credit: SpaceX

Engineers test-fired the Starship vehicle for this week’s test flight earlier this month, validating changes to the ship’s fuel feed lines leading its six Raptor engines, adjustments to propellant temperatures, and a new operating thrust.

But engineers missed something. On Thursday, the Raptor engines began shutting down on Starship about eight minutes into the flight, and the rocket started tumbling 90 miles (146 kilometers) over the southeastern Gulf of Mexico. SpaceX ground controllers lost all contact with the rocket about nine-and-a-half minutes after liftoff.

“Prior to the end of the ascent burn, an energetic event in the aft portion of Starship resulted in the loss of several Raptor engines,” SpaceX wrote on X. “This in turn led to a loss of attitude control and ultimately a loss of communications with Starship.”

Just like in January, residents and tourists across the Florida peninsula, the Bahamas, and the Turks and Caicos Islands shared videos of fiery debris trails appearing in the twilight sky. Air traffic controllers diverted or delayed dozens of commercial airline flights flying through the debris footprint, just as they did in response to the January incident.

There were no immediate reports Thursday of any Starship wreckage falling over populated areas. In January, residents in the Turks and Caicos Islands recovered small debris fragments, including one piece that caused minor damage when it struck a car. The debris field from Thursday’s failed flight appeared to fall west of the areas where debris fell after Starship Flight 7.

A spokesperson for the Federal Aviation Administration said the regulatory agency will require SpaceX to perform an investigation into Thursday’s Starship failure.



“Literally just a copy”—hit iOS game accused of unauthorized HTML5 code theft

Viral success (for someone else)

VoltekPlay writes on Reddit that it was only alerted to the existence of My Baby or Not! on iOS by “a suspicious burst of traffic on our itch.io page—all coming from Google organic search.” Only after adding a “where did you find our game?” player poll to the page were the developers made aware of some popular TikTok videos featuring the iOS version.

“Luckily, some people in the [TikTok] comments mentioned the real game name—Diapers, Please!—so a few thousand players were able to google their way to our page,” VoltekPlay writes. “I can only imagine how many more ended up on the thief’s App Store page instead.”

Earlier this week, the $2.99 iOS release of My Baby or Not! was quickly climbing iOS’s paid games charts, attracting an estimated 20,000 downloads overall, according to Sensor Tower.

Marwane Benyssef’s only previous iOS release, Kiosk Food Night Shift, also appears to be a direct copy of an itch.io release.

The App Store listing credited My Baby or Not! to “Marwane Benyssef,” a new iOS developer with no apparent history in the game development community. Benyssef’s only other iOS game, Kiosk Food Night Shift, was released last August and appears to be a direct copy of Kiosk, a pay-what-you-want title that was posted to itch.io last year (with a subsequent “full” release on Steam this year).

In a Reddit post, the team at VoltekPlay said that they had filed a DMCA copyright claim against My Baby or Not! Apple subsequently shared that claim with Benyssef, VoltekPlay writes, along with a message that “Apple encourages the parties to a dispute to work directly with one another to resolve the claim.”

This morning, Ars reached out to Apple to request a comment on the situation. While awaiting a response (which Apple has yet to provide), Apple appears to have removed Benyssef’s developer page and all traces of their games from the iOS App Store.



Researchers surprised to find less-educated areas adopting AI writing tools faster


From the mouths of machines

Stanford researchers analyzed 305 million texts, revealing AI-writing trends.

Since the launch of ChatGPT in late 2022, experts have debated how widely AI language models would impact the world. A few years later, the picture is getting clearer. According to new Stanford University-led research examining over 300 million text samples across multiple sectors, AI language models now assist in writing up to a quarter of professional communications. The impact is especially pronounced in less-educated parts of the United States.

“Our study shows the emergence of a new reality in which firms, consumers and even international organizations substantially rely on generative AI for communications,” wrote the researchers.

The researchers tracked large language model (LLM) adoption across industries from January 2022 to September 2024 using a dataset that included 687,241 consumer complaints submitted to the US Consumer Financial Protection Bureau (CFPB), 537,413 corporate press releases, 304.3 million job postings, and 15,919 United Nations press releases.

By using a statistical detection system that tracked word usage patterns, the researchers found that roughly 18 percent of financial consumer complaints (including 30 percent of all complaints from Arkansas), 24 percent of corporate press releases, up to 15 percent of job postings, and 14 percent of UN press releases showed signs of AI assistance during that period of time.

The study also found that while urban areas showed higher adoption overall (18.2 percent versus 10.9 percent in rural areas), regions with lower educational attainment used AI writing tools more frequently (19.9 percent compared to 17.4 percent in higher-education areas). The researchers note that this contradicts typical technology adoption patterns where more educated populations adopt new tools fastest.

“In the consumer complaint domain, the geographic and demographic patterns in LLM adoption present an intriguing departure from historical technology diffusion trends where technology adoption has generally been concentrated in urban areas, among higher-income groups, and populations with higher levels of educational attainment.”

Researchers from Stanford, the University of Washington, and Emory University led the study, titled, “The Widespread Adoption of Large Language Model-Assisted Writing Across Society,” first listed on the arXiv preprint server in mid-February. Weixin Liang and Yaohui Zhang from Stanford served as lead authors, with collaborators Mihai Codreanu, Jiayu Wang, Hancheng Cao, and James Zou.

Detecting AI use in aggregate

We’ve previously covered that AI writing detection services aren’t reliable, and this study does not contradict that finding. On a document-by-document basis, AI detectors cannot be trusted. But when analyzing millions of documents in aggregate, telltale patterns emerge that suggest the influence of AI language models on text.

The researchers developed an approach based on a statistical framework in a previously released work that analyzed shifts in word frequencies and linguistic patterns before and after ChatGPT’s release. By comparing large sets of pre- and post-ChatGPT texts, they estimated the proportion of AI-assisted content at a population level. The presumption is that LLMs tend to favor certain word choices, sentence structures, and linguistic patterns that differ subtly from typical human writing.
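The paper’s actual framework is more sophisticated, but the population-level idea can be sketched in a few lines: treat a post-ChatGPT corpus as a mixture of “human-style” and “LLM-style” writing and estimate the mixture weight from how word frequencies shift. Everything below (the marker words, the numbers, and the least-squares shortcut) is an illustrative assumption, not the study’s implementation.

```python
import numpy as np

def estimate_ai_fraction(human_freqs, ai_freqs, observed_freqs):
    """Estimate the fraction alpha of AI-assisted text in a corpus.

    Assumes observed word frequencies are roughly a convex mixture:
        observed ~ (1 - alpha) * human_freqs + alpha * ai_freqs
    and picks the alpha in [0, 1] that minimizes squared error.
    (A toy least-squares stand-in for the paper's statistical framework.)
    """
    human = np.asarray(human_freqs, dtype=float)
    ai = np.asarray(ai_freqs, dtype=float)
    obs = np.asarray(observed_freqs, dtype=float)
    direction = ai - human  # how frequencies shift under LLM-assisted writing
    alpha = np.dot(obs - human, direction) / np.dot(direction, direction)
    return float(np.clip(alpha, 0.0, 1.0))

# Hypothetical per-1,000-word rates for a few "LLM-flavored" marker words;
# the real study fits broad word-frequency distributions, not a hand-picked list.
human_rates = [0.08, 0.30, 0.12]    # estimated from pre-ChatGPT complaints
ai_rates    = [0.90, 1.10, 0.70]    # estimated from LLM-generated complaints
observed    = [0.25, 0.46, 0.24]    # a post-ChatGPT batch of complaints

print(f"Estimated AI-assisted share: {estimate_ai_fraction(human_rates, ai_rates, observed):.1%}")
```

The validation step described below, in which the researchers recovered known 0 to 25 percent AI mixtures with error rates under 3.3 percent, is essentially a check that this kind of estimator returns the true mixing weight on corpora where the answer is known.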

To validate their approach, the researchers created test sets with known percentages of AI content (from zero percent to 25 percent) and found their method predicted these percentages with error rates below 3.3 percent. This statistical validation gave them confidence in their population-level estimates.

The researchers specifically note that their estimates likely represent a minimum level of AI usage; actual involvement may be significantly greater. Because heavily edited or increasingly sophisticated AI-generated content is difficult to detect, they say their reported adoption rates could substantially underestimate true levels of generative AI use.

Analysis suggests AI use as “equalizing tools”

While the overall adoption rates are revealing, perhaps more insightful are the patterns of who is using AI writing tools and how these patterns may challenge conventional assumptions about technology adoption.

In examining the CFPB complaints (a US public resource that collects complaints about consumer financial products and services), the researchers’ geographic analysis revealed substantial variation across US states.

Arkansas showed the highest adoption rate at 29.2 percent (based on 7,376 complaints), followed by Missouri at 26.9 percent (16,807 complaints) and North Dakota at 24.8 percent (1,025 complaints). In contrast, states like West Virginia (2.6 percent), Idaho (3.8 percent), and Vermont (4.8 percent) showed minimal AI writing adoption. Major population centers demonstrated moderate adoption, with California at 17.4 percent (157,056 complaints) and New York at 16.6 percent (104,862 complaints).

The urban-rural divide followed expected technology adoption patterns initially, but with an interesting twist. Using Rural Urban Commuting Area (RUCA) codes, the researchers found that urban and rural areas initially adopted AI writing tools at similar rates during early 2023. However, adoption trajectories diverged by mid-2023, with urban areas reaching 18.2 percent adoption compared to 10.9 percent in rural areas.

Contrary to typical technology diffusion patterns, areas with lower educational attainment showed higher AI writing tool usage. Comparing regions above and below state median levels of bachelor’s degree attainment, areas with fewer college graduates stabilized at 19.9 percent adoption rates compared to 17.4 percent in more educated regions. This pattern held even within urban areas, where less-educated communities showed 21.4 percent adoption versus 17.8 percent in more educated urban areas.

The researchers suggest that AI writing tools may serve as a leg-up for people who may not have as much educational experience. “While the urban-rural digital divide seems to persist,” the researchers write, “our finding that areas with lower educational attainment showed modestly higher LLM adoption rates in consumer complaints suggests these tools may serve as equalizing tools in consumer advocacy.”

Corporate and diplomatic trends in AI writing

According to the researchers, all sectors they analyzed (consumer complaints, corporate communications, job postings) showed similar adoption patterns: sharp increases beginning three to four months after ChatGPT’s November 2022 launch, followed by stabilization in late 2023.

Organization age emerged as the strongest predictor of AI writing usage in the job posting analysis. Companies founded after 2015 showed adoption rates up to three times higher than firms established before 1980, reaching 10–15 percent AI-modified text in certain roles compared to below 5 percent for older organizations. Small companies with fewer employees also incorporated AI more readily than larger organizations.

When examining corporate press releases by sector, science and technology companies integrated AI most extensively, with an adoption rate of 16.8 percent by late 2023. Business and financial news (14–15.6 percent) and people and culture topics (13.6–14.3 percent) showed slightly lower but still significant adoption.

In the international arena, Latin American and Caribbean UN country teams showed the highest adoption among international organizations at approximately 20 percent, while African states, Asia-Pacific states, and Eastern European states demonstrated more moderate increases to 11–14 percent by 2024.

Implications and limitations

In the study, the researchers acknowledge limitations in their analysis due to a focus on English-language content. Also, as we mentioned earlier, they found they could not reliably detect human-edited AI-generated text or text generated by newer models instructed to imitate human writing styles. As a result, the researchers suggest their findings represent a lower bound of actual AI writing tool adoption.

The researchers noted that the plateauing of AI writing adoption in 2024 might reflect either market saturation or increasingly sophisticated LLMs producing text that evades detection methods. They conclude we now live in a world where distinguishing between human and AI writing becomes progressively more difficult, with implications for communications across society.

“The growing reliance on AI-generated content may introduce challenges in communication,” the researchers write. “In sensitive categories, over-reliance on AI could result in messages that fail to address concerns or overall release less credible information externally. Over-reliance on AI could also introduce public mistrust in the authenticity of messages sent by firms.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



The 2025 Genesis GV80 Coupe proves to be a real crowd-pleaser

The 27-inch OLED screen combines the main instrument display and an infotainment screen. It’s a big improvement on what you’ll find in older GV80s (and G80s and GV70s), and the native system is by no means unpleasant to use, although with Android Auto and Apple CarPlay, most drivers will probably just cast their phones. That will require a wire—while there is a Qi wireless charging pad, I was not able to wirelessly cast my iPhone using CarPlay; I had to plug into the USB-C port. (The press specs say it should have wireless CarPlay and Android Auto, for what it’s worth.)

Having a jog dial to interact with the infotainment is a plus in terms of driver distraction, but that’s immediately negated by having to use a touchscreen for the climate controls.

Beyond those gripes, the dark leather and contrast stitching look and feel good, and I appreciate the way the driver’s seat side bolsters hug you a little tighter when you switch into Sport mode or accelerate hard in one of the other modes. Our week with the Genesis GV80 coincided with some below-freezing weather, and I was glad to find that the seat heaters got warm very quickly—within a block of leaving the house, in fact.

I was also grateful for the fact that the center console armrest warms up when you turn on your seat heater—I’m not sure I’ve come across that feature in a car until now.

Tempting the former boss of BMW’s M division, Albert Biermann, away to set up Genesis’ vehicle dynamics department was also a good move. Biermann has been retired for a while now, but he evidently passed on some skills before that happened. The GV80 Coupe is particularly well-damped and won’t bounce you around in your seat over low-speed obstacles like potholes or speed bumps that, in other SUVs, can result in the occupants being shaken from side to side in their seats.



Apple’s M4 MacBook Air refresh may be imminent, with iPads likely to follow

Aside from the M4’s modest performance improvements over the M3, it seems likely that Apple will add a new webcam to match the ones added to the iMac and MacBook Pros. The M4 can also support up to three displays simultaneously—two external, plus a Mac’s internal display. The M3 supported two external displays, but only if the Mac’s built-in screen was turned off.

Gurman also indicates that refreshes for the basic 10.9-inch iPad and the iPad Air are coming soon, though they’re apparently not as imminent as the M4 MacBook Airs. The report doesn’t indicate which processors either of those refreshes will include; the current iPad Air lineup uses the M2, so either the M3 or M4 would be an upgrade. If Apple wants to bring Apple Intelligence to the 10.9-inch iPad, that would limit it to either the A17 Pro (like the 7th-gen iPad mini) or a variant of the Apple A18 (like the iPhone 16e). Apple Intelligence requires a chip with at least 8GB of RAM.

The iPad Air was refreshed a little less than a year ago, but the 10.9-inch iPad is due for an update. Apple gave it a price cut in 2024, but its hardware has been the same since October of 2022.



AI versus the brain and the race for general intelligence


Intelligence, ±artificial

We already have an example of general intelligence, and it doesn’t look like AI.

There’s no question that AI systems have accomplished some impressive feats, mastering games, writing text, and generating convincing images and video. That’s gotten some people talking about the possibility that we’re on the cusp of AGI, or artificial general intelligence. While some of this is marketing fanfare, enough people in the field are taking the idea seriously that it warrants a closer look.

Many arguments come down to the question of how AGI is defined, which people in the field can’t seem to agree upon. This contributes to estimates of its advent that range from “it’s practically here” to “we’ll never achieve it.” Given that range, it’s impossible to provide any sort of informed perspective on how close we are.

But we do have an existing example of AGI without the “A”—the intelligence provided by the animal brain, particularly the human one. And one thing is clear: The systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. That may not be a fatal flaw, or even a flaw at all. It’s entirely possible that there’s more than one way to reach intelligence, depending on how it’s defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.

With all that in mind, let’s look at some of the things the brain does that current AI systems can’t.

Defining AGI might help

Artificial general intelligence hasn’t really been defined. Those who argue that it’s imminent are either vague about what they expect the first AGI systems to be capable of or simply define it as the ability to dramatically exceed human performance at a limited number of tasks. Predictions of AGI’s arrival in the intermediate term tend to focus on AI systems demonstrating specific behaviors that seem human-like. The further one goes out on the timeline, the greater the emphasis on the “G” of AGI and its implication of systems that are far less specialized.

But most of these predictions are coming from people working in companies with a commercial interest in AI. It was notable that none of the researchers we talked to for this article were willing to offer a definition of AGI. They were, however, willing to point out how current systems fall short.

“I think that AGI would be something that is going to be more robust, more stable—not necessarily smarter in general but more coherent in its abilities,” said Ariel Goldstein, a researcher at Hebrew University of Jerusalem. “You’d expect a system that can do X and Y to also be able to do Z and T. Somehow, these systems seem to be more fragmented in a way. To be surprisingly good at one thing and then surprisingly bad at another thing that seems related.”

“I think that’s a big distinction, this idea of generalizability,” echoed neuroscientist Christa Baker of NC State University. “You can learn how to analyze logic in one sphere, but if you come to a new circumstance, it’s not like now you’re an idiot.”

Mariano Schain, a Google engineer who has collaborated with Goldstein, focused on the abilities that underlie this generalizability. He mentioned both long-term and task-specific memory and the ability to deploy skills developed in one task in different contexts. These are limited-to-nonexistent in existing AI systems.

Beyond those specific limits, Baker noted that “there’s long been this very human-centric idea of intelligence that only humans are intelligent.” That’s fallen away within the scientific community as we’ve studied more about animal behavior. But there’s still a bias to privilege human-like behaviors, such as the human-sounding responses generated by large language models.

The fruit flies that Baker studies can integrate multiple types of sensory information, control four sets of limbs, navigate complex environments, satisfy their own energy needs, produce new generations of brains, and more. And they do that all with brains that contain under 150,000 neurons, far fewer than current large language models.

These capabilities are complicated enough that it’s not entirely clear how the brain enables them. (If we knew how, it might be possible to engineer artificial systems with similar capacities.) But we do know a fair bit about how brains operate, and there are some very obvious ways that they differ from the artificial systems we’ve created so far.

Neurons vs. artificial neurons

Most current AI systems, including all large language models, are based on what are called neural networks. These were intentionally designed to mimic how some areas of the brain operate, with large numbers of artificial neurons taking an input, modifying it, and then passing the modified information on to another layer of artificial neurons. Each of these artificial neurons can pass the information on to multiple instances in the next layer, with different weights applied to each connection. In turn, each of the artificial neurons in the next layer can receive input from multiple sources in the previous one.

After passing through enough layers, the final layer is read and transformed into an output, such as the pixels in an image that correspond to a cat.
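As a concrete sketch of the structure described above (a toy example, not any production system), here is a tiny fully connected network in Python. Every artificial neuron in a layer does the same thing: a weighted sum of its inputs, a bias, and a fixed nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of identical artificial neurons.

    Each output neuron computes a weighted sum of every input value,
    adds a bias, and applies the same nonlinearity (ReLU here); the
    neurons differ only in their weights, not in kind.
    """
    return np.maximum(0.0, weights @ inputs + biases)

# A toy stack: 8 inputs -> 16 hidden neurons -> 4 outputs.
w1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
w2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

x = rng.normal(size=8)           # the input (a handful of pixel values, say)
hidden = layer(x, w1, b1)        # every hidden neuron receives every input
output = layer(hidden, w2, b2)   # the final layer is read out as the result
print(output)
```

Training a network like this means adjusting the numbers stored in the weight and bias arrays; the architecture itself stays uniform.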

While that system is modeled on the behavior of some structures within the brain, it’s a very limited approximation. For one, all artificial neurons are functionally equivalent—there’s no specialization. In contrast, real neurons are highly specialized; they use a variety of neurotransmitters and take input from a range of extra-neural sources like hormones. Some specialize in sending inhibitory signals while others activate the neurons they interact with. Different physical structures allow them to make different numbers of connections.

In addition, rather than simply forwarding a single value to the next layer, real neurons communicate through an analog series of activity spikes, sending trains of pulses that vary in timing and intensity. This allows for a degree of non-deterministic noise in communications.
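A leaky integrate-and-fire model, one of the simplest caricatures of a spiking biological neuron (and not something used in the AI systems discussed here), shows the difference: the output is a train of spike times rather than a single number passed forward. The parameters below are illustrative.

```python
import numpy as np

def leaky_integrate_and_fire(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    The membrane potential v leaks toward rest while integrating input
    current; whenever it crosses threshold, the neuron emits a spike and
    resets. The output is a list of spike times, not a scalar activation.
    """
    v, spikes = 0.0, []
    for step, current in enumerate(input_current):
        v += dt * (-v / tau + current)   # leak plus integration
        if v >= v_thresh:
            spikes.append(step * dt)     # record the spike time
            v = v_reset                  # reset after firing
    return spikes

# A noisy, time-varying input produces irregularly timed spikes.
rng = np.random.default_rng(1)
drive = 0.06 + 0.04 * rng.random(200)
print(leaky_integrate_and_fire(drive))
```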

Finally, while organized layers are a feature of a few structures in brains, they’re far from the rule. “What we found is it’s—at least in the fly—much more interconnected,” Baker told Ars. “You can’t really identify this strictly hierarchical network.”

With near-complete connection maps of the fly brain becoming available, she told Ars that researchers are “finding lateral connections or feedback projections, or what we call recurrent loops, where we’ve got neurons that are making a little circle and connectivity patterns. I think those things are probably going to be a lot more widespread than we currently appreciate.”

While we’re only beginning to understand the functional consequences of all this complexity, it’s safe to say that it allows networks composed of actual neurons far more flexibility in how they process information—a flexibility that may underlie how these neurons get re-deployed in a way that these researchers identified as crucial for some form of generalized intelligence.

But the differences between neural networks and the real-world brains they were modeled on go well beyond the functional differences we’ve talked about so far. They extend to significant differences in how these functional units are organized.

The brain isn’t monolithic

The neural networks we’ve generated so far are largely specialized systems meant to handle a single task. Even the most complicated tasks, like the prediction of protein structures, have typically relied on the interaction of only two or three specialized systems. In contrast, the typical brain has a lot of functional units. Some of these operate by sequentially processing a single set of inputs in something resembling a pipeline. But many others can operate in parallel, in some cases without any input activity going on elsewhere in the brain.

To give a sense of what this looks like, let’s think about what’s going on as you read this article. Doing so requires systems that handle motor control, which keep your head and eyes focused on the screen. Part of this system operates via feedback from the neurons that are processing the read material, causing small eye movements that help your eyes move across individual sentences and between lines.

Separately, there’s part of your brain devoted to telling the visual system what not to pay attention to, like the icon showing an ever-growing number of unread emails. Those of us who can read a webpage without even noticing the ads on it presumably have a very well-developed system in place for ignoring things. Reading this article may also mean you’re engaging the systems that handle other senses, getting you to ignore things like the noise of your heating system coming on while remaining alert for things that might signify threats, like an unexplained sound in the next room.

The input generated by the visual system then needs to be processed, from individual character recognition up to the identification of words and sentences, processes that involve systems in areas of the brain involved in both visual processing and language. Again, this is an iterative process, where building meaning from a sentence may require many eye movements to scan back and forth across a sentence, improving reading comprehension—and requiring many of these systems to communicate among themselves.

As meaning gets extracted from a sentence, other parts of the brain integrate it with information obtained in earlier sentences, which tends to engage yet another area of the brain, one that handles a short-term memory system called working memory. Meanwhile, other systems will be searching long-term memory, finding related material that can help the brain place the new information within the context of what it already knows. Still other specialized brain areas are checking for things like whether there’s any emotional content to the material you’re reading.

All of these different areas are engaged without you being consciously aware of the need for them.

In contrast, something like ChatGPT, despite having a lot of artificial neurons, is monolithic: No specialized structures are allocated before training starts. That’s in sharp contrast to a brain. “The brain does not start out as a bag of neurons and then as a baby it needs to make sense of the world and then determine what connections to make,” Baker noted. “There are already a lot of constraints and specifics that are already set up.”

Even in cases where it’s not possible to see any physical distinction between cells specialized for different functions, Baker noted that we can often find differences in what genes are active.

In contrast, pre-planned modularity is relatively new to the AI world. In software development, “This concept of modularity is well established, so we have the whole methodology around it, how to manage it,” Schain said. “It’s really an aspect that is important for maybe achieving AI systems that can then operate similarly to the human brain.” There are a few cases where developers have enforced modularity on systems, but Goldstein said these systems need to be trained with all the modules in place to see any gain in performance.

None of this is saying that a modular system can’t arise within a neural network as a result of its training. But so far, we have very limited evidence that they do. And since we mostly deploy each system for a very limited number of tasks, there’s no reason to think modularity will be valuable.

There is some reason to believe that this modularity is key to the brain’s incredible flexibility. The region that recognizes emotion-evoking content in written text can also recognize it in music and images, for example. But the evidence here is mixed. There are some clear instances where a single brain region handles related tasks, but that’s not consistently the case; Baker noted that, “When you’re talking humans, there are parts of the brain that are dedicated to understanding speech, and there are different areas that are involved in producing speech.”

This sort of re-use would also provide an advantage in terms of learning, since behaviors developed in one context could potentially be deployed in others. But as we’ll see, the differences between brains and AI when it comes to learning are far more comprehensive than that.

The brain is constantly training

Current AIs generally have two states: training and deployment. Training is where the AI learns its behavior; deployment is where that behavior is put to use. This isn’t absolute, as the behavior can be tweaked in response to things learned during deployment, like finding out it recommends eating a rock daily. But for the most part, once the weights among the connections of a neural network are determined through training, they’re retained.

That may be starting to change a bit, Schain said. “There is now maybe a shift in similarity where AI systems are using more and more what they call the test time compute, where at inference time you do much more than before, kind of a parallel to how the human brain operates,” he told Ars. But it’s still the case that neural networks are essentially useless without an extended training period.

In contrast, a brain doesn’t have distinct learning and active states; it’s constantly in both modes. In many cases, the brain learns while doing. Baker described that in terms of learning to take jumpshots: “Once you have made your movement, the ball has left your hand, it’s going to land somewhere. So that visual signal—that comparison of where it landed versus where you wanted it to go—is what we call an error signal. That’s detected by the cerebellum, and its goal is to minimize that error signal. So the next time you do it, the brain is trying to compensate for what you did last time.”

It makes for very different learning curves. An AI is typically not very useful until it has had a substantial amount of training. In contrast, a human can often pick up basic competence in a very short amount of time (and without massive energy use). “Even if you’re put into a situation where you’ve never been before, you can still figure it out,” Baker said. “If you see a new object, you don’t have to be trained on that a thousand times to know how to use it. A lot of the time, [if] you see it one time, you can make predictions.”

As a result, while an AI system with sufficient training may ultimately outperform the human, the human will typically reach a high level of performance faster. And unlike an AI, a human’s performance doesn’t remain static. Incremental improvements and innovative approaches are both still possible. This also allows humans to adjust to changed circumstances more readily. An AI trained on the body of written material up until 2020 might struggle to comprehend teen-speak in 2030; humans could at least potentially adjust to the shifts in language. (Though maybe an AI trained to respond to confusing phrasing with “get off my lawn” would be indistinguishable.)

Finally, since the brain is a flexible learning device, the lessons learned from one skill can be applied to related skills. So the ability to recognize tones and read sheet music can help with the mastery of multiple musical instruments. Chemistry and cooking share overlapping skillsets. And when it comes to schooling, learning how to learn can be used to master a wide range of topics.

In contrast, it’s essentially impossible to use an AI model trained on one topic for much else. The biggest exceptions are large language models, which seem to be able to solve problems on a wide variety of topics if they’re presented as text. But here, there’s still a dependence on sufficient examples of similar problems appearing in the body of text the system was trained on. To give an example, something like ChatGPT can seem to be able to solve math problems, but it’s best at solving things that were discussed in its training materials; giving it something new will generally cause it to stumble.

Déjà vu

For Schain, however, the biggest difference between AI and biology is in terms of memory. For many AIs, “memory” is indistinguishable from the computational resources that allow it to perform a task and was formed during training. For the large language models, it includes both the weights of connections learned then and a narrow “context window” that encompasses any recent exchanges with a single user. In contrast, biological systems have a lifetime of memories to rely on.

“For AI, it’s very basic: It’s like the memory is in the weights [of connections] or in the context. But with a human brain, it’s a much more sophisticated mechanism, still to be uncovered. It’s more distributed. There is the short term and long term, and it has to do a lot with different timescales. Memory for the last second, a minute and a day or a year or years, and they all may be relevant.”

This lifetime of memories can be key to making intelligence general. It helps us recognize the possibilities and limits of drawing analogies between different circumstances or applying things learned in one context versus another. It provides us with insights that let us solve problems that we’ve never confronted before. And, of course, it also ensures that the horrible bit of pop music you were exposed to in your teens remains an earworm well into your 80s.

The differences between how brains and AIs handle memory, however, are very hard to describe. AIs don’t really have distinct memory, while the brain’s use of memory in any task more sophisticated than navigating a maze is generally so poorly understood that it’s difficult to discuss at all. All we can really say is that there are clear differences there.

Facing limits

It’s difficult to think about AI without recognizing the enormous energy and computational resources involved in training one. And in this case, it’s potentially relevant. Brains have evolved under enormous energy constraints and continue to operate using well under the energy that a daily diet can provide. That has forced biology to figure out ways to optimize its resources and get the most out of the resources it does commit to.

In contrast, the story of recent developments in AI is largely one of throwing more resources at them. And plans for the future seem to (so far at least) involve more of this, including larger training data sets and ever more artificial neurons and connections among them. All of this comes at a time when the best current AIs are already using three orders of magnitude more neurons than we’d find in a fly’s brain and have nowhere near the fly’s general capabilities.

It remains possible that there is more than one route to those general capabilities and that some offshoot of today’s AI systems will eventually find a different route. But if it turns out that we have to bring our computerized systems closer to biology to get there, we’ll run into a serious roadblock: We don’t fully understand the biology yet.

“I guess I am not optimistic that any kind of artificial neural network will ever be able to achieve the same plasticity, the same generalizability, the same flexibility that a human brain has,” Baker said. “That’s just because we don’t even know how it gets it; we don’t know how that arises. So how do you build that into a system?”


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Yes, it turns out you can make a Tesla Cybertruck even uglier

There’s a saying about putting lipstick on a pig, but what if it’s not lipstick? That’s the question the universe set out to answer when it aligned in such a way that famed (or perhaps infamous) car customizer Mansory got itself a Tesla Cybertruck. The Mansory Elongation—a name that must have taken ages to think of—offers exterior, interior, and wheel and tire upgrades for the straight-edged stainless steel-wrapped pickup.

Among those who mod cars, there are the tuners, who focus on adding power and (one hopes) performance, and then there are the customizers, who concentrate more on aesthetics. Once upon a time, the entire luxury car industry worked like that—a client would buy a rolling chassis from Bugatti, Rolls-Royce, or Talbot and then have bodywork added by coachbuilders like Gurney Nutting, Touring, or Figoni et Falaschi.

The rear 3/4 view of a modified Cybertruck

At least the rear winglets don’t entirely compromise access to the bed. Credit: Mansory

Modern homologation requirements have mostly put an end to that level of coachbuilding, but for the ultra-wealthy prepared to spend telephone numbers on cars, brands like Rolls-Royce will still occasionally oblige. More common now are those aftermarket shops that spiff up already luxurious cars, changing normal doors for gullwing versions, adding flaring fenders and bulging wheel arches, and plastering the interior in any hue of leather one might imagine.

Mansory has been on the scene since the end of the 1980s and has made a name for itself festooning Rolls-Royces, Lamborghinis, Ferraris, and even Bugattis with extra bits that their original designers surely did not want added. Now it’s the Tesla Cybertruck’s turn.



Details on AMD’s $549 and $599 Radeon RX 9070 GPUs, which aim at Nvidia and 4K

AMD is releasing the first detailed specifications of its next-generation Radeon RX 9070 series GPUs and the RDNA4 graphics architecture today, almost two months after teasing them at CES.

The short version is that these are both upper-midrange graphics cards targeting resolutions of 1440p and 4K and meant to compete mainly with Nvidia’s outgoing and incoming 4070- and 5070-series GeForce GPUs, including the RTX 4070, RTX 5070, RTX 4070 Ti and Ti Super, and the RTX 5070 Ti.

AMD says the RX 9070 will start at $549, the same price as Nvidia’s RTX 5070. The slightly faster 9070 XT starts at $599, $150 less than the RTX 5070 Ti. The cards go on sale March 6, a day after Nvidia’s RTX 5070.

Neither Nvidia nor Intel has managed to keep its GPUs in stores at their announced starting prices so far, though, so how well AMD’s pricing stacks up to Nvidia in the real world may take a few weeks or months to settle out. For its part, AMD says it’s confident that it has enough supply to meet demand, but that’s as specific as the company’s reassurances got.

Specs and speeds: Radeon RX 9070 and 9070 XT

| | RX 9070 XT | RX 9070 | RX 7900 XTX | RX 7900 XT | RX 7900 GRE | RX 7800 XT |
| Compute units (Stream processors) | 64 RDNA4 (4,096) | 56 RDNA4 (3,584) | 96 RDNA3 (6,144) | 84 RDNA3 (5,376) | 80 RDNA3 (5,120) | 60 RDNA3 (3,840) |
| Boost Clock | 2,970 MHz | 2,520 MHz | 2,498 MHz | 2,400 MHz | 2,245 MHz | 2,430 MHz |
| Memory Bus Width | 256-bit | 256-bit | 384-bit | 320-bit | 256-bit | 256-bit |
| Memory Bandwidth | 650 GB/s | 650 GB/s | 960 GB/s | 800 GB/s | 576 GB/s | 624 GB/s |
| Memory size | 16GB GDDR6 | 16GB GDDR6 | 24GB GDDR6 | 20GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 |
| Total board power (TBP) | 304 W | 220 W | 355 W | 315 W | 260 W | 263 W |

As is implied by their similar price tags, the 9070 and 9070 XT have more in common than not. Both are based on the same GPU die—the 9070 has 56 of the chip’s compute units enabled, while the 9070 XT has 64. Both cards come with 16GB of RAM (4GB more than the 5070, the same amount as the 5070 Ti) on a 256-bit memory bus, and both use two 8-pin power connectors by default, though the 9070 XT can use significantly more power than the 9070 (304 W, compared to 220 W).

AMD says that its partners are free to make Radeon cards with the 12VHPWR or 12V-2×6 power connectors on them, though given the apparently ongoing issues with the connector, we’d expect most Radeon GPUs to stick with the known quantity that is the 8-pin connector.

AMD says that the 9070 series is made using a 4 nm TSMC manufacturing process and that the chips are monolithic rather than being split up into chiplets as some RX 7000-series cards were. AMD’s commitment to its memory controller chiplets was always hit or miss with the 7000-series—the high-end cards tended to use them, while the lower-end GPUs were usually monolithic—so it’s not clear one way or the other whether this means AMD is giving up on chiplet-based GPUs altogether or if it’s just not using them this time around.



AMD’s FSR 4 upscaling is exclusive to 90-series Radeon GPUs, won’t work on other cards

AMD’s new Radeon RX 90-series cards and the RDNA4 architecture make their official debut on March 5, and a new version of AMD’s FidelityFX Super Resolution (FSR) upscaling technology is coming along with them.

FSR and Nvidia’s Deep Learning Super Sampling (DLSS) upscalers have the same goal: to take a lower-resolution image rendered by your graphics card, bump up the resolution, and fill in the gaps between the natively rendered pixels to make an image that looks close to natively rendered without making the GPU do all that rendering work. These upscalers can make errors, and they won’t always look quite as good as a native-resolution image. But they’re both nice alternatives to living with a blurry, non-native-resolution picture on an LCD or OLED display.
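FSR 4’s machine-learning upscaler is proprietary, but the baseline problem both FSR and DLSS attack can be illustrated with plain bilinear interpolation, the naive gap-filling that produces the soft image these techniques try to beat. This sketch is purely illustrative; it is not AMD’s or Nvidia’s algorithm.

```python
import numpy as np

def bilinear_upscale(image, scale):
    """Naive bilinear upscaling of a 2D (grayscale) image.

    New pixels are filled by blending the four nearest source pixels.
    Upscalers like FSR and DLSS replace this simple gap-filling step with
    smarter reconstruction to recover detail that blending smears away.
    """
    h, w = image.shape[:2]
    out_h, out_w = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = (1 - wx) * image[np.ix_(y0, x0)] + wx * image[np.ix_(y0, x1)]
    bottom = (1 - wx) * image[np.ix_(y1, x0)] + wx * image[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bottom

# A tiny array standing in for a low-resolution frame, doubled in size.
low_res = np.random.rand(4, 4)
print(bilinear_upscale(low_res, 2).shape)   # (8, 8)
```

FSR 4 replaces that simple gap-filling with the hardware-backed machine-learning reconstruction described below.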

FSR and DLSS are especially useful for older or cheaper 1080p- or 1440p-capable GPUs that are connected to a 4K monitor, where you’d otherwise have to decide between a sharp 4K image and a playable frame rate; they’re also useful for hitting higher frame rates at lower resolutions, which can be handy for high-refresh-rate gaming monitors.

But unlike past versions of FSR, FSR 4 is upscaling images using hardware-backed machine-learning algorithms, hardware newly added to RDNA4 and the RX 90-series graphics cards. This mirrors Nvidia’s strategy with DLSS, which has always leveraged the tensor cores found in RTX GPUs to run machine-learning models to achieve superior image quality for upscaled and AI-generated frames. If you don’t have an RDNA4 GPU, you can’t use FSR 4.



Federal firings could wreak havoc on Great Lakes fishery

Her performance reviews for the last year had been glowing, so the letter made no sense. “It’s not a real explanation,” she said.

The USFWS layoffs will not affect the sea lamprey control program in Canada, McClinchey said. “The Canadian government has assured us that the money from Canada will continue to be there and we’re on track to deliver a full program in Canadian waters,” he said. “That’s great, but this program works because it’s border blind.”

In other words: Cuts to lamprey control in US waters are a threat to fish and fishermen everywhere on the Great Lakes.

Just a week ago, the Great Lakes Fishery Commission faced a more dire staffing situation, as the USFWS informed directors they’d also be unable to hire seasonal workers to spread lampricide come April. Within a few days, that hiring freeze was reversed, said McClinchey.

This reversal gives him a bit of hope. “That at least tells us no one is rooting for the lamprey,” he said.

McClinchey is currently in DC for appropriation season, presenting the commission’s work to members of Congress and defending the agency’s budget. It’s an annual trip, but this year he’s also advocating for the reinstatement of laid-off lamprey control employees.

He is optimistic. “It seems clear to me that it’s important we preserve this program, and so far everyone we’ve encountered thinks that way and are working to that end,” he said.

Cutting back the program isn’t really on the table for the commission. Even minor cuts to scope would be devastating for the fishery, he said.

Even the former USFWS employee from Marquette is remaining hopeful. “I still think that they’re going to scramble to make it happen,” she said. “Because it’s not really an option to just stop treating for a whole season.”

This story originally appeared on Inside Climate News.



Mars’ polar ice cap is slowly pushing its north pole inward

The orbiters that carried the radar hardware, along with one or two others, have been orbiting long enough that any major changes in Mars’ gravity caused by ice accumulation or crustal displacement would have shown up in their orbital behavior. The orbital changes they do see indicate “that the increase in the gravitational potential associated with long-term ice accumulation is higher than the decrease in gravitational potential from downward deflection.” They calculate that the deformation has to be less than 0.13 millimeters per year to be consistent with the gravitational signal.

Finally, the model had to have realistic conditions at the polar ice cap, with a density consistent with a mixture of ice and dust.

Out of those 84 models, only three were consistent with all of these constraints. All three had a very viscous Martian interior, consistent with a relatively cold interior. That’s not a surprise, given what we’ve already inferred about Mars’ history. But it also suggests that most of the radioactive elements that provide heat to the red planet are in the crust, rather than deeper in the interior. That’s something we might have been able to check, had InSight’s temperature measurement experiment deployed correctly. But as it is, we’ll have to wait until some unidentified future mission to get a picture of Mars’ heat dynamics.

In any case, the models also suggest that Mars’ polar ice cap is less than 10 million years old, consistent with the orbitally driven climate models.

In a lot of ways, the new information is an update of earlier attempts to model the Martian interior, given a few more years of orbital data and the information gained from the InSight lander, which also determined the thickness of Mars’ crust and size of its core. But it’s also a good way of understanding how scientists can take bits and pieces of information from seemingly unrelated sources and build them into a coherent picture.

Nature, 2025. DOI: 10.1038/s41586-024-08565-9  (About DOIs).
