Author name: Kelly Newman


Fermented meat with a side of maggots: A new look at the Neanderthal diet

Traditionally, Indigenous peoples almost universally viewed thoroughly putrefied, maggot-infested animal foods as highly desirable fare, not starvation rations. In fact, many such peoples routinely and often intentionally allowed animal foods to decompose to the point where they were crawling with maggots, in some cases even beginning to liquefy.

This rotting food would inevitably emit a stench so overpowering that early European explorers, fur trappers, and missionaries were sickened by it. Yet Indigenous peoples viewed such foods as good to eat, even a delicacy. When asked how they could tolerate the nauseating stench, they simply responded, “We don’t eat the smell.”

Neanderthals’ cultural practices, similar to those of Indigenous peoples, might be the answer to the mystery of their high δ¹⁵N values. Ancient hominins were butchering, storing, preserving, cooking, and cultivating a variety of items. All these practices enriched their paleo menu with foods in forms that nonhominin carnivores do not consume. Research shows that δ¹⁵N values are higher for cooked foods, putrid muscle tissue from terrestrial and aquatic species, and, with our study, for fly larvae feeding on decaying tissue.
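For readers unfamiliar with the notation, δ¹⁵N expresses how enriched a sample is in the heavy nitrogen isotope ¹⁵N relative to a reference standard (atmospheric nitrogen, by convention). The definition below is standard isotope chemistry rather than anything specific to this study:

\[
\delta^{15}\mathrm{N} = \left(\frac{\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\mathrm{sample}}}{\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\mathrm{standard}}} - 1\right) \times 1000\ \text{‰}
\]

Higher values mean relatively more ¹⁵N, and because the heavy isotope accumulates with each step up the food chain, very high δ¹⁵N values are usually read as the signature of a meat-heavy, hypercarnivore-like diet, which is exactly the interpretation the putrefaction and maggot results complicate.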

The high δ¹⁵N values of maggots associated with putrid animal foods help explain how Neanderthals could have included plenty of other nutritious foods beyond only meat while still registering δ¹⁵N values we’re used to seeing in hypercarnivores.

We suspect the high δ¹⁵N values seen in Neanderthals reflect routine consumption of fatty animal tissues and fermented stomach contents, much of it in a semi-putrid or putrid state, together with the inevitable bonus of both living and dead ¹⁵N-enriched maggots.

What still isn’t known

Fly larvae are a fat-rich, nutrient-dense, ubiquitous, and easily procured insect resource, and both Neanderthals and early Homo sapiens, much like recent foragers, would have benefited from taking full advantage of them. But we cannot say that maggots alone explain why Neanderthals have such high δ¹⁵N values in their remains.

Several questions about this ancient diet remain unanswered. How many maggots would someone need to consume to account for an increase in δ¹⁵N values above the expected values due to meat eating alone? How do the nutritional benefits of consuming maggots change the longer a food item is stored? More experimental studies on changes in δ¹⁵N values of foods processed, stored, and cooked following Indigenous traditional practices can help us better understand the dietary practices of our ancient relatives.

Melanie Beasley is assistant professor of anthropology at Purdue University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.



20 years after Katrina, New Orleans remembers


20 years ago, Ivor Van Heerden warned of impending disaster in New Orleans. Are his warnings still going unheeded?

A man is stranded on a rooftop in the aftermath of Hurricane Katrina in 2005. Credit: Wickes Helmboldt

Next month marks the 20th anniversary of one of the most devastating natural disasters in US history: Hurricane Katrina, a Category 3 storm that made landfall on August 29, 2005. The storm itself was bad enough, but the resulting surge of water caused havoc for New Orleans in particular when the city’s protective levees failed, flooding much of New Orleans and killing 1,392 people. National Geographic is marking the occasion with a new documentary series: Hurricane Katrina: Race Against Time.

The five-part documentary is directed by Oscar nominee Traci A. Curry (Attica) and co-produced by Ryan Coogler’s Proximity Media, in conjunction with Lightbox. The intent was to go beyond the headlines of yesteryear and re-examine the many systemic failures that occurred while also revealing “stories of survival, heroism, and resilience,” Proximity’s executive producers said in a statement. “It’s a vital historical record and a call to witness, remember and reckon with the truth of Hurricane Katrina’s legacy.”

Race Against Time doesn’t just rehash the well-worn narrative of the disaster; it centers the voices of the people who were there on the ground: residents, first responders, officials, and so forth. Among those interviewed for the documentary is geologist/marine scientist Ivor Van Heerden, author of The Storm: What Went Wrong and Why During Hurricane Katrina: the Inside Story from One Louisiana Scientist (2006).

Around 1998, Van Heerden set up Louisiana State University’s (LSU) fledgling Hurricane Center with his colleague Marc Levitan, developing the first computer modeling efforts for local storm surges. They had a supercomputer for the modeling and LiDAR data for accurate digital elevation models, and since there was no way to share data among the five major parishes, they created a networked geographical information system (GIS) to link them. Part of Van Heerden’s job involved driving all over New Orleans to inspect the levees, and he didn’t like what he saw: levees with big bows that were sinking under their own weight, for example, and others with large cracks.

Van Heerden also participated in the 2004 Hurricane Pam mock scenario, designed as a test run for hurricane planning for the 13 parishes of southeastern Louisiana, including New Orleans. It was essentially a worst-case scenario for the conditions of Hurricane Betsy, assuming that the whole city would be flooded. “We really had hoped that the exercise would wake everybody up, but quite honestly we were laughed at a few times during the exercise,” Van Heerden told Ars. He recalled telling one woman from FEMA that they should be thinking about using tents to house evacuees: “She said, ‘Americans don’t live in tents.’”

Stormy weather

Mayor Ray Nagin orders a mandatory evacuation of New Orleans. Credit: ABC News Videosource

The tens of thousands of stranded New Orleans residents in the devastating aftermath of Katrina could have used those tents. Van Heerden still vividly recalls his frustration over the catastrophic failures that occurred on so many levels. “We knew the levees had failed, we knew that there had been catastrophic structural failure, but nobody wanted to hear it initially,” he said. He and his team were out in the field in the immediate aftermath, measuring water levels and sampling the water for pathogens and toxic chemicals. Naturally they came across people in need of rescue and were able to radio locations to the Louisiana State University police.

“An FBI agent told me, ‘If you find any bodies, tie them with a piece of string to something so they don’t float away and give us the lats and longs,’” Van Heerden recalled. The memories haunt him still. Some of the bodies were drowned children, which he found particularly devastating since he had a young daughter of his own at the time.

How did it all go so wrong? After 1965’s Hurricane Betsy flooded most of New Orleans, the federal government started a levee building program with the US Army Corps of Engineers (USACE) in charge. “Right at the beginning, the Corps used very old science in terms of determining how high to make the levees,” said Van Heerden. “They had access to other very good data, but they chose not to use it for some reason. So they made the levees way too low.”

“They also ignored some of their own geotechnical science when designing the levees,” he continued. “Some were built in sand with very shallow footings, so the water just went underneath and blew out the levee. Some were built on piles of earth, again with very shallow footings, and they just fell over. The 17th Street Canal, the whole levee structure actually slid 200 feet.”

There had also been significant alterations to the local landscape since Hurricane Betsy. In the past, the wetlands, especially the cypress tree swamps, provided some protection from storm surges. In 1992, for example, the Category 5 Hurricane Andrew made landfall on the Atchafalaya Delta, where healthy wetlands reduced its energy by 50 percent between the coast and Morgan City, per Van Heerden. But other wetlands in the region changed drastically with the dredging of a canal called the Mississippi River-Gulf Outlet, running from Baton Rouge to the Gulf of Mexico.

“It was an open conduit for surge to get into New Orleans,” said Van Heerden. “The saltwater got into the wetlands and destroyed it, especially the cypress trees. This canal had opened up, in some places, to five times its width, allowing waves to build on the surface. The earthen levees weren’t armored in any way, so they just collapsed. They blew apart. That’s why parts of St. Bernard saw a wave of water 10 feet high.”

Just trying to survive

Stranded New Orleans residents gather in a shelter during Hurricane Katrina. Credit: KTVT-TV

Add in drastic cuts to FEMA under then-President George W. Bush—who inherited “a very functional, very well-organized” version of the agency from his predecessor, Bill Clinton, per Van Heerden—and the stage was set for a disaster like Katrina’s harrowing aftermath. It didn’t help that New Orleans Mayor Ray Nagin delayed issuing a mandatory evacuation order until some 24 hours before the storm hit, making it much more difficult for residents to comply in a timely fashion.

There were also delays in conveying the vital information that the levees had failed. “We now know that the USACE had a guy in a Coast Guard helicopter who actually witnessed the London Avenue Canal failure, at 9:06 am on Day One,” said Van Heerden. “That guy went to Baton Rouge and he didn’t tell a soul other than the Corps. So the Corps knew very early what was going on and they did nothing about it. They had a big megaphone and millions of dollars in public relations and kept saying it was an act of God. It took until the third week of September for us to finally get the media to realize that this was a catastrophic failure of the levees.”

The USACE has never officially apologized for what happened, per Van Heerden. “Not one of them lost their job after Katrina,” he said. But LSU fired Van Heerden in 2009, sparking protest from faculty and students. The university gave no reason for his termination, but it was widely speculated at the time that Van Heerden’s outspoken criticism of the USACE was a factor, with LSU fearing it might jeopardize funding. Van Heerden sued, and the university settled. But he hasn’t worked in academia since and now consults with various nonprofit organizations on flooding and storm surge impacts.

The widespread reports of looting and civil war further exacerbated the situation as survivors swarmed the Superdome and the nearby convention center. The city had planned for food and water for 12,000 people housed at the Superdome for 48 hours. The failure of the levees swelled that number to 30,000 people stranded for several days, waiting in vain for the promised cavalry to arrive.

Van Heerden acknowledges the looting but insists most of that was simply due to people trying to survive in the absence of any other aid. “How did they get water on the interstate?” said Van Heerden. “They went to a water company, broke in and hot-wired a truck, then went around and gave water to everyone.”

As for the widespread belief outside the city that there was unchecked violence and a brewing civil war, “That doesn’t happen in a catastrophe,” he said. The rumors were driven by reports of shots being fired, but “there are a lot of hunters in Louisiana, and the hunter’s SOS is to fire three shots in rapid succession,” he said. “One way to say ‘I’m here!’ is to fire a gun. But everybody bought into that civil war nonsense.”

“Another ticking time bomb”

LSU Hurricane Center co-founder Ivor Van Heerden working at his desk in 2005. Credit: Australian Broadcasting Corporation

The levees have since been rebuilt, and Van Heerden acknowledges that some of the repairs are robust. “They used more concrete, they put in protection pads and deeper footings,” he said. “But they didn’t take into account—and they admitted this a few years ago—subsidence in Louisiana, which is two to two-and-a-half feet every century. And they didn’t take into account global climate change and the associated rising sea levels. Within the next 70 years, sea level in Louisiana is going to rise four feet over millions of square miles. If you’ve got a levee with a [protective] marsh in front of it, before too long that marsh is no longer going to exist, so the water is going to move further and further in-shore.”

Then there’s the fact that hurricanes these days are now bigger in diameter than they were 30 years ago, thanks to the extra heat. “They get up to a Category 5 a lot quicker,” said Van Heerden. “The frequency also seems to be creeping up. It’s now four times as likely you will experience hurricane-force winds.” Van Heerden has run storm surge models assuming a 3-foot rise in sea level. “What we saw was the levees wouldn’t be high enough in New Orleans,” he said. “I hate to say it, but it looks like another ticking time bomb. Science is a quest for the truth. You ignore the science at your folly.”

Assuming there was sufficient public and political will, how should the US be preparing for future tropical storms? “In many areas we need to retreat,” said Van Heerden. “We need to get the houses and buildings out and rebuild the natural vegetation, rebuild the wetlands. On the Gulf Coast, sea level is really going to rise, and we need to rethink our infrastructure. This belief that, ‘Oh, we’re going to put up a big wall’—in the long run it’s not going to work. The devastation from tropical storms is going to spread further inland through very rapid downpours, and that’s something we’re going to have to plan mitigations for. But I just don’t see any movement in that direction.”

Perhaps documentaries like Race Against Time can help turn the tide; Van Heerden certainly hopes so. He also hopes the documentary can correct several public misconceptions of what happened—particularly the tendency to blame the New Orleans residents trying to survive in appalling conditions, rather than the government that failed them.

“I think this is a very good documentary in showing the plight of the people and what they suffered, which was absolutely horrendous,” said Van Heerden. “I hope people watching will realize that yes, this is a piece of our history, but sometimes the past is the key to the present. And ask themselves, ‘Is this a foretaste of what’s to come?'”

Hurricane Katrina: Race Against Time premieres on July 27, 2025, on National Geographic. It will be available for streaming starting July 28, 2025, on Disney+ and Hulu.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



This aerogel and some sun could make saltwater drinkable

Earth’s surface is about 71 percent water. An overwhelming 97 percent of that water is found in the oceans, leaving only 3 percent as freshwater—and much of that is locked up in glaciers. Just 0.3 percent of that freshwater sits on the surface in lakes, swamps, springs, and our main sources of drinking water, rivers and streams.

Despite our planet’s famously blue appearance from space, thirsty aliens would be disappointed. Drinkable water is actually pretty scarce.

As if that doesn’t already sound unsettling, what little water we have is also threatened by climate change, urbanization, pollution, and a global population that continues to expand. Over 2 billion people live in regions where their only source of drinking water is contaminated. Pathogenic microbes in the water can cause cholera, diarrhea, dysentery, polio, and typhoid, which could be fatal in areas without access to vaccines or medical treatment.

Desalination of seawater is a possible solution, and one approach involves porous materials absorbing water that evaporates when heated by solar energy. The problem with most existing solar-powered evaporators is that they are difficult to scale up for larger populations. Performance decreases with size, because less water vapor can escape from materials with tiny pores and thick boundaries—but there is a way to overcome this.

Feeling salty

Researcher Xi Shen of the Hong Kong Polytechnic University wanted to figure out a way to improve these types of systems. He and his team have now created an aerogel that is far more efficient at producing fresh water than previous desalination methods.

“The key factors determining the evaporation performance of porous evaporators include heat localization, water transport, and vapor transport,” Shen said in a study recently published in ACS Energy Letters. “Significant advancements have been made in the structural design of evaporators to realize highly efficient thermal localization and water transport.”

Solar radiation is the only energy used to evaporate the water, which is why many attempts have been made to develop what are called photothermal materials. When sunlight hits these types of materials, they absorb light and convert it into heat energy, which can be used to speed up evaporation. Photothermal materials can be made of substances including polymers, metals, alloys, ceramics, or cements. Hydrogels have been used to successfully decontaminate and desalinate water before, but they are polymers designed to retain water, which negatively affects efficiency and stability, as opposed to aerogels, which are made of polymers that hold air. This is why Shen and his team decided to create a photothermal aerogel.



Hackers—hope to defect to Russia? Don’t Google “defecting to Russia.”

The next day, December 7, he… bought himself a new laptop, installed a VPN, and hopped right back online. Wagenius evaded scrutiny only until December 12, when the new laptop was also seized under orders from a military magistrate judge.

On December 20, Wagenius was arrested and charged with several federal crimes, and the feds have since resisted his efforts to get free on bail while his case progressed. (Due, in part, to the laptop episode mentioned above.)

Last week, Wagenius pleaded guilty to several of the charges against him. The documents in his case reveal someone with real technical skills but without a more general sense of opsec. The hacked call logs, for instance, were found right on Wagenius’ devices. But it was all the ways he kept saying explicitly what he was up to that really stood out to me.

For instance, there were numerous explicit Telegram chats with conspirators, along with public posts on boards like BreachForums and XSS. (In related news, the alleged admin of XSS was arrested yesterday in Ukraine.) In one representative chat with a “potential co-conspirator,” for instance, Wagenius outlined his various schemes in October 2024:

whats funny is that if i ever get found out

i cant get instantly arrested

because military law

which gives me time to go AWOL

(Narrator voice: “Military law did not give him time to go AWOL.”)

Then there were the emails in November 2024, all of them sent to “an e-mail address [Wagenius] believed belonged to Country-1’s military intelligence service in an attempt to sell stolen information.” These were all traced back to Wagenius and later used as evidence that he should not be released on bail.

Finally, there were his online searches. The government includes “just a subset” of these from 2024, including:

  • “can hacking be treason”
  • “where can i defect the u.s government military which country will not hand me over”
  • “U.S. military personnel defecting to Russia”
  • “Embassy of Russia – Washington, D.C.”

None of this shows impressive data/device security or even much forethought; the only real plan seems to have been: “Don’t get caught.” Once Wagenius’ devices were seized and searched, the jig was up.

Allison Nixon is chief research officer at the investigative firm Unit 221B. She helped expose Wagenius’ identity, and in an article last year for Krebs on Security, she shared a message to young men like Wagenius who “think they can’t be found and arrested.”

“You need to stop doing stupid shit and get a lawyer,” she said.



Julian LeFay, “the father of The Elder Scrolls,” has died at 59

Julian LeFay, the man often credited as “the father of The Elder Scrolls,” has died at the age of 59, his creative partners announced this week.

“It is with profound sadness and heavy hearts that we inform our community of the passing of Julian LeFay, our beloved Technical Director and co-founder of Once Lost Games,” his colleagues wrote in a Bluesky post.

LeFay spent most of the 1990s at Bethesda Softworks, culminating in his work on The Elder Scrolls series into the late ’90s.

His career didn’t start with The Elder Scrolls, though. Beginning in 1988, LeFay made music for the Amiga hack-and-slash game Sword of Sodan as well as the NES game Where’s Waldo, and he did design and programming work on titles like Wayne Gretzky Hockey, the DOS version of Dragon’s Lair, and two DOS games based on the Terminator movie franchise.

In the early ’90s, he joined fellow Bethesda developers Ted Peterson and Vijay Lakshman on an Ultima Underworld-inspired RPG that would come to be called The Elder Scrolls: Arena. Though famed creative director Todd Howard has helmed the franchise since its third entry, The Elder Scrolls: Arena and The Elder Scrolls II: Daggerfall were chiefly spearheaded by LeFay. One of the gods of The Elder Scrolls universe was named after LeFay, and the setting was inspired by the literature and tabletop role-playing games LeFay and Peterson enjoyed.



After $380M hack, Clorox sues its “service desk” vendor for simply giving out passwords

Hacking is hard. Well, sometimes.

Other times, you just call up a company’s IT service desk and pretend to be an employee who needs a password reset, an Okta multifactor authentication reset, and a Microsoft multifactor authentication reset… and it’s done. Without even verifying your identity.

So you use that information to log in to the target network and discover a more trusted user who works in IT security. You call the IT service desk back, acting like you are now this second person, and you request the same thing: a password reset, an Okta multifactor authentication reset, and a Microsoft multifactor authentication reset. Again, the desk provides it, no identity verification needed.

So you log in to the network with these new credentials and set about planting ransomware or exfiltrating data in the target network, eventually doing an estimated $380 million in damage. Easy, right?

According to The Clorox Company, which makes everything from lip balm to cat litter to charcoal to bleach, this is exactly what happened to it in 2023. But Clorox says that the “debilitating” breach was not its fault. It had outsourced the “service desk” part of its IT security operations to the massive services company Cognizant—and Clorox says that Cognizant failed to follow even the most basic agreed-upon procedures for running the service desk.

In the words of a new Clorox lawsuit, Cognizant’s behavior was “all a devastating lie,” it “failed to show even scant care,” and it was “aware that its employees were not adequately trained.”

“Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques,” says the lawsuit, using italics to indicate outrage emphasis. “The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox’s network, and Cognizant handed the credentials right over. Cognizant is on tape handing over the keys to Clorox’s corporate network to the cybercriminal—no authentication questions asked.”

I can has password reset?

From 2013 through 2023, Cognizant had helped “guard the proverbial front door” to Clorox’s network by running a “service desk” that handled common access requests around passwords, VPNs, and multifactor authentication (MFA) such as SMS codes.



What exactly is Golden Dome? This Space Force general owes Trump an answer.


“Basically, I’ve been given 60 days to come up with the objective architecture.”

Gen. Michael Guetlein, overseeing the development of the Golden Dome missile defense system, looks on as President Donald Trump speaks in the Oval Office of the White House on May 20, 2025, in Washington, DC. Credit: Jim Watson/AFP via Getty Images

The newly installed head of the Pentagon’s Golden Dome missile defense shield, a monumental undertaking projected to cost $175 billion over the next three years, knows the clock is ticking to show President Donald Trump some results before the end of his term in the White House.

“We are going to try to craft a schedule to have incremental demonstrations every six months because we are on a short timeline,” said Gen. Michael Guetlein, who was confirmed by the Senate last week to become the military’s Golden Dome czar.

Speaking on Tuesday, his second day on the job leading the Golden Dome initiative, Guetlein said his team will “move out with a sense of urgency and move out with incremental wins” as the military races to meet Trump’s timeline.

Guetlein discussed his new job with retired Gen. John “Jay” Raymond, the first chief of the Space Force, at an event in Washington, DC, hosted by the Space Foundation.

Analysts and retired military officials doubt the Pentagon can achieve all of Trump’s Golden Dome promises by the end of 2028. It’s not yet clear what the Pentagon can finish in three years, but Guetlein said Thursday his team will deliver “a capability” on that schedule. “We’ve got to exploit anything and everything we’ve possibly got,” he said, echoing a tenet of Space Force policy to “exploit what we have, buy what we can, and build what we must.”

This means the Space Force will lean heavily on commercial companies, research labs, academia, and, in the case of Canada, international partners to build the Golden Dome.

“Golden Dome for America requires a whole-of-nation response to deter and, if necessary, to defeat attacks against the United States,” the Defense Department said in a statement Tuesday. “We have the technological foundation, national talent, and decisive leadership to advance our nation’s defenses. We are proud to stand behind Gen. Mike Guetlein as he takes the helm of this national imperative.”

President Trump signed an executive order in January calling for the development of a layered missile defense shield to protect the US homeland. He initially called the project the Iron Dome for America, named for Israel’s Iron Dome missile defense system. But Israel’s Iron Dome, which has proven effective against missile attacks from Iran and its proxies in the Middle East, only has to defend an area the size of New Jersey. The Pentagon’s system, now named Golden Dome, will ostensibly cover the entire United States.

Lay of the land

Advocates for the Golden Dome point to recent events to justify the program. These include Russia’s first use of an intermediate-range ballistic missile against Ukraine last year, and Ukraine’s successful drone attack on a Russian airbase last month. Waves of Iranian missile and drone attacks on Israel have tested the mettle of that country’s Iron Dome.

In the January 27 executive order, the White House said the military’s plan must defend against many types of aerial threats, including ballistic, hypersonic, and advanced cruise missiles, plus “other next-generation aerial attacks,” a category that appears to include drones and shorter-range unguided missiles.

This will require a network of sensors on the ground and in space, including heat-seeking sensors and radars to track incoming aerial threats, and interceptors based on the ground, at sea, and in space capable of destroying missiles at any point in flight—boost phase, midcourse, and during final approach to a target.

This illustration shows how the Missile Defense Agency’s HBTSS satellites can track hypersonic missiles as they glide and maneuver through the atmosphere, evading detection by conventional missile-tracking spacecraft, such as the Space Force’s DSP and SBIRS satellites. Credit: Northrop Grumman

The good news for backers of the Golden Dome program is that the Pentagon and commercial industry were developing most of these elements before Trump’s executive order. The Space Development Agency (SDA) launched a batch of prototype missile-tracking and data-relay satellites in 2023, pathfinders for a constellation of hundreds of spacecraft in low-Earth orbit that will begin launching later this year.

In some cases, the military has already fielded Golden Dome components in combat. The Army has operated the Patriot missile system since the 1980s and the Terminal High Altitude Area Defense (THAAD) interceptors for more than 15 years to defend against lower-level threats like small rockets, aircraft, and drones. The Navy’s Aegis Ballistic Missile Defense System uses sea-launched interceptors to target longer-range missiles in space.

The Missile Defense Agency manages the Ground-based Midcourse Defense (GMD) program, which consists of operational silo-launched missile interceptors based in Alaska and California that could be used to defend against a limited missile strike from a rogue state like North Korea.

GMD has cost approximately $70 billion to date and has worked a little more than half the time the military has tested it against a missile target. On the plus side, GMD has achieved four straight successful intercepts in tests since 2014. But despite its immense cost, GMD is antiquated and would not be effective against a large volley of missiles coming from another nuclear superpower, like China.

Golden Dome will bring all of these systems together, and add more to the mix in order to “double down on the protection of the homeland and protect our American citizens,” Guetlein said.

What’s next?

Guetlein identified several short-term priorities for what is officially called the “Office of Golden Dome for America.” One of them is to begin bringing together the military’s existing missile detection and tracking assets, ground- and sea-based interceptors, and the communication pathways, or “comm pipes,” to connect all the pieces in a sophisticated command-and-control network.

“That includes the sensors, that includes the shooters, as well as the comm pipes,” Guetlein said. “How do we bring all that to bear simultaneously in protection of the homeland, while utilizing the capabilities that are already there and not trying to re-create them?”

The Pentagon said in a statement Tuesday that Guetlein’s office will devise an “objective architecture” for the missile defense shield and “socialize” it by late September. This presumably means sharing some information about the architecture with Congress and the public. So far, Space Force officials have hesitated to provide any specifics, at least in public statements and congressional hearings. They often prefer to describe Golden Dome as a “system of systems” instead of something entirely new.

“Basically, I’ve been given 60 days to come up with the objective architecture. I owe that back to the Deputy Secretary of Defense in 60 days,” Guetlein said. “So, in 60 days, I’ll be able to talk in depth about, ‘Hey, this is our vision for what we want to get after for Golden Dome.'”

Although the major pieces of a layered anti-missile system like Golden Dome may appear obvious to anyone with a casual familiarity with missile defense and space—we just named a few of these elements above—the Trump administration has not published any document describing what the Pentagon might actually achieve in the next three years.

Despite the lack of detail, Congress voted to approve $25 billion as a down payment for Golden Dome in the Trump-backed “One Big Beautiful Bill” signed into law July 4. The bulk of the Golden Dome-related budget is earmarked for procurement of more Patriot and THAAD missile batteries, an increase in funding for SDA’s missile-tracking satellites, ballistic missile defense command-and-control networks, and development of “long-range kill chains” for combat targeting.

Two of the US Army’s THAAD missile batteries are seen deployed in Israel in this 2019 photo. Credit: US Army/Staff Sgt. Cory Payne

So, most of the funding allocated to Golden Dome over the next year will go toward bolstering programs already in the Pentagon’s portfolio. But the military will tie them all together with an integrated command-and-control system that can sense an adversarial missile launch, plot its trajectory, and then generate a targeting solution and send it to an interceptor on the ground or in space to eliminate the threat.

Eventually, military leaders want satellites to handle all of these tasks autonomously in space and do it fast enough for US or allied forces to respond to an imminent threat.

“We know how to get data,” a retired senior military official recently told Ars. “The question is, how do you fuse that data in real time with the characteristics of a fire control system, which means real-time feedback of all this data, filtering that data, filtering out sensors that aren’t helping as much as other ones, and then using that to actually command and control against a large-scale attack of diverse threats.

“I feel like those are still two different things,” said the official, who spoke on background with Ars. “It’s one thing to have all the data and be able to process it. It’s another thing to be able to put it into an active, real-time fire control system.”

Trump introduced Guetlein, the Space Force’s former vice chief of space operations, as his nominee for director of the Golden Dome program in an Oval Office event on May 20. At the time, Trump announced the government had “officially selected an architecture” for Golden Dome. That appears to still be the work in front of Guetlein and his team, which is set to grow with new hiring but will remain “small and flat,” the general said Tuesday.

Guetlein has a compelling résumé to lead Golden Dome. Before becoming the second-ranking officer in the Space Force, he served as head of Space Systems Command, which is responsible for most of the service’s acquisition and procurement activities. His prior assignments included stints as deputy director of the National Reconnaissance Office, program executive at the Missile Defense Agency, program manager for the military’s missile warning satellites, and corporate fellow at SpaceX.

Weapons in space

Guetlein identified command and control and the development of space-based interceptors as two of the most pressing technical challenges for Golden Dome. He believes the command-and-control problem can be “overcome in pretty short order.”

“I think the real technical challenge will be building the space-based interceptor,” Guetlein said. “That technology exists. I believe we have proven every element of the physics that we can make it work. What we have not proven is, first, can I do it economically, and then second, can I do it at scale? Can I build enough satellites to get after the threat? Can I expand the industrial base fast enough to build those satellites? Do I have enough raw materials, etc.?”

This is the challenge that ultimately killed the Strategic Defense Initiative (SDI) or “Star Wars” program proposed by former President Ronald Reagan in the 1980s as a way to counter the threat of a nuclear missile attack from the Soviet Union. The first concept for SDI called for 10,000 interceptors to be launched into Earth orbit. This was pared down to 4,600, then finally to fewer than 1,000 before the cancellation of the space-based element in 1993.

Thirty years ago, the United States lacked the technology and industrial capacity to build and launch so many satellites. It’s a different story today. SpaceX has launched more than 9,000 Starlink communications satellites in six years, and Amazon recently kicked off the deployment of more than 3,200 Internet satellites of its own.

Space-based interceptors are a key tenet of Trump’s executive order on Golden Dome. Specifically, the order calls for space-based interceptors capable of striking a ballistic missile during its boost phase shortly after launch. These interceptors would essentially be small satellites positioned in low-Earth orbit, likely a few hundred miles above the planet, circling the world every 90 minutes ready for commands to prevent nuclear Armageddon.
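That 90-minute figure is just the orbital period at that kind of altitude. As a rough check, assuming a circular orbit about 400 km (roughly 250 miles) up, a typical low-Earth-orbit altitude rather than anything specified for Golden Dome:

\[
T = 2\pi\sqrt{\frac{a^{3}}{\mu}} = 2\pi\sqrt{\frac{(6{,}771\ \mathrm{km})^{3}}{398{,}600\ \mathrm{km^{3}/s^{2}}}} \approx 5{,}500\ \mathrm{s} \approx 92\ \text{minutes},
\]

where \(a\) is the orbital radius (Earth’s mean radius of about 6,371 km plus the altitude) and \(\mu\) is Earth’s gravitational parameter.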

A Standard Missile 3 Block IIA launches from the Aegis Ashore Missile Defense Test Complex at the Pacific Missile Range Facility in Kauai, Hawaii, on December 10, 2018, during a test to intercept an intermediate-range ballistic missile target in space. Credit: Mark Wright/DOD

Reuters reported Tuesday that the Defense Department, which reportedly favored SpaceX to play a central role in Golden Dome, is now looking to other companies, including Amazon’s Kuiper unit and big defense contractors. SpaceX founder Elon Musk has fallen out of favor with the Trump administration, but the company’s production line continues to churn out spacecraft for the National Reconnaissance Office’s global constellation of spy satellites. And it’s clear the cheapest and most reliable way to launch Golden Dome interceptors into orbit will be using SpaceX’s Falcon 9 rocket.

How many space-based interceptors?

“I would envision that there would be certainly more than 1,000 of those in orbit in different orbital planes,” said retired Air Force Gen. Henry “Trey” Obering III, a senior executive advisor at Booz Allen Hamilton and former commander of the Missile Defense Agency. “You could optimize those orbital planes against the Russian threat or Chinese threat, or both, or all the above, between Iran, North Korea, China, and Russia.”

In an interview with Ars, Obering suggested the interceptors could be modest in size and mass, somewhat smaller than SpaceX’s Starlink satellites, and could launch 100 or 200 at a time on a rocket like SpaceX’s Falcon 9. None of this capability existed in the Reagan era.

Taking all of that into account, it’s understandable why Guetlein and others believe Golden Dome is doable.

But major questions remain unanswered about its ultimate cost and the realism of Trump’s three-year schedule. Some former defense officials have questioned the technical viability of using space-based interceptors to target a missile during its boost phase, within the first few minutes of launch.

It’s true that there are also real emerging threats, such as hypersonic missiles and drones, that the US military is currently ill-equipped to defend against.

“The strategic threats are diversifying, and then the actors are diversifying,” the former military space official told Ars. “It’s no longer just Russia. It’s China now, and to a lesser extent, North Korea and potentially Iran. We’ll see where that goes. So, when you put that all together, our ability to deter and convince a potential adversary, or at least make them really uncertain about how successful they could be with a strike, is degraded compared to what it used to be.”

The official said the Trump administration teed up the Golden Dome executive order without adequately explaining the reasons for it. That’s a political failing that could come back to bite the program. The lack of clarity didn’t stop Congress from approving this year’s $25 billion down payment, but there are more key decision points ahead.

“I’m a little disappointed no one’s really defined the problem very well,” the retired military official said. “It definitely started out as a solution without a problem statement, like, ‘I need an Iron Dome, just like Israel.’ But I feel like the entire effort would benefit from a better problem statement.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



2025 Aston Martin Vanquish Volante: A-M’s ultimate GT goes topless

It’s hard to blame them. Top up or down, the Vanquish’s aesthetic is one of eagerness and aggression, largely thanks to the F1-derived aero elements that cool the massive power unit and balance the airflow from front to back. The rest is all Aston Martin-quality craftsmanship, shaping the Vanquish into a taut, sleek form wrapped in formal attire.

An Aston Martin Vanquish engine bay

Yes, you could just have an electric motor make this much torque and power almost silently. Credit: Aston Martin

Bond. Aluminum Bond.

The secret underlying the Vanquish’s capabilities is its bonded aluminum body, which is perfectly suited for a grand tourer like this. Bonding panels together rather than welding them makes controlling the NVH (noise, vibration, harshness) levels much easier as the adhesives absorb vibrations, while the stiffness provides much more control in terms of lateral movement. This also means the suspension has less to compensate for, which means it can be stiffer without adding teeth-rattling jitter.

Indeed, on the move, the Vanquish Volante is velvety-smooth on the highway, and with the top down, conversations don’t need to be shouted. Raise the soft top and the well-sealed cover is indistinguishable from the coupe as far as your ears are concerned.

The even-keeled nature is also due in part to the balance Aston Martin maintains between the throttle input and the electronic rear differential. At low speeds, the Vanquish is quite agile, but a progressive power band keeps it from being nervous or jerky when laying down the power, with the wheels effectively locked in place at high speeds for added stability.

A silver Aston Martin Vanquish Volante seen in profile

If a Vantage is for track work, a Vanquish is for cruising. Credit: Aston Martin

We’re talking autobahn speeds here, by the way. What we’d usually muster on the highway is a cakewalk for this immense luxury chariot. It goes too fast too quickly, for better or for worse, with 80 mph (129 km/h) feeling like half of that. Different drive modes make a palpable difference in behavior, with GT mode suited to smooth, long stretches while Sport and Sport+ offer more engaging, throaty behavior for twisty backroads. Here, the car continues to be well-mannered, though the occasional dab for power triggers an overeager automatic into dropping a gear or two, sending the V12 into a fury.



Tesla skepticism continues to grow, robotaxi demo fails to impress Austin

Tesla’s eroding popularity with Americans shows little sign of abating. Each month, the Electric Vehicle Intelligence Report surveys thousands of consumers to gauge attitudes on EV adoption, autonomous driving, and the automakers that are developing those technologies. Toyota, which only recently started selling enough EVs to be included in the survey, currently has the highest net-positive score and the highest “view intensity score”—the percentage of consumers who have a very positive view of a brand minus the ones who have a very negative view—despite selling just a fairly lackluster EV to date. Meanwhile, the brand that actually popularized the EV, moving it from compliance car and milk float to something desirable, has fallen even further into negative territory in July.

Just 26 percent of survey participants still have a somewhat or very positive view of Tesla. But 39 percent have a somewhat or very negative view of the company, with just 14 percent being unfamiliar or having no opinion. That’s a net positive view of -13, but Tesla’s view intensity score is -16, meaning a lot more people really don’t like the company compared to the ones who really do. The problem is also growing over time: In April, Tesla still had a net positive view of -7.
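Here is a minimal Python sketch of how those two metrics work. Tesla’s overall 26 percent positive and 39 percent negative shares come from the article; the split between “somewhat” and “very” responses below is hypothetical, chosen only to illustrate how a -13 net-positive score can coexist with a -16 view intensity score:

```python
def net_positive(somewhat_pos, very_pos, somewhat_neg, very_neg):
    """Net-positive view: all positive responses minus all negative responses."""
    return (somewhat_pos + very_pos) - (somewhat_neg + very_neg)

def view_intensity(very_pos, very_neg):
    """View intensity: the 'very positive' share minus the 'very negative' share."""
    return very_pos - very_neg

# Overall shares from the article: 26% positive, 39% negative.
# The somewhat/very breakdown here is an assumption for illustration only.
print(net_positive(somewhat_pos=14, very_pos=12, somewhat_neg=11, very_neg=28))  # -13
print(view_intensity(very_pos=12, very_neg=28))  # -16
```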

Tesla remained at the bottom of the charts when EVIR looked more closely at the demographic data. It was the least-positively viewed car company at every income level, though the effect was most pronounced among those with incomes under $75,000, and the pattern held across geography (suburbanites held it in the most disdain) and age (those over 65 had the most haters).

VinFast is the only other automaker with a negative net-positive view and view intensity score, but 92 percent of survey respondents were unfamiliar with the Vietnamese automaker or had no opinion about it.

When asked which brands they trusted, the survey data mostly mirrored the positive versus negative brand perception. Only Tesla and VinFast have negative net trust scores, with Tesla also having the lowest “trust integrity score”—those who say they trust a brand “a lot” versus those who distrust that brand “a lot”—at -19.



Google gets ahead of the leaks and reveals the Pixel 10 early

Google has an event next month to officially launch the Pixel 10 series, but the leaks have been coming fast and furious beforehand. There won’t be much left to learn on August 20, particularly now that Google has revealed the phone. Over on the Google Store, there’s a video revealing the Pixel 10’s design, and it looks just a little familiar.

The video (which you can also see below) isn’t very long, but it offers an unobscured look at the phone’s physical design. The 13-second clip opens with the numeral “10” emerging from the shadows. The zero elongates and morphs into the trademark camera window on the back of the phone. The video zooms out to reveal the full phone from the back. The device is a muted blue-gray, which is probably the “frost” color listed in recent leaks.

The video is not accompanied by specs, pricing, or any other details; however, Google’s new Tensor G5 processor is expected to be a marked improvement over past iterations. While the first four Tensor chips were manufactured in Samsung fabs, Tensor G5 is from TSMC. The dominant Taiwanese chip maker touts better semiconductor packaging technology, and the chip itself is believed to have more custom components that further separate it from the Samsung Exynos lineage.



OpenAI jumps gun on International Math Olympiad gold medal announcement

The early announcement prompted Google DeepMind, which had prepared its own IMO results for the agreed-upon date, to move its IMO-related announcement up to later today. Harmonic plans to share its results as originally scheduled on July 28.

In response to the controversy, OpenAI research scientist Noam Brown posted on X, “We weren’t in touch with IMO. I spoke with one organizer before the post to let him know. He requested we wait until after the closing ceremony ends to respect the kids, and we did.”

However, an IMO coordinator told X user Mikhail Samin that OpenAI actually announced before the closing ceremony, contradicting Brown’s claim. The coordinator called OpenAI’s actions “rude and inappropriate,” noting that OpenAI “wasn’t one of the AI companies that cooperated with the IMO on testing their models.”

Hard math since 1959

The International Mathematical Olympiad, which has been running since 1959, represents one of the most challenging tests of mathematical reasoning. More than 100 countries send six participants each, with contestants facing six proof-based problems across two 4.5-hour sessions. The problems typically require deep mathematical insight and creativity rather than raw computational power. You can see the exact problems in the 2025 Olympiad posted online.

For example, problem one asks students to imagine a triangular grid of dots (like a triangular pegboard) and figure out how to cover all the dots using exactly n straight lines. The twist is that some lines are called “sunny”—these are the lines that don’t run horizontally, vertically, or diagonally at a 45º angle. The challenge is to prove that no matter how big your triangle is, you can only ever create patterns with exactly 0, 1, or 3 sunny lines—never 2, never 4, never any other number.
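For those who want something closer to the formal statement (reconstructed here from the published problem, so treat the exact wording as approximate rather than official): a line in the plane is called sunny if it is not parallel to the x-axis, the y-axis, or the line \(x + y = 0\). For an integer \(n \ge 3\), the task is to determine all nonnegative integers \(k\) such that there exist \(n\) distinct lines covering every point \((a, b)\) with positive integer coordinates and \(a + b \le n + 1\), exactly \(k\) of which are sunny. As the article notes, the answer is \(k \in \{0, 1, 3\}\).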

The timing of the OpenAI results surprised some prediction markets, which had assigned around an 18 percent probability to any AI system winning IMO gold by 2025. However, depending on what Google says this afternoon (and what others like Harmonic may release on July 28), OpenAI may not be the only AI company to have achieved these unexpected results.



It’s “frighteningly likely” many US courts will overlook AI errors, expert says


Judges pushed to bone up on AI or risk destroying their court’s authority.

A judge points to a diagram of a hand with six fingers

Credit: Aurich Lawson | Getty Images

Order in the court! Order in the court! Judges are facing outcry over a suspected AI-generated order in a court.

Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

Now, experts are warning that judges overlooking AI hallucinations in court filings could easily become commonplace, especially in the typically overwhelmed lower courts. And so far, only two states have moved to force judges to sharpen their tech competencies and adapt so they can spot AI red flags and theoretically stop disruptions to the justice system at all levels.

The recently vacated order came in a Georgia divorce dispute, where Watkins explained that the order itself was drafted by the husband’s lawyer, Diana Lynch. That’s a common practice in many courts, where overburdened judges historically rely on lawyers to draft orders. But that protocol today faces heightened scrutiny as lawyers and non-lawyers increasingly rely on AI to compose and research legal filings, and judges risk rubberstamping fake opinions by not carefully scrutinizing AI-generated citations.

The errant order partly relied on “two fictitious cases” to deny the wife’s petition—which Watkins suggested were “possibly ‘hallucinations’ made up by generative-artificial intelligence”—as well as two cases that had “nothing to do” with the wife’s petition.

Lynch was hit with $2,500 in sanctions after the wife appealed, and the husband’s response—which also appeared to be prepared by Lynch—cited 11 additional cases that were “either hallucinated” or irrelevant. Watkins was further peeved that Lynch supported a request for attorney’s fees for the appeal by citing “one of the new hallucinated cases,” writing it added “insult to injury.”

Worryingly, the judge could not confirm whether the fake cases were generated by AI or even determine if Lynch inserted the bogus cases into the court filings, indicating how hard it can be for courts to hold lawyers accountable for suspected AI hallucinations. Lynch did not respond to Ars’ request for comment, and her website appeared to be taken down following media attention to the case.

But Watkins noted that “the irregularities in these filings suggest that they were drafted using generative AI” while warning that many “harms flow from the submission of fake opinions.” Exposing deceptions can waste time and money, and AI misuse can deprive people of raising their best arguments. Fake orders can also soil judges’ and courts’ reputations and promote “cynicism” in the justice system. If left unchecked, Watkins warned, these harms could pave the way to a future where a “litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

“We have no information regarding why Appellee’s Brief repeatedly cites to nonexistent cases and can only speculate that the Brief may have been prepared by AI,” Watkins wrote.

Ultimately, Watkins remanded the case, partly because the fake cases made it impossible for the appeals court to adequately review the wife’s petition to void the prior order. But no matter the outcome of the Georgia case, the initial order will likely forever be remembered as a cautionary tale for judges increasingly scrutinized for failures to catch AI misuses in court.

“Frighteningly likely” judge’s AI misstep will be repeated

John Browning, a retired justice on Texas’ Fifth Court of Appeals and now a full-time law professor at Faulkner University, last year published a law article Watkins cited that warned of the ethical risks of lawyers using AI. In the article, Browning emphasized that the biggest concern at that point was that lawyers “will use generative AI to produce work product they treat as a final draft, without confirming the accuracy of the information contained therein or without applying their own independent professional judgment.”

Today, judges are increasingly drawing the same scrutiny, and Browning told Ars he thinks it’s “frighteningly likely that we will see more cases” like the Georgia divorce dispute, in which “a trial court unwittingly incorporates bogus case citations that an attorney includes in a proposed order” or even potentially in “proposed findings of fact and conclusions of law.”

“I can envision such a scenario in any number of situations in which a trial judge maintains a heavy docket and looks to counsel to work cooperatively in submitting proposed orders, including not just family law cases but other civil and even criminal matters,” Browning told Ars.

According to reporting from the National Center for State Courts, a nonprofit representing court leaders and professionals who are advocating for better judicial resources, AI tools like ChatGPT have made it easier for high-volume filers and unrepresented litigants who can’t afford attorneys to file more cases, potentially further bogging down courts.

Peter Henderson, a researcher who runs the Princeton Language+Law, Artificial Intelligence, & Society (POLARIS) Lab, told Ars that he expects cases like the Georgia divorce dispute aren’t happening every day just yet.

It’s likely that a “few hallucinated citations go overlooked” because generally, fake cases are flagged through “the adversarial nature of the US legal system,” he suggested. Browning further noted that trial judges are generally “very diligent in spotting when a lawyer is citing questionable authority or misleading the court about what a real case actually said or stood for.”

Henderson agreed with Browning that “in courts with much higher case loads and less adversarial process, this may happen more often.” But Henderson noted that the appeals court catching the fake cases is an example of the adversarial process working.

While that’s true in this case, it seems likely that anyone exhausted by the divorce legal process, for example, may not pursue an appeal if they don’t have energy or resources to discover and overturn errant orders.

Judges’ AI competency increasingly questioned

While recent history confirms that lawyers risk being sanctioned, fired from their firms, or suspended from practicing law for citing fake AI-generated cases, judges will likely only risk embarrassment for failing to catch lawyers’ errors or even for using AI to research their own opinions.

Not every judge is prepared to embrace AI without proper vetting, though. To shield the legal system, some judges have banned AI. Others have required disclosures—with some even demanding to know which specific AI tool was used—but that solution has not caught on everywhere.

Even if all courts required disclosures, Browning pointed out that disclosures still aren’t a perfect solution since “it may be difficult for lawyers to even discern whether they have used generative AI,” as AI features become increasingly embedded in popular legal tools. One day, it “may eventually become unreasonable to expect” lawyers “to verify every generative AI output,” Browning suggested.

Most likely—as a judicial ethics panel from Michigan has concluded—judges will determine “the best course of action for their courts with the ever-expanding use of AI,” Browning’s article noted. And the former justice told Ars that’s why education will be key, for both lawyers and judges, as AI advances and becomes more mainstream in court systems.

In an upcoming summer 2025 article in The Journal of Appellate Practice & Process, “The Dawn of the AI Judge,” Browning attempts to soothe readers by saying that AI isn’t yet fueling a legal dystopia. And humans are unlikely to face “robot judges” spouting AI-generated opinions any time soon, the former justice suggested.

Standing in the way of that, at least two states—Michigan and West Virginia—”have already issued judicial ethics opinions requiring judges to be ‘tech competent’ when it comes to AI,” Browning told Ars. And “other state supreme courts have adopted official policies regarding AI,” he noted, further pressuring judges to bone up on AI.

Meanwhile, several states have set up task forces to monitor their regional court systems and issue AI guidance, while states like Virginia and Montana have passed laws requiring human oversight for any AI systems used in criminal justice decisions.

Judges must prepare to spot obvious AI red flags

Until courts figure out how to navigate AI—a process that may look different from court to court—Browning advocates for more education and ethical guidance to steer judges’ use of and attitudes toward AI. That could help judges avoid both ignorance of AI’s many pitfalls and overconfidence in AI outputs, protecting courts from hallucinations, biases, and evidentiary problems that slip past human-review requirements and scramble the court system.

An overlooked part of educating judges could be exposing AI’s influence so far in courts across the US. Henderson’s team is planning research that tracks which models attorneys are using most in courts. That could reveal “the potential legal arguments that these models are pushing” to sway courts—and which judicial interventions might be needed, Henderson told Ars.

“Over the next few years, researchers—like those in our group, the POLARIS Lab—will need to develop new ways to track the massive influence that AI will have and understand ways to intervene,” Henderson told Ars. “For example, is any model pushing a particular perspective on legal doctrine across many different cases? Was it explicitly trained or instructed to do so?”

Henderson also advocates for “an open, free centralized repository of case law,” which would make it easier for everyone to check for fake AI citations. “With such a repository, it is easier for groups like ours to build tools that can quickly and accurately verify citations,” Henderson said. That could be a significant improvement to the current decentralized court reporting system that often obscures case information behind various paywalls.
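To give a rough sense of what that kind of verification could look like, here is a minimal Python sketch. The in-memory “repository,” the citation strings, and the helper names are hypothetical stand-ins, not a description of any tool from Henderson’s lab or of an existing database.

```python
# Minimal sketch of citation verification against an open case-law repository.
# The repository here is a hypothetical in-memory set; a real tool would query
# a centralized, freely accessible database of reported decisions.
KNOWN_CITATIONS = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Miranda v. Arizona, 384 U.S. 436 (1966)",
}

def verify_citations(citations: list[str]) -> dict[str, bool]:
    """Map each cited case to whether it appears in the repository."""
    return {cite: cite in KNOWN_CITATIONS for cite in citations}

results = verify_citations([
    "Miranda v. Arizona, 384 U.S. 436 (1966)",        # real decision
    "Smith v. Imaginary Corp., 123 F.3d 456 (2020)",   # fabricated example
])
for cite, found in results.items():
    print(("OK   " if found else "FLAG ") + cite)
```

Anything the lookup cannot match would be flagged for a human to review, which is the point of such a repository: making the check cheap enough that it can be run on every filing.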

Dazza Greenwood, who co-chairs MIT’s Task Force on Responsible Use of Generative AI for Law, did not have time to send comments but pointed Ars to a LinkedIn thread where he suggested that a structural response may be needed to ensure that all fake AI citations are caught every time.

He recommended that courts create “a bounty system whereby counter-parties or other officers of the court receive sanctions payouts for fabricated cases cited in judicial filings that they reported first.” That way, lawyers will know that their work will “always” be checked and thus may shift their behavior if they’ve been filing AI-drafted documents without review. In turn, that could relieve judges of the pressure to serve as watchdogs. It also wouldn’t cost much—mostly just redirecting the fees that sanctioned lawyers already pay to the AI spotters who report them.

Novel solutions like this may be necessary, Greenwood suggested. Responding to a question asking whether “shame and sanctions” are enough to stop AI hallucinations in court, Greenwood said that eliminating AI errors is imperative because the problem “gives both otherwise generally good lawyers and otherwise generally good technology a bad name.” Continuing to lean on AI bans or lawyer suspensions as the preferred fix risks draining court resources just as caseloads are likely to spike, rather than confronting the problem head-on.

Of course, there’s no guarantee that the bounty system would work. But, Greenwood asked, “would the fact of such definite confidence that your cites will be individually checked and fabricated cites reported be enough to finally… convince lawyers who cut these corners that they should not cut these corners?”

In the absence of a fake case detector like the one Henderson wants to build, experts told Ars that there are some obvious red flags judges can watch for to catch AI-hallucinated filings.

Any case number with “123456” in it probably warrants review, Henderson told Ars. And Browning noted that AI tends to mix up locations for cases, too. “For example, a cite to a purported Texas case that has a ‘S.E. 2d’ reporter wouldn’t make sense, since Texas cases would be found in the Southwest Reporter,” Browning said, noting that some appellate judges have already relied on this red flag to catch AI misuses.

Those red flags would perhaps be easier to check with the open source tool that Henderson’s lab wants to make, but Browning said there are other tell-tale signs of AI usage that anyone who has ever used a chatbot is likely familiar with.

“Sometimes a red flag is the language cited from the hallucinated case; if it has some of the stilted language that can sometimes betray AI use, it might be a hallucination,” Browning said.
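As a rough illustration of how the first two red flags could be screened automatically, here is a minimal Python sketch. The reporter-to-jurisdiction mapping, the regular expression, and the sample citation are illustrative assumptions, not an authoritative legal reference or a tool any of these experts have endorsed.

```python
import re

# Suspicious placeholder-style case numbers, per the "123456" red flag.
SUSPECT_CASE_NUMBER = re.compile(r"123456")

# Which regional reporters a court's decisions would normally appear in.
# Illustrative assumption: Texas cases appear in S.W. reporters, Georgia cases in S.E.
EXPECTED_REPORTERS = {
    "Tex.": ("S.W.", "S.W.2d", "S.W.3d"),
    "Ga.": ("S.E.", "S.E.2d"),
}

def red_flags(citation: str) -> list[str]:
    """Return reasons a citation deserves manual review."""
    flags = []
    if SUSPECT_CASE_NUMBER.search(citation):
        flags.append("contains the suspicious number pattern '123456'")
    # Very rough parse of "<vol> <reporter> <page> (<court> <year>)".
    match = re.search(r"\d+\s+(S\.[EW]\.(?:[23]d)?)\s+\d+\s+\(([A-Za-z.]+)\s+\d{4}\)", citation)
    if match:
        reporter, court = match.groups()
        expected = EXPECTED_REPORTERS.get(court)
        if expected and reporter not in expected:
            flags.append(f"reporter {reporter} is unexpected for a {court} case")
    return flags

print(red_flags("Doe v. Roe, 123 S.E.2d 456 (Tex. 2004)"))
# ['reporter S.E.2d is unexpected for a Tex. case']
```

A heuristic like this would only narrow the pile; anything it flags, and much that it cannot catch, still needs a human to read the cited opinion.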

Judges already issuing AI-assisted opinions

Several states have assembled task forces like Greenwood’s to assess the risks and benefits of using AI in courts. In Georgia, the Judicial Council of Georgia Ad Hoc Committee on Artificial Intelligence and the Courts released a report in early July providing “recommendations to help maintain public trust and confidence in the judicial system as the use of AI increases” in that state.

Adopting the committee’s recommendations could establish “long-term leadership and governance”; a repository of approved AI tools, education, and training for judicial professionals; and more transparency on AI used in Georgia courts. But the committee expects it will take three years to implement those recommendations while AI use continues to grow.

Possibly complicating things further as judges start to explore using AI assistants to help draft their filings, the committee concluded that it’s still too early to tell if the judges’ code of conduct should be changed to prevent “unintentional use of biased algorithms, improper delegation to automated tools, or misuse of AI-generated data in judicial decision-making.” That means, at least for now, that there will be no code-of-conduct changes in Georgia, where the only case in which AI hallucinations are believed to have swayed a judge has been found.

Notably, the committee’s report also confirmed that there are no role models for courts to follow, as “there are no well-established regulatory environments with respect to the adoption of AI technologies by judicial systems.” Browning, who chaired a now-defunct Texas AI task force, told Ars that judges lacking guidance will need to stay on their toes to avoid trampling legal rights. (A spokesperson for the State Bar of Texas told Ars the task force’s work “concluded” and “resulted in the creation of the new standing committee on Emerging Technology,” which offers general tips and guidance for judges in a recently launched AI Toolkit.)

“While I definitely think lawyers have their own duties regarding AI use, I believe that judges have a similar responsibility to be vigilant when it comes to AI use as well,” Browning said.

Judges will continue sorting through AI-fueled submissions not just from pro se litigants representing themselves but also from up-and-coming young lawyers who may be more inclined to use AI, and even seasoned lawyers who have been sanctioned up to $5,000 for failing to check AI drafts, Browning suggested.

In his upcoming “AI Judge” article, Browning points to at least one judge, 11th Circuit Court of Appeals Judge Kevin Newsom, who has used AI as a “mini experiment” in preparing opinions for both a civil case involving an insurance coverage issue and a criminal matter focused on sentencing guidelines. Browning seems to appeal to judges’ egos to get them to study up so they can use AI to enhance their decision-making and possibly expand public trust in courts, not undermine it.

“Regardless of the technological advances that can support a judge’s decision-making, the ultimate responsibility will always remain with the flesh-and-blood judge and his application of very human qualities—legal reasoning, empathy, strong regard for fairness, and unwavering commitment to ethics,” Browning wrote. “These qualities can never be replicated by an AI tool.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

It’s “frighteningly likely” many US courts will overlook AI errors, expert says Read More »