

OpenAI unveils “wellness” council; suicide prevention expert not included


Doctors examining ChatGPT

OpenAI reveals which experts are steering ChatGPT mental health upgrades.

Ever since a lawsuit accused ChatGPT of becoming a teen’s “suicide coach,” OpenAI has been scrambling to make its chatbot safer. Today, the AI firm unveiled the experts it hired to help make ChatGPT a healthier option for all users.

In a press release, OpenAI explained its Expert Council on Wellness and AI started taking form after OpenAI began informally consulting with experts on parental controls earlier this year. Now it’s been formalized, bringing together eight “leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health” to help steer ChatGPT updates.

One priority was finding “several council members with backgrounds in understanding how to build technology that supports healthy youth development,” OpenAI said, “because teens use ChatGPT differently than adults.”

That effort includes David Bickham, a research director at Boston Children’s Hospital, who has closely monitored how social media impacts kids’ mental health, and Mathilde Cerioli, the chief science officer at a nonprofit called Everyone.AI. Cerioli studies the opportunities and risks of children using AI, particularly focused on “how AI intersects with child cognitive and emotional development.”

These experts can seemingly help OpenAI better understand how safeguards can fail kids during extended conversations and ensure kids aren’t left particularly vulnerable to so-called “AI psychosis,” a phenomenon where longer chats trigger mental health issues.

In January, Bickham noted in an American Psychological Association article on AI in education that “little kids learn from characters” already—as they do things like watch Sesame Street—and form “parasocial relationships” with those characters. AI chatbots could be the next frontier, possibly filling in teaching roles if we know more about the way kids bond with chatbots, Bickham suggested.

“How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?” Bickham posited.

Cerioli closely monitors AI’s influence in kids’ worlds. She suggested last month that kids who grow up using AI may risk having their brains rewired to “become unable to handle contradiction,” Le Monde reported, especially “if their earliest social interactions, at an age when their neural circuits are highly malleable, are conducted with endlessly accommodating entities.”

“Children are not mini-adults,” Cerioli said. “Their brains are very different, and the impact of AI is very different.”

Neither expert is focused on suicide prevention in kids. That may disappoint dozens of suicide prevention experts who last month pushed OpenAI to consult with experts deeply familiar with what “decades of research and lived experience” show about “what works in suicide prevention.”

OpenAI experts on suicide risks of chatbots

On a podcast last year, Cerioli was asked about the earliest reported chatbot-linked teen suicide and said that child brain development is the area she’s most “passionate” about. The news didn’t surprise her, she said, and her research is focused less on figuring out “why that happened” and more on why it can happen, because kids are “primed” to seek out “human connection.”

She noted that a troubled teen confessing suicidal ideation to a friend in the real world would more likely lead to an adult getting involved, whereas a chatbot would need specific safeguards built in to ensure parents are notified.

This seems in line with the steps OpenAI took to add parental controls, consulting with experts to design “the notification language for parents when a teen may be in distress,” the company’s press release said. However, on a resources page for parents, OpenAI has confirmed that parents won’t always be notified if a teen is linked to real-world resources after expressing “intent to self-harm,” which may alarm some critics who think the parental controls don’t go far enough.

Although OpenAI does not specify this in the press release, it appears that Munmun De Choudhury, a professor of interactive computing at Georgia Tech, could help evolve ChatGPT to recognize when kids are in danger and notify parents.

De Choudhury studies computational approaches to improve “the role of online technologies in shaping and improving mental health,” OpenAI noted.

In 2023, she conducted a study on the benefits and harms of large language models in digital mental health. The study was funded in part through a grant from the American Foundation for Suicide Prevention and noted that chatbots providing therapy services at that point could only detect “suicide behaviors” about half the time. The task appeared “unpredictable” and “random” to scholars, she reported.

It seems possible that OpenAI hopes the child experts can provide feedback on how ChatGPT is impacting kids’ brains while De Choudhury helps improve efforts to notify parents of troubling chat sessions.

More recently, De Choudhury seemed optimistic about potential AI mental health benefits, telling The New York Times in April that AI therapists can still have value even if companion bots do not provide the same benefits as real relationships.

“Human connection is valuable,” De Choudhury said. “But when people don’t have that, if they’re able to form parasocial connections with a machine, it can be better than not having any connection at all.”

First council meeting focused on AI benefits

Most of the other experts on OpenAI’s council have backgrounds similar to De Choudhury’s, exploring the intersection of mental health and technology. They include Tracy Dennis-Tiwary (a psychology professor and cofounder of Arcade Therapeutics), Sara Johansen (founder of Stanford University’s Digital Mental Health Clinic), David Mohr (director of Northwestern University’s Center for Behavioral Intervention Technologies), and Andrew K. Przybylski (a professor of human behavior and technology).

There’s also Robert K. Ross, a public health expert whom OpenAI previously tapped to serve as a nonprofit commission advisor.

OpenAI confirmed that there has been one meeting so far, which served to introduce the advisors to teams working to upgrade ChatGPT and Sora. Moving forward, the council will hold recurring meetings to explore sensitive topics that may require adding guardrails. Initially, though, OpenAI appears more interested in discussing the potential benefits to mental health that could be achieved if tools were tweaked to be more helpful.

“The council will also help us think about how ChatGPT can have a positive impact on people’s lives and contribute to their well-being,” OpenAI said. “Some of our initial discussions have focused on what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their life.”

Notably, Przybylski co-authored a study in 2023 providing data disputing that access to the Internet has negatively affected mental health broadly. He told Mashable that his research provided the “best evidence” so far “on the question of whether Internet access itself is associated with worse emotional and psychological experiences—and may provide a reality check in the ongoing debate on the matter.” He could possibly help OpenAI explore whether the data supports perceptions that AI poses mental health risks, perceptions that are currently stoking a chatbot mental health panic in Congress.

Also appearing optimistic about companion bots in particular is Johansen. In a LinkedIn post earlier this year, she recommended that companies like OpenAI apply “insights from the impact of social media on youth mental health to emerging technologies like AI companions,” concluding that “AI has great potential to enhance mental health support, and it raises new challenges around privacy, trust, and quality.”

Other experts on the council have been critical of companion bots. OpenAI noted that Mohr specifically “studies how technology can help prevent and treat depression.”

Historically, Mohr has advocated for more digital tools to support mental health, suggesting in 2017 that apps could help support people who can’t get to the therapist’s office.

More recently, though, Mohr told The Wall Street Journal in 2024 that he had concerns about AI chatbots posing as therapists.

“I don’t think we’re near the point yet where there’s just going to be an AI who acts like a therapist,” Mohr said. “There’s still too many ways it can go off the rails.”

Similarly, although Dennis-Tiwary told Wired last month that she finds the term “AI psychosis” to be “very unhelpful” in most cases that aren’t “clinical,” she has warned that “above all, AI must support the bedrock of human well-being, social connection.”

“While acknowledging that there are potentially fruitful applications of social AI for neurodivergent individuals, the use of this highly unreliable and inaccurate technology among children and other vulnerable populations is of immense ethical concern,” Dennis-Tiwary wrote last year.

For OpenAI, the wellness council could help the company turn a corner as ChatGPT and Sora continue to be heavily scrutinized. The company also confirmed that it would continue consulting “the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people’s well-being.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



GM’s EV push will cost it $1.6 billion in Q3 with end of the tax credit

The prospects of continued electric vehicle adoption in the US are in an odd place. As promised, the Trump administration and its congressional Republican allies killed off as many of the clean energy and EV incentives as they could after taking power in January. Ironically, though, the end of the clean vehicle tax credit on September 30 actually spurred the sales of EVs, as customers rushed to dealerships to take advantage of the soon-to-disappear $7,500 credit.

Predictions for EV sales going forward aren’t so rosy, and automakers are reacting by adjusting their product portfolio plans. Today, General Motors revealed that those adjustments will result in a $1.6 billion hit to its balance sheet when it reports its Q3 results late this month, according to its 8-K.

Q3 was a decent one for GM, with sales up 8 percent year on year and up 10 percent for the year to date. GM EV sales look even better: up 104 percent for the year to date compared to the first nine months of 2024, with nearly 145,000 electric Cadillacs, Chevrolets, and GMCs finding homes.



SpaceX finally got exactly what it needed from Starship V2


This was the last flight of SpaceX’s second-gen Starship design. Version 3 arrives next year.

Thirty-three methane-fueled Raptor engines power SpaceX’s Super Heavy booster off the launch pad Monday. Credit: SpaceX

SpaceX closed a troubled but instructive chapter in its Starship rocket program Monday with a near-perfect test flight that carried the stainless steel spacecraft halfway around the world from South Texas to the Indian Ocean.

The rocket’s 33 methane-fueled Raptor engines roared to life at 6:23 pm CDT (7:23 pm EDT; 23:23 UTC), throttling up to generate some 16.7 million pounds of thrust, more powerful by a large measure than any rocket before Starship. Moments later, the 404-foot-tall (123.1-meter) rocket began a vertical climb away from SpaceX’s test site in Starbase, Texas, near the US-Mexico border.

From then on, the rocket executed its flight plan like clockwork. This was arguably SpaceX’s most successful Starship test flight to date. The only flight with a similar claim occurred one year ago Monday, when the company caught the rocket’s Super Heavy booster back at the launch pad after soaring to the uppermost fringes of the atmosphere. But that flight didn’t accomplish as much in space.

“Starship’s eleventh flight test reached every objective, providing valuable data as we prepare the next generation of Starship and Super Heavy,” SpaceX posted on X.

SpaceX’s 11th Starship flight climbs away from Starbase, Texas. Credit: SpaceX

SpaceX didn’t try to recover the Super Heavy booster on this flight, but the goals the company set before the launch included an attempt to guide the enormous rocket stage to a precise splashdown in the Gulf of Mexico off the coast of South Texas. The booster, reused from a previous flight in March, also validated a new engine configuration for its landing burn, first reigniting 13 of its engines, then downshifting to five, then to three for the final hover.

That all worked, along with pretty much everything else, apart from an indication on SpaceX’s livestream that the Super Heavy booster lost an engine early in its descent. The malfunctioning engine had no impact on the rest of the flight.

Flight 11 recap

This was the fifth and final flight of Starship’s second-generation configuration, known as Version 2, or V2. It was the 11th full-scale Starship test flight overall.

It took a while for Starship V2 to meet SpaceX’s expectations. The first three Starship V2 launches in January, March, and May ended prematurely due to problems in the rocket’s propulsion and a fuel leak, breaking a string of increasingly successful Starship flights since 2023. Another Starship V2 exploded on a test stand in Texas in June, further marring the second-gen rocket’s track record.

But SpaceX teams righted the program with a good test flight in August, the first time Starship V2 made it all the way to splashdown. Engineers learned a few lessons on that flight, including the inadequacy of a new metallic heat shield tile design that left a patch of orange oxidation down the side of the ship. They also found that another experiment with part of the ship’s heat shield showed promising results. This method involved using a soft “crunch wrap” material to seal the gaps between the ship’s ceramic tiles and prevent super-heated plasma from reaching the rocket’s stainless steel skin.

Technicians installed the crunch wrap material in more places for Flight 11, and a first look at the performance of the ship during reentry and splashdown suggested the heat shield change worked well.

Dan Huot from SpaceX’s communications office demonstrates how “crunch wrap” material can fill the gaps between Starship’s heat shield tiles. Credit: SpaceX

After reaching space, Starship shut down its six Raptor engines and coasted across the Atlantic Ocean and Africa before emerging over the Indian Ocean just before reentry. During its time in space, Starship released eight Starlink satellite mockups mimicking the larger size of the company’s next-generation Starlink spacecraft. These new Starlink satellites will only be able to launch on Starship.

Starship also reignited one of its six engines for a brief maneuver to set up the ship’s trajectory for reentry. With that, the stage was set for the final act of the test flight. How would the latest version of SpaceX’s ever-changing heat shield design hold up against temperatures of 2,600° Fahrenheit (1,430° Celsius)?

The answer: Apparently quite well. While SpaceX has brought Starships back to Earth in one piece several times, this was the first time the ship made it through reentry relatively unscathed. Live video streaming from cameras onboard Starship showed a blanket of orange and purple plasma enveloping the rocket during reentry. This is now a familiar sight, thanks to connectivity with Starship through SpaceX’s Starlink broadband network.

What was different on Monday was the lack of any obvious damage to the heat shield or flaps throughout Starship’s descent, a promising sign for SpaceX’s chances of reusing the vehicle and its heat shield over and over again, without requiring any refurbishment. This, according to SpaceX’s Elon Musk, is the acid test for determining Starship’s overall success.

An onboard camera captured this view of Starship during the final minute of flight over the Indian Ocean. At this point of the flight, the vehicle—designated Ship 38 as seen here—is descending in a horizontal orientation before flipping vertical for the final moments before splashdown. Credit: SpaceX

In the closing moments of Monday’s flight, Starship flexed its flaps to perform a “dynamic banking maneuver” over the Indian Ocean, then flipped upright and fired its engines to slow for splashdown, simulating maneuvers the rocket will execute on future missions returning to the launch site. That will be one of the chief goals for the next phase of Starship’s test campaign beginning next year.

Patience for V3

It will likely be at least a few months before SpaceX is ready to launch the next Starship flight. Technicians at Starbase are assembling the next Super Heavy booster and the first Starship V3 vehicle. Once integrated, the booster and ship are expected to undergo cryogenic testing and static-fire testing before SpaceX moves forward with launch.

“Focus now turns to the next generation of Starship and Super Heavy, with multiple vehicles currently in active build and preparing for tests,” SpaceX wrote on its website. “This next iteration will be used for the first Starship orbital flights, operational payload missions, propellant transfer, and more as we iterate to a fully and rapidly reusable vehicle with service to Earth orbit, the Moon, Mars, and beyond.”

Starship V3 will have larger propellant tanks to increase the rocket’s lifting capacity, upgraded Raptor 3 engines, and an improved payload compartment to support launches of real Starlink satellites. SpaceX will also use this version of the rocket for orbital refueling experiments, a long-awaited milestone for the Starship program now planned for sometime next year. Orbital refueling is a crucial enabler for future Starship flights beyond low-Earth orbit and is necessary for SpaceX to fulfill Musk’s ambition to send ships to Mars, the founder’s long-held goal for the company.

It’s also required for Starship flights to the Moon. NASA has signed contracts with SpaceX worth more than $4 billion to develop a human-rated derivative of Starship to land astronauts on the Moon as part of the agency’s Artemis program. The orbital refueling demonstration is a key milestone on the NASA lunar lander contract. Getting this done as soon as possible is vitally important to NASA, which is seeing its Artemis Moon landing schedule slip, in part due to Starship delays.

None of it can really get started until Starship V3 is flying reliably and flying often. If the first Starship V3 flight goes well, SpaceX may attempt to put the next vehicle—Flight 13—into orbit to verify the ship’s endurance in space. At some point, SpaceX will make the first attempt to bring a ship home from orbit for a catch by the launch tower, similar to how SpaceX has caught Super Heavy boosters returning from the edge of space.

But first, ground crews are wrapping up work on a second Starship launch pad designed to accommodate the upgraded, taller Starship V3 rocket. Monday’s flight marked the final launch from Pad 1 in its existing form. The differences with the second launch pad include its flame trench, a common fixture at many launch pads around the world. Pad 1 was not built with a flame trench, but instead features an elevated launch mount where the rocket sits prior to liftoff.

SpaceX is expected to overhaul Pad 1 in the coming months to reactivate it as a second launch pad option for Starship V3. All of this work is occurring in Texas as SpaceX prepares to bring online more Starship launch pads at Cape Canaveral Space Force Station and Kennedy Space Center in Florida. SpaceX says it will need a lot of pads to ramp up Starship to monthly, weekly, and eventually daily flights.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Starship’s elementary era ends today with mega-rocket’s 11th test flight

Future flights of Starship will end with returns to Starbase, where the launch tower will try to catch the vehicle coming home from space, similar to the way SpaceX has shown it can recover the Super Heavy booster. A catch attempt with Starship is still at least a couple of flights away.

In preparation for future returns to Starbase, the ship on Flight 11 will perform a “dynamic banking maneuver” and test subsonic guidance algorithms prior to its final engine burn to brake for splashdown. If all goes according to plan, the flight will end with a controlled water landing in the Indian Ocean approximately 66 minutes after liftoff.

Turning point

Monday’s test flight will be the last Starship launch of the year as SpaceX readies a new generation of the rocket, called Version 3, for its debut sometime in early 2026. The new version of the rocket will fly with upgraded Raptor engines and larger propellant tanks and have the capability for refueling in low-Earth orbit.

Starship Version 3 will also inaugurate SpaceX’s second launch pad at Starbase, which has several improvements over the existing site, including a flame trench to redirect engine exhaust away from the pad. The flame trench is a common feature of many launch pads, but all of the Starship flights so far have used an elevated launch mount, or stool, over a water-cooled flame deflector.

The current launch complex is expected to be modified to accommodate future Starship V3s, giving the company two pads to support a higher flight rate.

NASA is counting on a higher flight rate for Starship next year to move closer to fulfilling SpaceX’s contract to provide a human-rated lander to the agency’s Artemis lunar program. SpaceX has contracts worth more than $4 billion to develop a derivative of Starship to land NASA astronauts on the Moon.

But much of SpaceX’s progress toward a lunar landing hinges on launching numerous Starships—perhaps a dozen or more—in a matter of a few weeks or months. SpaceX is activating the second launch pad in Texas and building several launch towers and a new factory in Florida to make this possible.

Apart from recovering and reusing Starship itself, the program’s most pressing near-term hurdle is the demonstration of in-orbit refueling, a prerequisite for any future Starship voyages to the Moon or Mars. This first refueling test could happen next year but will require Starship V3 to have a smoother introduction than Starship V2, which is retiring after Flight 11 with, at best, a 40 percent success rate.



4chan fined $26K for refusing to assess risks under UK Online Safety Act

The risk assessments also seem to unconstitutionally compel speech, they argued, forcing them to share information and “potentially incriminate themselves on demand.” That conflicts with 4chan and Kiwi Farms’ Fourth Amendment rights, as well as “the right against self-incrimination and the due process clause of the Fifth Amendment of the US Constitution,” the suit says.

Additionally, “the First Amendment protects Plaintiffs’ right to permit anonymous use of their platforms,” 4chan and Kiwi Farms argued, opposing Ofcom’s requirements to verify ages of users. (This may be their weakest argument as the US increasingly moves to embrace age gates.)

4chan is hoping a US district court will intervene and ban enforcement of the OSA, arguing that the US must act now to protect all US companies. Failing to act now could be a slippery slope, as the UK is supposedly targeting “the most well-known, but small and, financially speaking, defenseless platforms” in the US before mounting attacks to censor “larger American companies,” 4chan and Kiwi Farms argued.

Ofcom has until November 25 to respond to the lawsuit and has maintained that the OSA is not a censorship law.

On Monday, Britain’s technology secretary, Liz Kendall, called the OSA a “lifeline” meant to protect people across the UK “from the darkest corners of the Internet,” the Record reported.

“Services can no longer ignore illegal content, like encouraging self-harm or suicide, circulating online which can devastate young lives and leaves families shattered,” Kendall said. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material.”

Whether 4chan and Kiwi Farms can win their fight to create a carveout in the OSA for American companies remains unclear, but the Federal Trade Commission agrees that the UK law is an overreach. In August, FTC Chair Andrew Ferguson warned US tech companies against complying with the OSA, claiming that censoring Americans to comply with UK law is a violation of the FTC Act, the Record reported.

“American consumers do not reasonably expect to be censored to appease a foreign power and may be deceived by such actions,” Ferguson told tech executives in a letter.

Another lawyer backing 4chan, Preston Byrne, seemed to echo Ferguson, telling the BBC, “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”



Software update bricks some Jeep 4xe hybrids over the weekend

Owners of some Jeep Wrangler 4xe hybrids have been left stranded after installing an over-the-air software update this weekend. The automaker pushed out a telematics update for the Uconnect infotainment system that evidently wasn’t ready, resulting in cars losing power while being driven.

Stranded Jeep owners have been detailing their experiences in forum and Reddit posts, as well as on YouTube. The buggy update doesn’t appear to brick the car immediately. Instead, the failure appears to occur while driving—a far more serious problem. For some, this happened close to home and at low speed, but others claim to have experienced a powertrain failure at highway speeds.

Jeep pulled the update after reports of problems, but the software had already downloaded to many owners’ cars by then. A member of Stellantis’ social engagement team told 4xe owners at a Jeep forum to ignore the update pop-up if they haven’t installed it yet.

Owners were also advised to avoid using either hybrid or electric modes if they had updated their 4xe and not already suffered a powertrain failure. Yesterday, Jeep pushed out a fix.

As CrowdStrike showed last year, Friday afternoons are a bad time to push out a software update. Now Stellantis has learned that lesson, too. Ars has reached out to Stellantis, and we’ll update this post if we get a reply.



Marvel gets meta with Wonder Man teaser

Marvel Studios has dropped the first teaser for Wonder Man, an eight-episode miniseries slated for a January release, ahead of its panel at New York Comic Con this weekend.

Part of the MCU’s Phase Six, the miniseries was created by Destin Daniel Cretton (Shang-Chi and the Legend of the Ten Rings) and Andrew Guest (Hawkeye), with Guest serving as showrunner. It has been in development since 2022.

The comic book version of the character is the son of a rich industrialist who inherits the family munitions factory but is being crushed by the competition: Stark Industries. Baron Zemo (Falcon and the Winter Soldier) then recruits him to infiltrate and betray the Avengers, giving him superpowers (“ionic energy”) via a special serum. He eventually becomes a superhero and Avengers ally, helping them take on Doctor Doom, among other exploits. Since we know Doctor Doom is the Big Bad of the two upcoming Avengers movies, a Wonder Man miniseries makes sense.

In the new miniseries, Yahya Abdul-Mateen II stars as Simon Williams, aka Wonder Man, an actor and stunt person with actual superpowers who decides to audition for the lead role in a superhero TV series—a reboot of an earlier Wonder Man incarnation. Demetrius Grosse plays Simon’s brother, Eric, aka Grim Reaper; Ed Harris plays Simon’s agent, Neal Saroyan; and Arian Moayed plays P. Cleary, an agent with the Department of Damage Control. Lauren Glazier, Josh Gad, Byron Bowers, Bechir Sylvain, and Manny McCord will also appear in as-yet-undisclosed roles.



“Extremely angry” Trump threatens “massive” tariff on all Chinese exports

The chairman of the House of Representatives’ Select Committee on the Chinese Communist Party (CCP), John Moolenaar (R-Mich.), issued a statement, suggesting that, unlike Trump, he’d seen China’s rare earths move coming. He pushed Trump to interpret China’s export controls as “an economic declaration of war against the United States and a slap in the face to President Trump.”

“China has fired a loaded gun at the American economy, seeking to cut off critical minerals used to make the semiconductors that power the American military, economy, and devices we use every day including cars, phones, computers, and TVs,” Moolenaar said. “Every American will be negatively affected by China’s action, and that’s why we must address America’s vulnerabilities and build our own leverage against China.”

To strike back forcefully, Moolenaar suggested passing a law he sponsored that he said would “end preferential trade treatment for China, build a resilient resource reserve of critical minerals, secure American research and campuses from Chinese influence, and strangle China’s technology sector with export controls instead of selling it advanced chips.”

Moolenaar also emphasized steps he recommended back in September that he claimed Trump could take to “create real leverage with China” in the face of its stranglehold on rare earths.

Those included “restricting or suspending Chinese airline landing rights in the US,” “reviewing export control policies governing the sale of commercial aircraft, parts, and maintenance services to China,” and “restricting outbound investment in China’s aviation sector in coordination with key allies.”

“These steps would send a clear message to Beijing that it cannot choke off critical supplies to our defense industries without consequences to its own strategic sectors,” Moolenaar wrote in his September letter to Trump. “By acting together, the US and its allies can strengthen our resilience, reinforce solidarity, and create real leverage with China.”



AMD and Sony’s PS6 chipset aims to rethink the current graphics pipeline

It feels like it was just yesterday that Sony hardware architect Mark Cerny was first teasing Sony’s “PS4 successor” and its “enhanced ray-tracing capabilities” powered by new AMD chips. Now that we’re nearly five full years into the PS5 era, it’s time for Sony and AMD to start teasing the new chips that will power what Cerny calls “a future console in a few years’ time.”

In a quick nine-minute video posted Thursday, Cerny sat down with Jack Huynh, the senior VP and general manager of AMD’s Computing and Graphics Group, to talk about “Project Amethyst,” a co-engineering effort between both companies that was also teased back in July. And while that Project Amethyst hardware currently only exists in the form of a simulation, Cerny said that the “results are quite promising” for a project that’s still in the “early days.”

Mo’ ML, fewer problems?

Project Amethyst is focused on going beyond traditional rasterization techniques that don’t scale well when you try to “brute force that with raw power alone,” Huynh said in the video. Instead, the new architecture is focused on more efficient running of the kinds of machine-learning-based neural networks behind AMD’s FSR upscaling technology and Sony’s similar PSSR system.

From the same source. Two branches. One vision.

My good friend and fellow gamer @cerny and I recently reflected on our shared journey — symbolized by these two pieces of amethyst, split from the same stone.

Project Amethyst is a co-engineering effort between @PlayStation and… pic.twitter.com/De9HWV3Ub2

— Jack Huynh (@JackMHuynh) July 1, 2025

While that kind of upscaling currently helps GPUs pump out 4K graphics in real time, Cerny said that the “nature of the GPU fights us here,” requiring calculations to be broken up into subproblems to be handled in a somewhat inefficient parallel process by the GPU’s individual compute units.

To get around this issue, Project Amethyst uses “neural arrays” that let compute units share data and process problems like a “single focused AI engine,” Cerny said. While the entire GPU won’t be connected in this manner, connecting small sets of compute units like this allows for more scalable shader engines that can “process a large chunk of the screen in one go,” Cerny said. That means Project Amethyst will let “more and more of what you see on screen… be touched or enhanced by ML,” Huynh added.



“Like putting on glasses for the first time”—how AI improves earthquake detection


AI is “comically good” at detecting small earthquakes—here’s why that matters.

Credit: Aurich Lawson | Getty Images

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven’t heard of this earthquake; even if you had been living in Calipatria, you wouldn’t have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools based on computer imaging have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes. What used to be the task of human analysts—and later, simpler computer programs—can now be done automatically and quickly by machine-learning tools.

These machine-learning tools can detect smaller earthquakes than human analysts, especially in noisy environments like cities. Earthquakes give valuable information about the composition of the Earth and what hazards might occur in the future.

“In the best-case scenario, when you adopt these new techniques, even on the same old data, it’s kind of like putting on glasses for the first time, and you can see the leaves on the trees,” said Kyle Bradley, co-author of the Earthquake Insights newsletter.

I talked with several earthquake scientists, and they all agreed that machine-learning methods have replaced humans for the better in these specific tasks.

“It’s really remarkable,” Judith Hubbard, a Cornell University professor and Bradley’s co-author, told me.

Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven’t materialized yet.

“It really was a revolution,” said Joe Byrnes, a professor at the University of Texas at Dallas. “But the revolution is ongoing.”

When an earthquake happens in one place, the shaking passes through the ground, similar to how sound waves pass through the air. In both cases, it’s possible to draw inferences about the materials the waves pass through.

Imagine tapping a wall to figure out if it’s hollow. Because a solid wall vibrates differently than a hollow wall, you can figure out the structure by sound.

With earthquakes, this same principle holds. Seismic waves pass through different materials (rock, oil, magma, etc.) differently, and scientists use these vibrations to image the Earth’s interior.

The main tool that scientists traditionally use is a seismometer. These record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers can measure the shaking in that particular location.

An old-fashioned physical seismometer. Today, seismometers record data digitally. Credit: Yamaguchi先生 on Wikimedia CC BY-SA 3.0

Scientists then process raw seismometer information to identify earthquakes.

Earthquakes produce multiple types of shaking, which travel at different speeds. Two types, Primary (P) waves and Secondary (S) waves, are particularly important, and scientists like to identify the start of each of these phases.

Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that “traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms.”

However, there are only so many earthquakes you can find and classify manually. Creating algorithms to effectively find and process earthquakes has long been a priority in the field—especially since the arrival of computers in the early 1950s.

“The field of seismology historically has always advanced as computing has advanced,” Bradley told me.

There’s a big challenge with traditional algorithms, though: They can’t easily find smaller quakes, especially in noisy environments.

Composite seismogram of common events. Note how each event has a slightly different shape. Credit: EarthScope Consortium CC BY 4.0

As we see in the seismogram above, many different events can cause seismic signals. If a method is too sensitive, it risks falsely detecting events as earthquakes. The problem is especially bad in cities, where the constant hum of traffic and buildings can drown out small earthquakes.

However, earthquakes have a characteristic “shape.” The magnitude 7.7 earthquake above looks quite different from the helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it’s almost certainly an earthquake.
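To make the template idea concrete, here is a minimal Python sketch of matched filtering via normalized cross-correlation. The function name, the loop, and the 0.8 threshold are illustrative assumptions for this article, not the actual pipeline any lab runs (production systems vectorize this and run it on GPUs):

```python
import numpy as np

def match_template(trace, template, threshold=0.8):
    """Slide a labeled earthquake waveform (the template) along a
    continuous seismogram and flag offsets where the normalized
    cross-correlation (a score in [-1, 1]) exceeds the threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(trace) - n + 1):
        window = trace[i:i + n]
        score = np.sum(t * (window - window.mean()) / (window.std() + 1e-12))
        if score > threshold:
            detections.append((i, score))
    return detections
```

A detection is declared wherever the score clears the threshold; in practice, every stretch of continuous data gets checked against thousands of templates.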

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross’ lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known, including the earthquake at the start of this story. Almost all of the 1.6 million new quakes they found were very small, magnitude 1 and below.

If you don’t have an extensive pre-existing dataset of templates, however, you can’t easily apply template matching. That isn’t a problem in Southern California—which already had a basically complete record of earthquakes down to magnitude 1.7—but it’s a challenge elsewhere.

Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.

There had to be a better way.

AI detection models solve all of these problems:

  • They are faster than template matching.

  • Because AI detection models are very small (around 350,000 parameters, compared to billions in LLMs like GPT-4), they can be run on consumer CPUs.

  • AI models generalize well to regions not represented in the original dataset.

As an added bonus, AI models can give better information about when the different types of earthquake shaking arrive. Timing the arrivals of the two most important waves—P and S waves—is called phase picking. It allows scientists to draw inferences about the structure of the quake. AI models can do this alongside earthquake detection.

The basic task of earthquake detection (and phase picking) looks like this:

Cropped figure from Earthquake Transformer—an attentive deep-learning model for simultaneous earthquake detection and phase picking. Credit: Nature Communications

The first three rows represent different directions of vibration (east–west, north–south, and up–down respectively). Given these three dimensions of vibration, can we determine if an earthquake occurred, and if so, when it started?

We want to detect the initial P wave, which arrives directly from the site of the earthquake. But this can be tricky because echoes of the P wave may get reflected off other rock layers and arrive later, making the waveform more complicated.

Ideally, then, our model outputs three things at every time step in the sample:

  1. The probability that an earthquake is occurring at that moment.

  2. The probability that the first P wave arrives at that moment.

  3. The probability that the first S wave arrives at that moment.

We see all three outputs in the fourth row: the detection in green, the P wave arrival in blue, and the S wave arrival in red. (There are two earthquakes in this sample.)
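As a rough illustration of how those per-sample probabilities become discrete picks, here is a small Python sketch. The threshold values and the single-event assumption are mine for clarity; real pickers scan for multiple local maxima so they can handle windows containing two quakes, like the sample above:

```python
import numpy as np

def extract_picks(detection, p_prob, s_prob, sampling_rate=100.0,
                  det_threshold=0.5, pick_threshold=0.3):
    """Turn per-sample probabilities (the model's three output
    channels) into arrival times, in seconds from the window start.
    Returns None if no earthquake clears the detection threshold."""
    if detection.max() < det_threshold:
        return None
    p_idx = int(np.argmax(p_prob))   # most likely P-wave onset
    s_idx = int(np.argmax(s_prob))   # most likely S-wave onset
    if p_prob[p_idx] < pick_threshold or s_prob[s_idx] < pick_threshold:
        return None
    return p_idx / sampling_rate, s_idx / sampling_rate
```

A real workflow would then convert these window-relative times into absolute timestamps and associate picks across many stations to locate the quake.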

To train an AI model, scientists take large amounts of labeled data, like what’s above, and do supervised training. I’ll describe one of the most widely used models: Earthquake Transformer, which was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.

AlexNet used convolutions, a neural network architecture that’s based on the idea that pixels that are physically close together are more likely to be related. The first convolutional layer of AlexNet broke an image down into small chunks—11 pixels on a side—and classified each chunk based on the presence of simple features like edges or gradients.

The next layer took the first layer’s classifications as input and checked for higher-level concepts such as textures or simple shapes.

Each convolutional layer analyzed a larger portion of the image and operated at a higher level of abstraction. By the final layers, the network was looking at the entire image and identifying objects like “mushroom” and “container ship.”

Images are two-dimensional, so AlexNet is based on two-dimensional convolutions. By contrast, seismograph data is one-dimensional, so Earthquake Transformer uses one-dimensional convolutions over the time dimension. The first layer analyzes vibration data in 0.1-second chunks, while later layers identify patterns over progressively longer time periods.

It’s difficult to say what exact patterns the earthquake model is picking out, but we can analogize this to a hypothetical audio transcription model using one-dimensional convolutions. That model might first identify consonants, then syllables, then words, then sentences over increasing time scales.

Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.

The model also uses an attention layer in the middle of the model to mix information between different parts of the time series. The attention mechanism is most famous in large language models, where it helps pass information between words. It plays a similar role in seismographic detection. Earthquake seismograms have a general structure: P waves followed by S waves followed by other types of shaking. So if a segment looks like the start of a P wave, the attention mechanism helps it check that it fits into a broader earthquake pattern.

All of the Earthquake Transformer’s components are standard designs from the neural network literature. Other successful detection models, like PhaseNet, are even simpler. PhaseNet uses only one-dimensional convolutions to pick the arrival times of earthquake waves. There are no attention layers.
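For readers who want to see what one-dimensional convolutions over the time dimension look like in code, here is a toy PhaseNet-flavored picker in PyTorch. The layer counts and sizes are invented for illustration; the real networks are much deeper encoder-decoders, and Earthquake Transformer adds the attention layer described above:

```python
import torch
import torch.nn as nn

class TinyPicker(nn.Module):
    """A toy 1-D convolutional phase picker: three input channels
    (east-west, north-south, up-down) go in; three per-sample
    probability channels (detection, P arrival, S arrival) come out."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Each Conv1d slides a short window along the time axis,
            # the 1-D analogue of AlexNet's 2-D image patches.
            nn.Conv1d(3, 16, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv1d(32, 3, kernel_size=11, padding=5),
        )

    def forward(self, x):                   # x: (batch, 3, n_samples)
        return torch.sigmoid(self.net(x))   # probabilities in [0, 1]

# 30 seconds of 100 Hz three-component data in, per-sample picks out.
waveform = torch.randn(1, 3, 3000)
probs = TinyPicker()(waveform)              # shape: (1, 3, 3000)
```

At a few thousand parameters, even this toy runs comfortably on a laptop CPU, which hints at why the real 350,000-parameter models are so cheap to deploy.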

Generally, there hasn’t been “much need to invent new architectures for seismology,” according to Byrnes. The techniques derived from image processing have been sufficient.

What made these generic architectures work so well then? Data. Lots of it.

Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The paper for STEAD explicitly mentions ImageNet as an inspiration.) Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.

All recorded earthquakes in the Stanford Earthquake Dataset. Credit: IEEE (CC BY 4.0)

The combination of the data and the architecture just works. The current models are “comically good” at identifying and classifying earthquakes, according to Byrnes. Typically, machine-learning methods find 10 or more times the quakes that were previously identified in an area. You can see this directly in an Italian earthquake catalog:

From Machine learning and earthquake forecasting—next steps by Beroza et al. Credit: Nature Communications (CC-BY 4.0)

AI tools won’t necessarily detect more earthquakes than template matching. But AI-based techniques are much less compute- and labor-intensive, making them more accessible to the average research project and easier to apply in regions around the world.

All in all, these machine-learning models are so good that they’ve almost completely supplanted traditional methods for detecting and phase-picking earthquakes, especially for smaller magnitudes.

The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate.

You might think AI tools would help predict earthquakes, but that doesn’t seem to have happened yet.

The applications are more technical and less flashy, said Cornell’s Judith Hubbard.

Better AI models have given seismologists much more comprehensive earthquake catalogs, which have unlocked “a lot of different techniques,” Bradley said.

One of the coolest applications is in understanding and imaging volcanoes. Volcanic activity produces a large number of small earthquakes, whose locations help scientists understand the structure of the magma system. In a 2022 paper, John Wilding and co-authors used a large AI-generated earthquake catalog to create this incredible image of the structure of the Hawaiian volcanic system.

Each dot represents an individual earthquake. Credit: Wilding et al., The magmatic web beneath Hawai‘i.

They provided direct evidence of a previously hypothesized magma connection between the deep Pāhala sill complex and Mauna Loa’s shallow volcanic structure. You can see this in the image with the arrow labeled as Pāhala-Mauna Loa seismicity band. The authors were also able to clarify the structure of the Pāhala sill complex into discrete sheets of magma. This level of detail could potentially facilitate better real-time monitoring of earthquakes and more accurate eruption forecasting.

Another promising area is lowering the cost of dealing with huge datasets. Distributed Acoustic Sensing (DAS) is a powerful technique that uses fiber-optic cables to measure seismic activity across the entire length of the cable. A single DAS array can produce “hundreds of gigabytes of data” a day, according to Jiaxuan Li, a professor at the University of Houston. That much data can produce extremely high-resolution datasets—enough to pick out individual footsteps.

AI tools make it possible to very accurately time earthquakes in DAS data. Before the introduction of AI techniques for phase picking in DAS data, Li and some of his collaborators attempted to use traditional techniques. While these “work roughly,” they weren’t accurate enough for their downstream analysis. Without AI, much of his work would have been “much harder,” he told me.

Li is also optimistic that AI tools will be able to help him isolate “new types of signals” in the rich DAS data in the future.

Not all AI techniques have paid off

As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

“The schools want you to put the word AI in front of everything,” Byrnes said. “It’s a little out of control.”

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they’ve seen a lot of papers based on AI techniques that “reveal a fundamental misunderstanding of how earthquakes work.”

They pointed out that graduate students can feel pressure to specialize in AI methods at the cost of learning less about the fundamentals of the scientific field. They fear that if this type of AI-driven research becomes entrenched, older methods will get “out-competed by a kind of meaninglessness.”

While these are real issues, and ones Understanding AI has reported on before, I don’t think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology for the better.

That’s pretty cool.

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell Fellowship. Subscribe to Understanding AI to get more from Tim and Kai.



It’s Prime Day 2025 part two, and here are more of the best deals we could find


Updated deals on keyboards, laptops, chargers, cameras, and lots of other stuff!


Optimus Prime, the patron saint of Prime Day, observed in midtown Manhattan in June 2023. Credit: Raymond Hall / Getty Images


Portable power stations

Streaming gear

Cameras

MacBooks

Other laptops

Keyboards and mice

Monitors

Android phones

Indoor security cameras

Ars Technica may earn compensation for sales from links on this post through affiliate programs.




Deloitte will refund Australian government for AI hallucination-filled report

The Australian Financial Review reports that Deloitte Australia will offer the Australian government a partial refund for a report that was littered with AI-hallucinated quotes and references to nonexistent research.

Deloitte’s “Targeted Compliance Framework Assurance Review” was finalized in July and published by Australia’s Department of Employment and Workplace Relations (DEWR) in August (Internet Archive version of the original). The report, which cost Australian taxpayers nearly $440,000 AUD (about $290,000 USD), focuses on the technical framework the government uses to automate penalties under the country’s welfare system.

Shortly after the report was published, though, Sydney University Deputy Director of Health Law Chris Rudge noticed citations to multiple papers and publications that did not exist. That included multiple references to nonexistent reports by Lisa Burton Crawford, a real professor at the University of Sydney law school.

“It is concerning to see research attributed to me in this way,” Crawford told the AFR in August. “I would like to see an explanation from Deloitte as to how the citations were generated.”

“A small number of corrections”

Deloitte and the DEWR buried that explanation in an updated version of the original report published Friday “to address a small number of corrections to references and footnotes,” according to the DEWR website. On page 58 of that 273-page updated report, Deloitte added a reference to “a generative AI large language model (Azure OpenAI GPT-4o) based tool chain” that was used as part of the technical workstream to help “[assess] whether system code state can be mapped to business requirements and compliance needs.”
