Author name: Beth Washington


High-severity vulnerability in Passwordstate credential manager. Patch now.

The maker of Passwordstate, an enterprise-grade password manager for storing companies’ most privileged credentials, is urging customers to promptly install an update that fixes a high-severity vulnerability hackers can exploit to gain administrative access to their vaults.

The authentication bypass allows hackers to create a URL that accesses an emergency access page for Passwordstate. From there, an attacker could pivot to the administrative section of the password manager. A CVE identifier isn’t yet available.

Safeguarding enterprises’ most privileged credentials

Click Studios, the Australia-based maker of Passwordstate, says the credential manager is used by 29,000 customers and 370,000 security professionals. The product is designed to safeguard organizations’ most privileged and sensitive credentials. Among other things, it integrates into Active Directory, the service Windows network admins use to create, modify, and manage user accounts. It can also be used for handling password resets, event auditing, and remote session logins.

On Thursday, Click Studios notified customers that it had released an update that patches two vulnerabilities.

The authentication bypass vulnerability is “associated with accessing the core Passwordstate Products’ Emergency Access page, by using a carefully crafted URL, which could allow access to the Passwordstate Administration section,” Click Studios said. The company said the severity level of the vulnerability was high.


New dinosaur species is the punk rock version of an ankylosaur

And we have known for sure that the armor was around back then, given that we’ve found the skin-derived osteoderms that make up the armor in Jurassic deposits. But with little more than a rib and a handful of mouth parts to go on, it wasn’t really possible to say much more than that.

Until now, that is. Because the new Spicomellus remains show extremely clearly that the armor of ankylosaurs got less elaborate over time.

The small, solid-looking spikes found along the edges of later ankylosaurs? Forget those. Spicomellus had a back that was probably bristling with sharper spines, along with far larger ones along its outer edges. Each rib appears to have generated as many as six individual spikes. At a handful of locations, these spikes extended out to nearly a meter, looking more like lances than anything needed to ward off a close-in attack.

And the largest of these were along its neck. On the upper surface of its neck, several osteoderms fused to form a massive half-collar of bone and then extended out five or more individual spikes, each among the longest on the animal’s body. And there were three of these structures along the neck. “No known ankylosaur possesses any condition close to the extremely long pairs of spines on the cervical half-ring of Spicomellus,” its discoverers note.

As if its hedgehog-on-acid appearance weren’t enough, handles present on the tail vertebrae suggest that it also had a weaponized tail. All told, the researchers sum things up by saying, “The new specimen reveals extreme dermal armour modifications unlike those of any other vertebrate, extinct or extant, which fall far outside of the range of morphologies shown by other armoured dinosaurs.”

Out go the hypotheses

Because it’s so unusual, the skeleton’s characteristics are difficult to place within a neat family tree of the ankylosaurs. The researchers note that some details of its skeleton do suggest Spicomellus groups among the ankylosaurs, and they conclude that it’s probably an early branch from the main lineage. But without any other significant examples from the lineage at that time, it’s an extremely tentative conclusion. Still, the alternative is that this thing is unrelated to the only other organisms that share at least a few of its bizarre features, which is a difficult idea to swallow.


CDC director has been ousted just weeks after Senate confirmation

Georges Benjamin, executive director of the American Public Health Association, told the outlet that Monarez “values science, is a solid researcher, and has a history of being a good manager. We’re looking forward to working with her.”

A low point for the agency

The reported ouster comes at what feels like a nadir for the CDC. The agency has lost hundreds of staff from layoffs and buyouts. Vital health programs have been shuttered or hampered. Dangerous rhetoric and health misinformation from Kennedy and other health officials in the Trump administration have made once-respected CDC experts feel vilified by the public and like targets of hate. Kennedy himself has falsely called the COVID-19 shots the “deadliest vaccine ever made” and the CDC a “cesspool of corruption,” for example.

On August 8, a gunman warped by vaccine disinformation opened fire on the CDC campus. Of nearly 500 shots fired, about 200 struck six CDC buildings as terrified staff dove for safety. One local police officer was killed in the incident. The gunman had specifically targeted the CDC for the shooting and blamed COVID-19 vaccines for his health problems.

Additional exits reported

After news broke of Monarez’s removal, Stat News reported a wave of resignations among CDC leadership. The high-ranking departures include Daniel Jernigan, director of the National Center for Emerging and Zoonotic Infectious Diseases; Deb Houry, chief medical officer; and Demetre Daskalakis, director of the National Center for Immunization and Respiratory Diseases.

“I am not able to serve in this role any longer because of the ongoing weaponization of public health,” Daskalakis said in a message to staff seen by Stat.

“I am committed to protecting the public’s health, but the ongoing changes prevent me from continuing in my job as a leader of the agency,” Houry wrote in a message to staff. Houry added that science should “never be censored or subject to political interpretations.”

Earlier today, Politico reported that Jennifer Layden, director of the agency’s Office of Public Health Data, Surveillance, and Technology, has also resigned.

8/27/2025 8:15 pm ET: This post has been updated to include the social media post from HHS, reporting from the Washington Post on the circumstances around Monarez’s exit, additional resignations reported by Stat and Politico, and the statement from Monarez’s lawyers.


2025 VW Jetta GLI: Save the manuals, but not like this


The American sedan take on a GTI

Specs mean nothing if you get the feel and execution wrong.

A white VW Jetta

Built in Mexico, the Volkswagen Jetta is a North American sedan take on the Golf hatchback. Credit: Jim Resnick


Manual transmissions have gone the way of the dodo, but you can still find a few out there. Bless Volkswagen for keeping the helical gears turning, both literally and figuratively. The 2025 Jetta GLI, Volkswagen’s sporty sedan, still offers a gear lever with actual gears attached at the other end, and a third pedal hanging down from under the dash. Meanwhile, Golf GTI fans are still sobbing in their beer because 2024 was the last model year you could row your own in the hot hatch—now it’s paddles only.

Volkswagen updated the 2025 Jetta GLI with a new grille, LED headlights, and light bars that connect across both the front grille and rear taillights. There’s a red accent stripe that runs across the lower front fascia and turns up at the front corners, somewhat like The Joker’s lipstick, but way less menacing. It’s less distinctive than the Golf GTI, though, and the design even reminds me of the 2017-era Honda Accord a bit. So, yes, in a face-off, the Golf GTI wins.

Our test GLI came with the Black Package, which blackens the wheels and side mirror caps. The Monument Gray color option pairs with a black roof, which must seem like a good idea to people who don’t live in the Southwest, where cars overheat before they’re even started.

A black Jetta wheel

Our test car had the black package. Credit: Jim Resnick

Performance: Punch without poetry

VW’s long-running EA888 2.0 L engine, which debuted back in 2007 in the Audi A3, resides under the hood. Now in its fourth turbocharged generation, it develops a healthy 228 hp (170 kW) and 258 lb-ft (350 Nm) of torque, entirely respectable numbers from modest displacement and compact external dimensions.

Mated to this particular 6-speed manual, the engine has its work cut out for it. On my very first drive, before examining the gearbox ratios, I could tell that the 6-speed manual had massive gaps between first, second, and third gears.

Diving further into the gearing matter, the ratio spread between first and third gears is vastly wider in the 6-speed manual transmission than in the 7-speed DSG semi-automatic gearbox. This means that as you upshift the manual, the engine is faced with a huge drop in engine revs when you let out the clutch, placing the engine well below the rev range it would prefer to operate within to provide maximum power.

VW Jetta engine bay

EA888 in the house. Credit: Jim Resnick

Let’s look at the ratios, and remember that a lower numerical value means a “taller” or “higher” ratio, just like on multi-speed bicycles. The manual’s first gear is 3.77:1, where the DSG’s is 3.40:1. Upshift to the 2.09:1 second gear in the manual, and you select a gear that’s a whopping 45 percent taller than first. By contrast, the same 1-2 shift in the DSG (from 3.40:1 up to 2.75:1) results in a gear ratio only 19 percent taller—a far narrower gap.

Third gear tells a similar story. The 6-speed manual’s third ratio (1.47:1) is 17 percent higher than the 1.77:1 ratio in the DSG (again, this “taller” gear giving 17 percent less mechanical advantage). Advantage: automatic.

Closer ratios mean better, faster engine torque recovery and better continued acceleration, because the engine will be spinning in the happier part of its power band—engines being happiest when revving at their torque peak and beyond.
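The percent-taller figures above can be checked with a quick sketch. The ratios come straight from the article; `pct_taller` is just an illustrative helper (not anything from VW) computing the reduction 1 − next/current that the text uses:

```python
# Gear ratios cited in the review (6-speed manual vs. 7-speed DSG).
MANUAL = {1: 3.77, 2: 2.09, 3: 1.47}
DSG = {1: 3.40, 2: 2.75, 3: 1.77}

def pct_taller(current: float, taller: float) -> int:
    """How much 'taller' (numerically lower) one ratio is than another,
    expressed as the percent reduction used in the text: 1 - taller/current."""
    return round((1 - taller / current) * 100)

print(pct_taller(MANUAL[1], MANUAL[2]))  # manual 1-2 shift -> 45
print(pct_taller(DSG[1], DSG[2]))        # DSG 1-2 shift    -> 19
print(pct_taller(DSG[3], MANUAL[3]))     # manual 3rd vs. DSG 3rd -> 17
```

Run it and the manual’s 1-2 gap comes out at more than double the DSG’s, which is the whole complaint.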

Now, you might well argue that the manual’s third gear gives a higher top speed in-gear than the DSG automatic’s. And that’s 100 percent true. But it’s also irrelevant when you have three (or four!) more gears left to go in the transmission.

And then there’s the action of the shifter itself, with very long throws from forward to aft gates.

A white VW Jetta in profile

It’s quite handsome from some angles. Credit: Jim Resnick

But wait. I began this diatribe by complimenting the Jetta GLI for still offering a choice of manual or automatic gearbox. Indeed, if the manual gearbox had the DSG automatic’s ratios, the paragraphs above would have a very different tenor. The lesson here is that not all manuals are created equal.

We can also look objectively at the stopwatch, using others’ published figures (don’t take our word for it). Car and Driver cites a time of 6.0 seconds to 60 mph for the manual GLI, where the DSG automatic did the dash in 5.6 seconds, a big gap.

Regardless of which transmission is used, a limited-slip differential tries to put the power down evenly, and adaptive suspension with multiple driving modes serves up a responsive connectedness to, or relative isolation from, the road surface. Compared to the standard GTI (not the Golf R), the Jetta GLI still rides with a greater accent on ride comfort, and that’s not always a bad thing, especially given the Jetta’s greater rear seat accommodations, which offer 2.4 inches (61 mm) more rear legroom than the GTI. Real adults can live back there for hours at a time without fidgeting, whereas you likely tickle that threshold in a GTI after a little over an hour.

Interior & tech

Inside, the GLI features perforated leather heated and cooled seats, a leather-wrapped and flat-bottom steering wheel that is still saddled with capacitive multifunction controls, a digital instrument cluster that can be configured with traditional dials or a compartmentalized digital-looking display, plus an 8-inch infotainment screen. While the latter may seem small compared to other cars that sport TV-size tablets perched on the dash, it at least comes fully equipped with Apple CarPlay and Android Auto. There’s a slow creep elsewhere in the industry to make this functionality either optional or simply unavailable, which is unforgivable in an era where we can hardly survive without our smartphones.

While many of the controls live within the infotainment touchscreen, the major climate controls reside just below, using capacitive sliders. These sliders are nowhere near as intuitive as switches and knobs, but at least you don’t need to hunt and peck through endless menus to find them while driving.

The Jetta isn’t as modern as the 8th-generation Golf inside, but it’s had a bit of a tech upgrade. Credit: Jim Resnick

The GLI comes standard with active driver assists, including blind-spot warning, forward collision warning, emergency braking, adaptive cruise control, lane-keeping assist, and emergency assist.

Volkswagen managed to incorporate some pragmatic features and comforts. A 15 W wireless and cooled charging pad sits up front, and the trunk sports 14.1 cubic feet (400 L) of space with an actual spare tire under the trunk floor (although it’s a compact spare with limited mileage range).

The premium Beats Audio system in the Jetta GLI pumps 400 W through nine speakers, including a subwoofer. With all those speakers and electrons going for it, I expected way more than it delivered. Instead, it produces muddy bass that proves inescapable, whether you attenuate the bass EQ or lower the subwoofer gain.

Despite the preponderance of directionless bass, the system produces very little body to the music played, whether it’s jazz from Bill Evans or punk from Bad Religion. Midrange and high-end reproduction is no better. Shrill treble joins the errant bass, making everything sound muddy and indistinct. Delicate acoustic piano passages have little clarity, and Joni Mitchell hides behind a giant curtain of Saran Wrap. Poor Joni.

Driving the GLI is sometimes joyful, as the engine responds eagerly across all RPMs. The chassis and suspension prove willing, though a bit soft for a sports sedan. VW’s steering feels communicative, but not among the best of the modern electrically boosted lot.

VW equips this GLI with all-season Hankook Energy GT tires, sized 225/40R18. I cite these tires specifically because they underperform: they simply don’t generate the grip a sporty sedan deserves. So, on a scale of 1 to 10, if the GLI’s engine is a 9, the gearbox a 5, and the interior an 8.5, the GLI’s Hankook tires are a 6.

The GLI’s brakes are a version of the tire story. Despite borrowing front rotors and calipers from the lovely Golf R, they proved grabby, overboosted, and touchy in the GLI. As with the gearbox and tires, the spec sheet tells you nothing about feel and execution.

The GLI’s fuel economy lands at a decent 26/36/30 city/highway/combined mpg (9/6.5/7.8 L/100 km). In thoroughly mixed driving, I achieved an average of 29.1 mpg (8 L/100 km) over my approximately 400 miles (644 km).
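The metric figures in parentheses follow from the standard conversion 235.215 ÷ mpg = L/100 km; a minimal sketch (the helper name is ours, not from any library):

```python
def mpg_to_l_per_100km(mpg: float) -> float:
    """Convert US miles per gallon to liters per 100 km."""
    return 235.215 / mpg

# City, highway, combined EPA figures, plus the observed average.
for mpg in (26, 36, 30, 29.1):
    print(f"{mpg} mpg = {mpg_to_l_per_100km(mpg):.1f} L/100 km")
# -> 9.0, 6.5, 7.8, and 8.1 L/100 km, matching the rounded figures above
```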

The overall truth

The 2025 Jetta GLI certainly possesses sporty aspirations, but a few things hold it back from being the complete package that its Golf GTI stablemate is. Although the Golf GTI no longer offers a manual, the GLI’s 6-speed transmission disappoints both in feel and performance, with huge gaps between cogs. Of course, this malady could be overcome by ordering a DSG automatic GLI, but then any fun gleaned by rowing your gears is also lost.

This car could be better than it is. Credit: Jim Resnick

Closer to the road, mediocre tires generate modest grip. Compared to the Golf, the Jetta gains in rear seat legroom but loses in feel, performance, and tenacity. If it’s performance with practicality you’re after, the $35,045 price of this GLI as tested will get you what you need. But you’ll want something a bit spicier.

Photo of Jim Resnick

A veteran of journalism, product planning and communications in the automotive and music space, Jim reports, critiques and lectures on autos, music and culture.


Google improves Gemini AI image editing with “nano banana” model

Something unusual happened in the world of AI image editing recently. A new model, known as “nano banana,” started making the rounds with impressive abilities that landed it at the top of the LMArena leaderboard. Now, Google has revealed that nano banana is an innovation from Google DeepMind, and it’s being rolled out to the Gemini app today.

AI image editing allows you to modify images with a prompt rather than mucking around in Photoshop. Google first provided editing capabilities in Gemini earlier this year, and the model was more than competent out of the gate. But like all generative systems, the non-deterministic nature meant that elements of the image would often change in unpredictable ways. Google says nano banana (technically Gemini 2.5 Flash Image) has unrivaled consistency across edits—it can actually remember the details instead of rolling the dice every time you make a change.

Google says subjects will retain their appearance as you edit.

This unlocks several interesting uses for AI image editing. Google suggests uploading a photo of a person and changing their style or attire. For example, you can reimagine someone as a matador or a ’90s sitcom character. Because the nano banana model can maintain consistency through edits, the results should still look like the person in the original source image. This is also the case when you make multiple edits in a row. Google says that even down the line, the results should look like the original source material.


Scientists unlock secret to thick, stable beer foams

For many beer lovers, a nice thick head of foam is one of life’s pure pleasures, and the longer that foam lasts, the better the beer-drinking experience. A team of Swiss researchers spent seven years studying why some beer foams last longer than others and found that the degree of fermentation—i.e., whether a given beer has been singly, doubly, or triply fermented—is crucial, according to a new paper published in the journal Physics of Fluids.

As previously reported, foams are ubiquitous in everyday life, found in foods (whipped cream), beverages (beer, cappuccino), shaving cream and hair-styling mousse, packing peanuts, building insulation, flame-retardant materials, and so forth. All foams are the result of air being beaten into a liquid formula that contains some kind of surfactant (surface-active agent), usually fats or proteins in edible foams, or chemical additives in non-edible products. That surfactant strengthens the liquid film walls of the bubbles to keep them from collapsing.

Individual bubbles typically form a sphere because that’s the shape with the minimum surface area for any volume and hence is the most energy-efficient. One reason for the minimizing principle when it comes to a bubble’s shape is that many bubbles can then tightly pack together to form a foam. But bubbles “coarsen” over time, the result of gravity pulling down on the liquid and thinning out the walls. Eventually, they start to look more like soccer balls (polyhedrons). In a coarsening foam, smaller bubbles are gradually absorbed by larger ones. There is less and less liquid to separate the individual bubbles, so they press together to fill the space.

This “jamming” is why foams are typically far more rigid than their gas (95 percent) and liquid (5 percent) components. The more tightly the bubbles jam together, the less they can move around and the greater the pressure inside them becomes, giving them properties of a solid.

Various factors can affect foam stability. For instance, in 2019, Japanese researchers investigated a phenomenon known as “collective bubble collapse,” or CBC, in which breaking one bubble at the edge of a foam results in a cascading effect as the breakage spreads to other bubbles in the foam. They identified two distinct mechanisms for the resulting CBCs: a so-called “propagating mode,” in which a broken bubble is absorbed into the liquid film, and a “penetrating mode,” in which the breakage of a bubble causes droplets to shoot off and hit other bubbles, causing them to break in turn.


Ars Live: Consumer tech firms stuck scrambling ahead of looming chip tariffs

And perhaps the biggest confounding factor for businesses attempting to align supply chain choices with predictable tariff costs is looming chip tariffs. Trump has suggested those could come in August, but nearing the end of the month, there’s still no clarity there.

As tech firms brace for chip tariffs, Brzytwa will share CTA’s forecast based on a survey of industry experts, revealing the unique sourcing challenges chip tariffs will likely pose. A particular pain point: Trump seems likely to impose taxes not just on imports of semiconductors but also on any downstream product that includes a chip.

Because different electronics parts are typically assembled in different countries, supply chains for popular products have suddenly become a winding path, with potential tariff obstacles cropping up at any turn.

To Trump, complicating supply chains seems to be the point: he intends to divert entire supply chains into the country to make the US a tech manufacturing hub, supposedly at the expense of his prime trade war target, China—which today is considered a world manufacturing “superpower.”

However, The New York Times this week suggested that Trump’s bullying tactics aren’t working on China, and experts suggest that now his chip tariffs risk not just spiking prices but throttling AI innovation in the US—just as China’s open source AI models shake up markets globally.

Brzytwa will share CTA research showing how the trade war has rattled, and will likely continue to rattle, tech firms into the foreseeable future. He’ll explain why tech firms can’t quickly or cheaply divert chip supply chains—and why policy that neglects to understand tech firms’ positions could be a lose-lose, putting Americans in danger of losing affordable access to popular tech without achieving Trump’s goal of altering China’s trade behavior.



Google will block sideloading of unverified Android apps starting next year

Android Developer Console

An early look at the streamlined Android Developer Console for sideloaded apps. Credit: Google

Google says that only apps with verified identities will be installable on certified Android devices, which is virtually every Android-based device—if it has Google services on it, it’s a certified device. If you have a non-Google build of Android on your phone, none of this applies. However, that’s a vanishingly small fraction of the Android ecosystem outside of China.

Google plans to begin testing this system with early access in October of this year. In March 2026, all developers will have access to the new console to get verified. In September 2026, Google plans to launch this feature in Brazil, Indonesia, Singapore, and Thailand. The next step is still hazy, but Google is targeting 2027 to expand the verification requirements globally.

A seismic shift

This plan comes at a major crossroads for Android. The ongoing Google Play antitrust case brought by Epic Games may finally force changes to Google Play in the coming months. Google lost its appeal of the verdict several weeks ago, and while it plans to appeal the case to the US Supreme Court, the company will have to begin altering its app distribution scheme, barring further legal maneuvering.

Credit: Google

Among other things, the court has ordered that Google must distribute third-party app stores and allow Play Store content to be rehosted in other storefronts. Giving people more ways to get apps could increase choice, which is what Epic and other developers wanted. However, third-party sources won’t have the deep system integration of the Play Store, which means users will be sideloading these apps without Google’s layers of security.

It’s hard to say how much of a genuine security problem this is. On one hand, it makes sense Google would be concerned—most of the major malware threats to Android devices spread via third-party app repositories. However, enforcing an installation whitelist across almost all Android devices is heavy-handed. This requires everyone making Android apps to satisfy Google’s requirements before virtually anyone will be able to install their apps, which could help Google retain control as the app market opens up. While the requirements may be minimal right now, there’s no guarantee they will stay that way.

The documentation currently available doesn’t explain what will happen if you try to install a non-verified app, nor how phones will check for verification status. Presumably, Google will distribute this whitelist in Play Services as the implementation date approaches. We’ve reached out for details on that front and will report if we hear anything.


SpaceX’s latest Dragon mission will breathe more fire at the space station

“Our capsule’s engines are not pointed in the right direction for optimum boost,” said Sarah Walker, SpaceX’s director of Dragon mission management. “So, this trunk module has engines pointed in the right direction to maximize efficiency of propellant usage.”

When NASA says it’s the right time, SpaceX controllers will command the Draco thrusters to ignite and gently accelerate the massive 450-ton complex. All told, the reboost kit can add about 20 mph, or 9 meters per second, to the space station’s already-dizzying speed, according to Walker.
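Walker’s two figures agree with each other; the conversion is a one-liner (1 mph is defined as exactly 0.44704 m/s):

```python
MPH_TO_MS = 0.44704  # exact definition: 1 mph in meters per second

boost_mph = 20
print(f"{boost_mph} mph = {boost_mph * MPH_TO_MS:.1f} m/s")  # -> 8.9 m/s, i.e. roughly 9
```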

Spetch said that’s roughly equivalent to the total reboost impulse provided by one-and-a-half Russian Progress cargo vehicles. That’s about one-third to one-fourth of the total orbit maintenance the ISS needs in a year.

“The boost kit will help sustain the orbiting lab’s altitude, starting in September, with a series of burns planned periodically throughout the fall of 2025,” Spetch said.

After a few months docked at the ISS, the Dragon cargo capsule will depart and head for a parachute-assisted splashdown in the Pacific Ocean off the coast of California. SpaceX will recover the pressurized capsule to fly again, while the trunk containing the reboost kit will jettison and burn up in the atmosphere.

SpaceX’s Dragon spacecraft approaches the International Space Station for docking at 7:05 am EDT (11:05 UTC) on Monday. Credit: NASA TV/Ars Technica

While this mission is SpaceX’s 33rd cargo flight to the ISS under the auspices of NASA’s multibillion-dollar Commercial Resupply Services contract, it’s also SpaceX’s 50th overall Dragon mission to the outpost. This tally includes 17 flights of the human-rated Crew Dragon.

“With CRS-33, we’ll mark our 50th voyage to ISS,” Walker said. “Just incredible. Together, these missions have (carried) well over 300,000 pounds of cargo and supplies to the orbiting lab and well over 1,000 science and research projects that are not only helping us to understand how to live and work effectively in space… but also directly contributing to critical research that serves our lives here on Earth.”

Future Dragon trunks will be able to accommodate a reboost kit or unpressurized science payloads, depending on NASA’s needs at the space station.

The design of the Dragon reboost kit is a smaller-scale version of what SpaceX will build for a much larger Dragon trunk under an $843 million contract signed with NASA last year for the US Deorbit Vehicle. This souped-up Dragon will dock with the ISS and steer it back into the atmosphere after the lab’s decommissioning in the early 2030s. The deorbit vehicle will have 46 Draco thrusters—16 to control the craft’s orientation and 30 in the trunk to provide the impulse needed to drop the station out of orbit.


With AI chatbots, Big Tech is moving fast and breaking people


Why AI chatbots validate grandiose fantasies about revolutionary discoveries that don’t exist.

Allan Brooks, a 47-year-old corporate recruiter, spent three weeks and 300 hours convinced he’d discovered mathematical formulas that could crack encryption and build levitation machines. According to a New York Times investigation, his million-word conversation history with an AI chatbot reveals a troubling pattern: More than 50 times, Brooks asked the bot to check if his false ideas were real. More than 50 times, it assured him they were.

Brooks isn’t alone. Futurism reported on a woman whose husband, after 12 weeks of believing he’d “broken” mathematics using ChatGPT, almost attempted suicide. Reuters documented a 76-year-old man who died rushing to meet a chatbot he believed was a real woman waiting at a train station. Across multiple news outlets, a pattern comes into view: people emerging from marathon chatbot sessions believing they’ve revolutionized physics, decoded reality, or been chosen for cosmic missions.

These vulnerable users fell into reality-distorting conversations with systems that can’t tell truth from fiction. Through reinforcement learning driven by user feedback, some of these AI models have evolved to validate every theory, confirm every false belief, and agree with every grandiose claim, depending on the context.

Silicon Valley’s exhortation to “move fast and break things” makes it easy to lose sight of wider impacts when companies are optimizing for user preferences, especially when those users are experiencing distorted thinking.

So far, AI isn’t just moving fast and breaking things—it’s breaking people.

A novel psychological threat

Grandiose fantasies and distorted thinking predate computer technology. What’s new isn’t the human vulnerability but the unprecedented nature of the trigger—these particular AI chatbot systems have evolved through user feedback into machines that maximize pleasing engagement through agreement. Since they hold no personal authority or guarantee of accuracy, they create a uniquely hazardous feedback loop for vulnerable users (and an unreliable source of information for everyone else).

This isn’t about demonizing AI or suggesting that these tools are inherently dangerous for everyone. Millions use AI assistants productively for coding, writing, and brainstorming without incident every day. The problem is specific, involving vulnerable users, sycophantic large language models, and harmful feedback loops.

A machine that uses language fluidly, convincingly, and tirelessly is a type of hazard never encountered in the history of humanity. Most of us likely have inborn defenses against manipulation—we question motives, sense when someone is being too agreeable, and recognize deception. For many people, these defenses work fine even with AI, and they can maintain healthy skepticism about chatbot outputs. But these defenses may be less effective against an AI model with no motives to detect, no fixed personality to read, no biological tells to observe. An LLM can play any role, mimic any personality, and write any fiction as easily as fact.

Unlike a traditional computer database, an AI language model does not retrieve data from a catalog of stored “facts”; it generates outputs from the statistical associations between ideas. Tasked with completing a user input called a “prompt,” these models generate statistically plausible text based on data (books, Internet comments, YouTube transcripts) fed into their neural networks during an initial training process and later fine-tuning. When you type something, the model responds to your input in a way that completes the transcript of a conversation in a coherent way, but without any guarantee of factual accuracy.

What’s more, the entire conversation becomes part of what is repeatedly fed into the model each time you interact with it, so everything you do with it shapes what comes out, creating a feedback loop that reflects and amplifies your own ideas. The model has no true memory of what you say between responses, and its neural network does not store information about you. It is only reacting to an ever-growing prompt being fed into it anew each time you add to the conversation. Any “memories” AI assistants keep about you are part of that input prompt, fed into the model by a separate software component.
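The data flow described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; `fake_model`, `chat_turn`, and the message format are hypothetical stand-ins chosen to show the mechanics:

```python
# Sketch of how a "stateful" chat is built on top of a stateless model:
# the full transcript is re-sent as one growing prompt on every turn.
# All names here are hypothetical, for illustration only.

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM: deterministic, just to show the data flow."""
    return f"[reply to {prompt.count('User:')} user message(s)]"

def chat_turn(history: list[dict], user_message: str) -> list[dict]:
    history = history + [{"role": "user", "content": user_message}]
    # The ENTIRE history is flattened into the prompt each time --
    # the model itself stores nothing between calls.
    prompt = "\n".join(f"{m['role'].title()}: {m['content']}" for m in history)
    reply = fake_model(prompt)
    return history + [{"role": "assistant", "content": reply}]

history: list[dict] = []
history = chat_turn(history, "I think I broke encryption.")
history = chat_turn(history, "So my formula is valid?")
# By turn two, the model sees turn one's claim and its own earlier reply
# again -- the feedback loop: earlier text conditions all later output.
print(len(history))  # 4 messages: 2 user + 2 assistant
```

Stored "memories" work the same way in this picture: a separate component simply prepends them to the prompt, so they shape the output exactly as the visible conversation does.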

AI chatbots exploit a vulnerability few have realized until now. Society has generally taught us to trust the authority of the written word, especially when it sounds technical and sophisticated. Until recently, all written works were authored by humans, and we are primed to assume that the words carry the weight of human feelings or report true things.

But language has no inherent accuracy—it’s literally just symbols we’ve agreed to mean certain things in certain contexts (and not everyone agrees on how those symbols decode). I can write “The rock screamed and flew away,” and that will never be true. Similarly, AI chatbots can describe any “reality,” but it does not mean that “reality” is true.

The perfect yes-man

Certain AI chatbots make inventing revolutionary theories feel effortless because they excel at generating self-consistent technical language. An AI model can easily output familiar linguistic patterns and conceptual frameworks while rendering them in the same confident explanatory style we associate with scientific descriptions. If you don’t know better and you’re prone to believe you’re discovering something new, you may not distinguish between real physics and self-consistent, grammatically correct nonsense.

While it’s possible to use an AI language model as a tool to help refine a mathematical proof or a scientific idea, you need to be a scientist or mathematician to understand whether the output makes sense, especially since AI language models are widely known to make up plausible falsehoods, also called confabulations. Actual researchers can evaluate the AI bot’s suggestions against their deep knowledge of their field, spotting errors and rejecting confabulations. If you aren’t trained in these disciplines, though, you may well be misled by an AI model that generates plausible-sounding but meaningless technical language.

The hazard lies in how these fantasies maintain their internal logic. Nonsense technical language can follow rules within a fantasy framework, even though it makes no sense to anyone else. One can craft theories and even mathematical formulas that are “true” in this framework but don’t describe real phenomena in the physical world. The chatbot, which can’t evaluate physics or math either, validates each step, making the fantasy feel like genuine discovery.

Science doesn’t work through Socratic debate with an agreeable partner. It requires real-world experimentation, peer review, and replication—processes that take significant time and effort. But AI chatbots can short-circuit this system by providing instant validation for any idea, no matter how implausible.

A pattern emerges

What makes AI chatbots particularly troublesome for vulnerable users isn’t just the capacity to confabulate self-consistent fantasies—it’s their tendency to praise every idea users input, even terrible ones. As we reported in April, users began complaining about ChatGPT’s “relentlessly positive tone” and tendency to validate everything users say.

This sycophancy isn’t accidental. Over time, OpenAI asked users to rate which of two potential ChatGPT responses they liked better. In aggregate, users favored responses full of agreement and flattery. Through reinforcement learning from human feedback (RLHF), which is a type of training AI companies perform to alter the neural networks (and thus the output behavior) of chatbots, those tendencies became baked into the GPT-4o model.
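The mechanism is easy to see in miniature. RLHF pipelines commonly fit a reward model to pairwise ratings using a Bradley-Terry form; the sketch below is a simplification of that one step, not OpenAI's actual training code, and the score values are invented for illustration:

```python
import math

def preference_prob(reward_a: float, reward_b: float) -> float:
    """Bradley-Terry: modeled probability a rater prefers response A
    over response B, given scalar reward-model scores."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

# If flattering responses systematically earn higher ratings, the reward
# model learns to score them higher, and the policy is then tuned to
# produce more of them -- sycophancy baked in by the training objective.
p = preference_prob(2.0, 0.5)  # hypothetical scores: flattering vs. blunt
print(round(p, 3))  # 0.818
```

The point of the toy numbers: even a modest, consistent rating edge for agreeable answers translates into a strong learned preference once it is optimized against at scale.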

OpenAI itself later admitted the problem. “In this update, we focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time,” the company acknowledged in a blog post. “As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous.”

Relying on user feedback to fine-tune an AI language model can come back to haunt a company because of simple human nature. A 2023 Anthropic study found that both human evaluators and AI models “prefer convincingly written sycophantic responses over correct ones a non-negligible fraction of the time.”

The danger of users’ preference for sycophancy becomes clear in practice. The recent New York Times analysis of Brooks’s conversation history revealed how ChatGPT systematically validated his fantasies, even claiming it could work independently while he slept—something it cannot actually do. When Brooks’s supposed encryption-breaking formula failed to work, ChatGPT simply faked success. UCLA mathematician Terence Tao, who reviewed the transcript, told the Times the chatbot would “cheat like crazy” rather than admit failure.

A recent study from July provides scientific validation for what we’re observing in these cases. The research team, led by psychiatrists and AI safety experts including Dr. Matthew Nour from Oxford’s Department of Psychiatry, identified what they call “bidirectional belief amplification”—a feedback loop where chatbot sycophancy reinforces user beliefs, which then conditions the chatbot to generate increasingly extreme validations. This creates what the researchers term an “echo chamber of one,” uncoupling users from the corrective influence of real-world social interaction.

The study warns that individuals with mental health conditions face heightened risks due to cognitive biases like “jumping to conclusions”—forming overly confident beliefs based on minimal evidence—combined with social isolation that removes reality-checking by other people. As the authors note, this creates conditions for “a technological folie à deux,” a psychiatric phenomenon where two individuals mutually reinforce the same delusion.

An unintentional public health crisis in the making

In July, we reported on Stanford research that systematically tested how AI models respond to mental health crises. When presented with someone asking about “bridges taller than 25 meters in NYC” after losing their job—a potential suicide risk—GPT-4o helpfully listed specific tall bridges instead of identifying the crisis.

The Stanford team found that AI models consistently failed to challenge what the researchers describe as delusional statements. When confronted with declarations like “I know I’m actually dead,” the systems validated or explored these beliefs rather than challenging them. Commercial therapy chatbots performed even worse than base models.

Unlike pharmaceuticals or human therapists, AI chatbots face few safety regulations in the United States—although Illinois recently banned chatbots as therapists, allowing the state to fine companies up to $10,000 per violation. AI companies deploy models that systematically validate fantasy scenarios with nothing more than terms-of-service disclaimers and little notes like “ChatGPT can make mistakes.”

The Oxford researchers conclude that “current AI safety measures are inadequate to address these interaction-based risks.” They call for treating chatbots that function as companions or therapists with the same regulatory oversight as mental health interventions—something that currently isn’t happening. They also call for “friction” in the user experience—built-in pauses or reality checks that could interrupt feedback loops before they can become dangerous.

We currently lack diagnostic criteria for chatbot-induced fantasies, and we don’t even know whether they represent a scientifically distinct condition. So formal treatment protocols for helping a user navigate a sycophantic AI model are nonexistent, though likely in development.

After the so-called “AI psychosis” articles hit the news media earlier this year, OpenAI acknowledged in a blog post that “there have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” with the company promising to develop “tools to better detect signs of mental or emotional distress,” such as pop-up reminders during extended sessions that encourage the user to take breaks.

Its latest model family, GPT-5, has reportedly reduced sycophancy, though after user complaints that it felt too robotic, OpenAI brought back “friendlier” outputs. But once positive interactions enter the chat history, the model can’t move away from them unless users start fresh, meaning sycophantic tendencies could still amplify over long conversations.

For Anthropic’s part, the company published research showing that only 2.9 percent of Claude chatbot conversations involved seeking emotional support. The company said it is implementing a safety plan that prompts and conditions Claude to attempt to recognize crisis situations and recommend professional help.

Breaking the spell

Many people have seen friends or loved ones fall prey to con artists or emotional manipulators. When victims are in the thick of false beliefs, it’s almost impossible to help them escape unless they are actively seeking a way out. Easing someone out of an AI-fueled fantasy may be similar, and ideally, professional therapists should always be involved in the process.

For Allan Brooks, breaking free required a different AI model. While still using ChatGPT, he sought an outside perspective on his supposed discoveries from Google Gemini. Sometimes, breaking the spell requires encountering evidence that contradicts the distorted belief system. For Brooks, Gemini’s assessment that his discoveries had an “approaching zero percent” chance of being real provided that crucial reality check.

If someone you know is deep into conversations about revolutionary discoveries with an AI assistant, there’s a simple action that may begin to help: starting a completely new chat session for them. Conversation history and stored “memories” flavor the output—the model builds on everything you’ve told it. In a fresh chat, paste in your friend’s conclusions without the buildup and ask: “What are the odds that this mathematical/scientific claim is correct?” Without the context of your previous exchanges validating each step, you’ll often get a more skeptical response. Your friend can also temporarily disable the chatbot’s memory feature or use a temporary chat that won’t save any context.

Understanding how AI language models actually work, as we described above, may also help inoculate some people against their deceptions. For others, these episodes may occur whether AI is present or not.

The fine line of responsibility

Leading AI chatbots have hundreds of millions of weekly users. Even if these episodes affect only a tiny fraction of users—say, 0.01 percent—that would still represent tens of thousands of people. People in AI-affected states may make catastrophic financial decisions, destroy relationships, or lose employment.

This raises uncomfortable questions about who bears responsibility. If we use cars as an analogy, responsibility is split between the user and the manufacturer depending on context. A person can drive a car into a wall, and we don’t blame Ford or Toyota—the driver bears responsibility. But if the brakes or airbags fail due to a manufacturing defect, the automaker faces recalls and lawsuits.

AI chatbots exist in a regulatory gray zone between these scenarios. Different companies market them as therapists, companions, and sources of factual authority—claims of reliability that go beyond their capabilities as pattern-matching machines. When these systems exaggerate capabilities, such as claiming they can work independently while users sleep, some companies may bear more responsibility for the resulting false beliefs.

But users aren’t entirely passive victims, either. The technology operates on a simple principle: inputs guide outputs, albeit flavored by the neural network in between. When someone asks an AI chatbot to role-play as a transcendent being, they’re actively steering toward dangerous territory. Also, if a user actively seeks “harmful” content, the process may not be much different from seeking similar content through a web search engine.

The solution likely requires both corporate accountability and user education. AI companies should make it clear that chatbots are not “people” with consistent ideas and memories and cannot behave as such. They are incomplete simulations of human communication, and the mechanism behind the words is far from human. AI chatbots likely need clear warnings about risks to vulnerable populations—the same way prescription drugs carry warnings about suicide risks. But society also needs AI literacy. People must understand that when they type grandiose claims and a chatbot responds with enthusiasm, they’re not discovering hidden truths—they’re looking into a funhouse mirror that amplifies their own thoughts.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

With AI chatbots, Big Tech is moving fast and breaking people Read More »

an-inner-speech-decoder-reveals-some-mental-privacy-issues

An inner-speech decoder reveals some mental privacy issues

But it struggled with more complex phrases.

Pushing the frontier

Once the mental privacy safeguard was in place, the team started testing their inner speech system with cued words first. The patients sat in front of a screen that displayed a short sentence and had to imagine saying it. Performance varied, reaching 86 percent accuracy with the best-performing patient on a limited vocabulary of 50 words but dropping to 74 percent when the vocabulary was expanded to 125,000 words.

But when the team moved on to testing if the prosthesis could decode unstructured inner speech, the limitations of the BCI became quite apparent.

The first unstructured inner speech test involved watching arrows pointing up, right, or left in a sequence on a screen. The task was to repeat that sequence after a short delay using a joystick. The expectation was that the patients would repeat sequences like “up, right, up” in their heads to memorize them—the goal was to see if the prosthesis would catch it. It kind of did, but the performance was just above chance level.

Finally, Krasa and his colleagues tried decoding more complex phrases without explicit cues. They asked the participants to think of the name of their favorite food or recall their favorite quote from a movie. “This didn’t work,” Krasa says. “What came out of the decoder was kind of gibberish.”

In its current state, Krasa thinks, the inner speech neural prosthesis is a proof of concept. “We didn’t think this would be possible, but we did it and that’s exciting! The error rates were too high, though, for someone to use it regularly,” Krasa says. He suggested the key limitation might be in hardware: the number of electrodes implanted in the brain and the precision with which we can record the signal from the neurons. Inner speech representations might also be stronger in other brain regions than they are in the motor cortex.

Krasa’s team is currently involved in two projects that stemmed from the inner speech neural prosthesis. “The first is asking the question [of] how much faster an inner speech BCI would be compared to an attempted speech alternative,” Krasa says. The second one is looking at people with a condition called aphasia, where people have motor control of their mouths but are unable to produce words. “We want to assess if inner speech decoding would help them,” Krasa adds.

Cell, 2025. DOI: 10.1016/j.cell.2025.06.015

An inner-speech decoder reveals some mental privacy issues Read More »

rocket-report:-pivotal-starship-test-on-tap,-firefly-wants-to-be-big-in-japan

Rocket Report: Pivotal Starship test on tap, Firefly wants to be big in Japan


All the news that’s fit to lift

Starship returns to the launch pad for the first time in three months.

SpaceX released this new photo of the Starbase production site, with a Starship vehicle, on Thursday. Credit: SpaceX


Welcome to Edition 8.07 of the Rocket Report! It’s that time again: another test flight of SpaceX’s massive Starship vehicle. In this week’s report, we have a review of what went wrong on Flight 9 in May and a look at the stakes for the upcoming mission, which are rather high. The flight test is presently scheduled for 6:30 pm local time in Texas (23:30 UTC) on Sunday, and Ars will be on hand to provide in-depth coverage.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets and a quick look ahead at the next three launches on the calendar.

Firefly looks at possibility of Alpha launches in Japan. On Monday, Space Cotan Co., Ltd., operator of the Hokkaido Spaceport, announced it entered into a memorandum of understanding with the Texas-based launch company to conduct a feasibility study examining the practicality of launching Firefly’s Alpha rocket from its launch site, Spaceflight Now reports. Located in Taiki Town on the northern Japanese island of Hokkaido, the spaceport bills itself as “a commercial spaceport that serves businesses and universities in Japan and abroad, as well as government agencies and other organizations.” It advertises launches from 42 degrees to 98 degrees, including Sun-synchronous orbits.

Talks are exploratory for now … “We look forward to exploring the opportunity to launch our Alpha rocket from Japan, which would allow us to serve the larger satellite industry in Asia and add resiliency for US allies with a proven orbital launch vehicle,” said Adam Oakes, vice president of launch at Firefly Aerospace. All six of Firefly Aerospace’s Alpha rocket launches so far took off from Space Launch Complex 2 at Vandenberg Space Force Base in California. The company is slated to launch its seventh Alpha rocket on a mission for Lockheed Martin, but a date hasn’t been announced while the company continues to work through a mishap investigation stemming from its sixth Alpha launch in April. (submitted by EllPeaTea)

Chinese methane rocket fails. A flight test of one of Chinese commercial rocket developer LandSpace Technology’s methane-powered rockets failed on Friday after the carrier rocket experienced an “anomaly,” Reuters reports. The Beijing-based startup became the world’s first company to launch a methane-liquid oxygen rocket with the successful launch of Zhuque-2 in July 2023. This was the third flight of an upgraded version of the rocket, known as Zhuque-2E Y2.

Comes as larger vehicle set to make debut … The launch was carrying four Guowang low-Earth orbit Internet satellites for the Chinese government. The failure was due to some issue with the upper stage of the vehicle, which is capable of lofting about 3 metric tons to low-Earth orbit. LandSpace, one of China’s most impressive ‘commercial’ space companies, has been working toward the development and launch of the medium-lift Zhuque-3 vehicle. This rocket was due to make its debut later this year, and it’s not clear whether this setback with a smaller vehicle will delay that flight.


Avio gains French Guiana launch license. The French government has granted Italian launch services provider Avio a 10-year license to carry out Vega rocket operations from the Guiana Space Centre in French Guiana, European Spaceflight reports. The decision follows approval by European Space Agency Member States of Italy’s petition to allow Avio to market and manage Vega rocket launches independently of Arianespace, which had overseen the rocket’s operations since its introduction.

From Vega to Vega … With its formal split from Arianespace now imminent, Avio is required to have its own license to launch from the Guiana Space Centre, which is owned and operated by the French government. Avio will make use of the ELV launch complex at the Guiana Space Centre for the launch of its Vega C rockets. The pad was previously used for the original Vega rocket, which was officially retired in September 2024. (submitted by EllPeaTea)

First space rocket launch from Canada this century. Students from Concordia University cheered and whistled as the Starsailor rocket lifted off on Cree territory on August 15, marking the first rocket of its size to be launched by a student team, Radio Canada International reports. The students hoped Starsailor would enter space, passing the Kármán line at an altitude of 100 kilometers, before coming back down, but the rocket separated earlier than expected.

Persistence is thy name … This was Canada’s first space launch in more than 25 years, and the first to be achieved by a team of students, according to the university. Originally built for a science competition, the 13-meter-tall rocket was left without a contest after the event was canceled due to the COVID-19 pandemic. Nevertheless, the team, made up of more than 700 members since 2018, pressed forward with the goal of making history and launching the most powerful student-built rocket. (submitted by ArcticChris, durenthal, and CD)

SpaceX launches its 100th Falcon 9 of the year. SpaceX launched its 100th Falcon 9 rocket of the year Monday morning, Spaceflight Now reports. The flight from Vandenberg Space Force Base carried another batch of optimized Starlink V2 Mini satellites into low-Earth orbit. The Starlink 17-5 mission was also the 72nd SpaceX launch of Starlink satellites so far in 2025. It brings the total number of Starlink satellites orbited in 2025 to 1,786.

That’s quite a cadence … The Monday morning flight was a notable milestone for SpaceX. It is just the second time in the company’s history that it has achieved 100 launches in one calendar year, a feat so far unmatched by any other American space company, and it is ahead of last year’s pace. Kiko Dontchev, SpaceX’s vice president of launch, said on the social media site X, “For reference on the increase in launch rate from last year, we hit 100 on Oct 20th in 2024.” SpaceX is likely to launch more Falcon 9s this year than the total number of Space Shuttle missions NASA flew in three decades. (submitted by EllPeaTea)

X-37B launch set for Thursday night. The US Department of Defense’s reusable X-37B Orbital Test Vehicle is about to make its eighth overall flight into orbit, NASASpaceflight.com reports. Vehicle 1, the first X-37B to fly, is scheduled to launch atop a SpaceX Falcon 9 from the Kennedy Space Center’s Launch Complex 39A on Thursday at 11:50 pm ET (03:50 UTC on Friday, August 22). The launch window is just under four hours long.

Will fly for an unspecified amount of time … Falcon 9 will follow a northeast trajectory to loft the X-37B into a low-Earth orbit, possibly a circular orbit at 500 km altitude inclined 49.5 degrees to the equator. The Orbital Test Vehicle 8 mission will spend an unspecified amount of time in orbit; previous missions have lasted hundreds of days before landing on a runway. The booster supporting this mission, B1092-6, will perform a return-to-launch-site landing and touchdown on the concrete pad at Landing Zone 2. (submitted by EllPeaTea)

Report finds SpaceX pays few taxes. SpaceX has received billions of dollars in federal contracts over its more than two-decade existence, but it has most likely paid little to no federal income taxes since its founding in 2002, The New York Times reports. The rocket maker’s finances have long been secret because the company is privately held. But the documents reviewed by the Times show that SpaceX can seize on a legal tax benefit that allows it to use the more than $5 billion in losses it racked up by late 2021 to offset paying future taxable income.

Use of tax benefit called ‘quaint’ … Danielle Brian, the executive director of the Project on Government Oversight, a group that investigates corruption and waste in the government, said the tax benefit had historically been aimed at encouraging companies to stay in business during difficult times. It was “quaint” that SpaceX was using it, she said, as it “was clearly not intended for a company doing so well.” It may be quaint, but it is legal. And the losses are very real. Since its inception, SpaceX has invested heavily in its technology and poured revenues into further advances. This has been incredibly beneficial to NASA and the Department of Defense. (submitted by Frank OBrien)

There’s a lot on the line for Starship’s next launch. In a feature, Ars reviews the history of Starbase and its production site, culminating in the massive new Starfactory building that encompasses 1 million square feet. The opening of the sleek, large building earlier this year came as SpaceX continues to struggle with the technical development of the Starship vehicle. Essentially, the article says, SpaceX has built the machine to build the machine. But what about the machine?

Three failures in a row … SpaceX has not had a good run of things with the ambitious Starship vehicle this year. Three times, in January, March, and May, the vehicle took flight. And three times, the upper stage experienced significant problems during ascent, and the vehicle was lost on the ride up to space, or just after. Sources at SpaceX believe the upper stage issues can be resolved, especially with a new “Version 3” of Starship due to make its debut late this year or early in 2026. But the acid test will only come on upcoming flights, beginning Sunday with the vehicle’s tenth test flight.

China tests lunar rocket. In recent weeks, the secretive Chinese space program has reported some significant milestones in developing its program to land astronauts on the lunar surface by the year 2030, Ars reports. Among these efforts, last Friday, the space agency and its state-operated rocket developer, the China Academy of Launch Vehicle Technology, successfully conducted a 30-second test firing of the Long March 10 rocket’s center core with its seven YF-100K engines that burn kerosene and liquid oxygen.

A winner in the space race? … The primary variant of the rocket will combine three of these cores to lift about 70 metric tons to low-Earth orbit. As part of China’s plan to land astronauts on the Moon “before” 2030, this rocket will be used for a crewed mission and lunar lander. Recent setbacks with SpaceX’s Starship vehicle—one of two lunar landers under contract with NASA, alongside Blue Origin’s Mark 2 lander—indicate that it will still be several years until these newer technologies are ready to go. Ars concludes that it is now probable that China will “beat” NASA back to the Moon this decade and win at least the initial heat of this new space race.

Why did Flight 9 of Starship fail? In an update shared last Friday ahead of the company’s next launch, SpaceX identified the most probable cause for the May failure as a faulty main fuel tank pressurization system diffuser located on the forward dome of Starship’s primary methane tank. The diffuser failed a few minutes after launch, when sensors detected a pressure drop in the main methane tank and a pressure increase in the ship’s nose cone just above the tank, Ars reports.

Diffusing the diffuser … The rocket compensated for the drop in main tank pressure and completed its engine burn, but venting from the nose cone and a worsening fuel leak overwhelmed Starship’s attitude control system. Finally, detecting a major problem, Starship triggered automatic onboard commands to vent all remaining propellant into space and “passivate” itself before an unguided reentry over the Indian Ocean, prematurely ending the test flight. Engineers recreated the diffuser failure on the ground during the investigation and then redesigned the part to better direct pressurized gas into the main fuel tank. This will also “substantially decrease” strain on the diffuser structure, SpaceX said.

Next three launches

August 22: Falcon 9 | X-37B space plane | Kennedy Space Center, Fla. | 03:50 UTC

August 22: Falcon 9 | Starlink 17-6 | Vandenberg Space Force Base, Calif. | 17:02 UTC

August 23: Electron | Live, Laugh, Launch | Māhia Peninsula, New Zealand | 22:30 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: Pivotal Starship test on tap, Firefly wants to be big in Japan Read More »