Author name: DJ Henderson


After seeing hundreds of launches, SpaceX’s rocket catch was a new thrill


For a few moments, my viewing angle made it look like the rocket was coming right at me.

Coming in for the catch. Credit: Stephen Clark/Ars Technica

BOCA CHICA BEACH, Texas—I’ve taken some time to process what happened on the mudflats of South Texas a little more than a week ago and relived the scene in my mind countless times.

With each replay, it’s still as astonishing as it was when I saw it on October 13, standing on an elevated platform less than 4 miles away. It was surreal watching SpaceX’s enormous 20-story-tall Super Heavy rocket booster plummeting through the sky before being caught back at its launch pad by giant mechanical arms.

This, according to SpaceX, is the way to enable a future where rockets are rapidly reused, not too differently from the way airlines turn around their planes between flights. That capability is required for SpaceX to accomplish the company’s mission, set out by Elon Musk two decades ago, of building a settlement on Mars.

Of course, SpaceX’s cameras got much better views of the catch than mine. This is one of my favorite video clips.

The final phase of Super Heavy’s landing burn used the three center Raptor engines to precisely steer into catch position pic.twitter.com/BxQbOmT4yk

— SpaceX (@SpaceX) October 14, 2024

In the near-term future, regularly launching and landing Super Heavy boosters, and eventually the Starship upper stage that goes into orbit, will make it possible for SpaceX to achieve the rapid-fire launch cadence the company needs to fulfill its contracts with NASA. The space agency is paying SpaceX roughly $4 billion to develop a human-rated version of Starship to land astronauts on the Moon under the umbrella of the Artemis program.

To make that happen, SpaceX must launch numerous Starship tankers over the course of a few weeks to a few months to refuel the Moon-bound Starship lander in low-Earth orbit. Rapid reuse is fundamental to the lunar lander architecture NASA chose for the first two Artemis landing missions.

SpaceX, which is funding most of Starship’s development costs, says upgraded versions will be capable of hauling 200 metric tons of payload to low-Earth orbit while flying often at a relatively low cost. This would unlock innumerable other potential applications for the US military and commercial industry.

Here’s a sampling of the photos I captured of SpaceX’s launch and catch, followed by the story of how I got them.

The fifth full-scale test flight of SpaceX’s new-generation Starship rocket lifted off from South Texas at sunrise Sunday morning. Stephen Clark/Ars Technica

Some context

I probably spent too much time watching last week’s flight through my camera’s viewfinder, but I suspect I’ll see it many more times. After all, SpaceX wants to make this a routine occurrence, more common than the landings of the smaller Falcon 9 booster now happening several times per week.

Nine years ago, I watched from 7 miles away as SpaceX landed a Falcon 9 for the first time. This was the closest anyone not directly involved in the mission could watch as the Falcon 9’s first stage returned to Cape Canaveral Space Force Station in Florida, a few minutes after lifting off with a batch of commercial communications satellites.

Citing safety concerns, NASA and the US Air Force closed large swaths of the spaceport for the flight. Journalists and VIPs were kept far away, and the locations on the base where employees or special guests typically watch a launch were off-limits. The landing happened at night and played out like a launch in reverse, with the Falcon 9 booster settling to a smooth touchdown on a concrete landing pad a few miles from the launch site.

The Falcon 9 landing on December 21, 2015, came after several missed landings on SpaceX’s floating offshore drone ship. With the Super Heavy booster, SpaceX nailed the catch on the first try.

The catch method means the rocket doesn’t need to carry landing legs, as the Falcon 9 does. This reduces the rocket’s weight and complexity, and theoretically reduces the amount of time and money needed to prepare the rocket to fly again.

I witnessed the first catch of SpaceX’s Super Heavy booster last week from just outside the restricted zone around the company’s sprawling Starbase launch site in South Texas. Deputies from the local sheriff’s office patrolled the area to ensure no one strayed inside the keep-out area and set up roadblocks to turn away anyone who wasn’t supposed to be there.

The launch was early in the morning, so I arrived late the night before at a viewing site run by Rocket Ranch, a campground that caters to SpaceX fans seeking a front-row seat to the goings-on at Starbase. Some SpaceX employees, several other reporters, and media photographers were there, too.

There are other places to view a Starship launch. Condominium and hotel towers on South Padre Island roughly 6 miles from the launch pad, a little farther than my post, offer commanding aerial views of Starbase, which is situated on Boca Chica Beach a few miles north of the US-Mexico border. The closest publicly accessible place to watch a Starship launch is on the south shore of the mouth of the Rio Grande River, but if you’re coming from the United States, getting there requires crossing the border and driving off-road.

People gather at the Rocket Ranch viewing site near Boca Chica Beach, Texas, before the third Starship test flight in March. Credit: Brandon Bell/Getty Images

I chose a location with an ambiance somewhere in between the hustle and bustle of South Padre Island and the isolated beach just across the border in Mexico. The vibe on the eve of the launch was part rave, part pilgrimage of SpaceX true believers.

A laser light show projected the outline of a Starship against a tree as uptempo EDM tracks blared from speakers. Meanwhile, dark skies above revealed cosmic wonders invisible to most city dwellers, and behind us, the Rio Grande inexorably flowed toward the sea. Those of us who were there to work got a few hours of sleep, but I’m not sure I can say the same for everyone.

At first light, a few scattered yucca plants sticking up from the chaparral were the only things between us and SpaceX’s sky-scraping Starship rocket on the horizon. We got word the launch time would slip 25 minutes. SpaceX chose the perfect time to fly, with a crystal-clear sky hued by the rising Sun.

First, you see it

I was at Starbase for all four previous Starship test flights and have covered more than 300 rocket launches in person. I’ve been privileged to witness a lot of history, but after hundreds of launches, some of the novelty has worn off. Don’t get me wrong—I still feel a lump in my throat every time I see a rocket leave the planet. Prelaunch jitters are a real thing. But I no longer view every launch as a newsworthy event.

October 13 was different.

Those prelaunch anxieties were present as SpaceX counted off the final seconds to liftoff. First, you see it. A blast of orange flashed from the bottom of the gleaming, frosty rocket filled with super-cold propellants. Then, the 11 million-pound vehicle began a glacial climb from the launch pad. About 20 seconds later, the rumble from the rocket’s 33 methane-fueled engines reached our location.

Our viewing platform shook from the vibrations for over a minute as Starship and the Super Heavy booster soared into the stratosphere. Two-and-a-half minutes into the flight, the rocket was just a point of bluish-white light as it accelerated east over the Gulf of Mexico.

Another burst of orange encircled the rocket during the so-called hot-staging maneuver, when the Starship upper stage lit its engines at the moment the Super Heavy booster detached to begin the return to Starbase. Flying at the edge of space more than 300,000 feet over the Gulf, the booster flipped around and fired its engines to cancel out its downrange velocity and propel itself back toward the coastline.

The engines shut down, and the booster plunged deeper into the atmosphere. Eventually, the booster transformed from a dot in the sky back into the shape of a rocket as it approached Starbase at supersonic speed. The rocket’s velocity became more evident as it got closer. For a few moments, my viewing angle made it look like the rocket—bigger than the fuselage of a 747 jumbo jet—was coming right at me.

The descending booster zoomed through the contrail cloud it left behind during launch, then reappeared into clear air. With the naked eye, I could see a glow inside the rocket’s engine bay as it dived toward the launch pad, presumably from heat generated as the vehicle slammed into ever-denser air on the way back to Earth. This phenomenon made the rocket resemble a lit cigar.

Finally, the rocket hit the brakes by igniting 13 of its 33 engines, then downshifted to three engines for the final maneuver to slide in between the launch tower’s two catch arms. Like balancing a pencil on the tip of your finger, the Raptor engines vectored their thrust to steady the booster, which, for a moment, appeared to be floating next to the tower.


The Super Heavy booster, more than 20 stories tall, rights itself over the launch pad in Texas, moments before two mechanical arms grabbed it in mid-air. Credit: Stephen Clark/Ars Technica

A double-clap sonic boom jolted spectators from their slack-jawed awe. Only then could we hear the roar from the start of the Super Heavy booster’s landing burn. This sound reached us just as the rocket settled into the grasp of the launch tower, with its so-called catch fittings coming into contact with the metallic beams of the catch arms.

The engines switched off, and there it was. Many of the spectators lucky enough to be there jumped up and down with joy, hugged their friends, or let out an ecstatic yell. I snapped a few final photos and returned to my laptop, grinning and speechless, wondering how I could put it all into words.

Once the smoke cleared, at first glance, the rocket looked as good as new. There was no soot on the outside of the booster, as there is on a Falcon 9 after it returns from space. This is because the Super Heavy booster and Starship use cleaner-burning methane fuel instead of kerosene.

Elon Musk, SpaceX’s founder and CEO, later said the outer ring of engine nozzles on the bottom of the rocket showed signs of heating damage. This, he said, would be “easily addressed.”

What’s not so easy to address is how SpaceX can top this. A landing on the Moon or Mars? Sure, but realistically, those milestones are years off. There’s something that’ll happen before then.

Sometime soon, SpaceX will try to catch a Starship back at the launch pad at the end of an orbital flight. This will be an extraordinarily difficult feat, far exceeding the challenge of catching the Super Heavy booster.

Super Heavy only reaches a fraction of the altitude and speed of the Starship upper stage, and while the booster’s size and the catch method add degrees of difficulty, the rocket follows much the same up-and-down flight profile pioneered by the Falcon 9. Starship, on the other hand, will reenter the atmosphere from orbital velocity, streak through the sky surrounded by super-heated plasma, then shift itself into a horizontal orientation for a final descent SpaceX likes to call the “belly flop.”

In the last few seconds, Starship will reignite three of its engines, flip itself vertical, and come down for a precision landing. SpaceX demonstrated the ship could do this on the test flight last week, when the vehicle made a controlled on-target splashdown in the Indian Ocean after traveling halfway around the world from Texas.

If everything goes according to plan, SpaceX could be ready to try to catch a Starship for real next year. Stay tuned.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Bizarre fish has sensory “legs” it uses for walking and tasting

Finding out what controls the formation of sensory legs meant growing sea robins from eggs. The research team observed that the legs of sea robins develop from the three pectoral fin rays that are around the stomach area of the fish, then separate from the fin as they continue to develop. Among the most active genes in the developing legs is the transcription factor (a protein that binds to DNA and turns genes on and off) known as tbx3a. When genetically engineered sea robins had tbx3a edited out with CRISPR-Cas9, it resulted in fewer legs, deformed legs, or both.

“Disruption of tbx3a results in upregulation of pectoral fin markers prior to leg separation, indicating that leg rays become more similar to fins in the absence of tbx3a,” the researchers said in a second study, also published in Current Biology.

To see whether genes for sensory legs are a dominant feature, the research team also tried creating sea robin hybrids, crossing species with and without sensory legs. This resulted in offspring with legs that had sensory capabilities, indicating that it’s a genetically dominant trait.

Exactly why sea robins evolved the way they did is still unknown, but the research team came up with a hypothesis. They think the legs of sea robin ancestors were originally intended for locomotion, but they gradually started gaining some sensory utility, allowing the animal to search the visible surface of the seafloor for food. Those fish that needed to search deeper for food developed sensory legs that allowed them to taste and dig for hidden prey.

“Future work will leverage the remarkable biodiversity of sea robins to understand the genetic basis of novel trait formation and diversification in vertebrates,” the team also said in the first study. “Our work represents a basis for understanding how novel traits evolve.”

Current Biology, 2024. DOI:  10.1016/j.cub.2024.08.014, 10.1016/j.cub.2024.08.042



OpenAI releases ChatGPT app for Windows

On Thursday, OpenAI released an early Windows version of its first ChatGPT app for Windows, following a Mac version that launched in May. Currently, it’s only available to subscribers of Plus, Team, Enterprise, and Edu versions of ChatGPT, and users can download it for free in the Microsoft Store for Windows.

OpenAI is positioning the release as a beta test. “This is an early version, and we plan to bring the full experience to all users later this year,” OpenAI writes on the Microsoft Store entry for the app. (Interestingly, ChatGPT shows up as being rated “T for Teen” by the ESRB in the Windows store, despite not being a video game.)

A screenshot of the new Windows ChatGPT app captured on October 18, 2024. Credit: Benj Edwards

Upon opening the app, OpenAI requires users to log into a paying ChatGPT account, and from there, the app is basically identical to the web browser version of ChatGPT. You can currently use it to access several models: GPT-4o, GPT-4o with Canvas, o1-preview, o1-mini, GPT-4o mini, and GPT-4. Also, it can generate images using DALL-E 3 or analyze uploaded files and images.

If you’re running Windows 11, you can instantly call up a small ChatGPT window when the app is open using an Alt+Space shortcut (it did not work in Windows 10 when we tried). That could be handy for asking ChatGPT a quick question at any time.

A screenshot of the new Windows ChatGPT app listing in the Microsoft Store captured on October 18, 2024. Credit: Benj Edwards

And just like the web version, all the AI processing takes place in the cloud on OpenAI’s servers, which means an Internet connection is required.

So as usual, chat like somebody’s watching, and don’t rely on ChatGPT as a factual reference for important decisions—GPT-4o in particular is great at telling you what you want to hear, whether it’s correct or not. As OpenAI says in a small disclaimer at the bottom of the app window: “ChatGPT can make mistakes.”



How the Malleus maleficarum fueled the witch trial craze


Invention of printing press, influence of nearby cities created perfect conditions for social contagion.

Between 1400 and 1775, a significant upsurge of witch trials swept across early-modern Europe, resulting in the execution of an estimated 40,000–60,000 accused witches. Historians and social scientists have long studied this period in hopes of learning more about how large-scale social changes occur. Some have pointed to the invention of the printing press and the publication of witch-hunting manuals—most notably the highly influential Malleus maleficarum—as a major factor, making it easier for the witch-hunting hysteria to spread across the continent.

The abrupt emergence of the craze and its rapid spread, resulting in a pronounced shift in social behaviors—namely, the often brutal persecution of suspected witches—is consistent with a theory of social change dubbed “ideational diffusion,” according to a new paper published in the journal Theory and Society. Under this theory, new ideas are introduced and reinforced through social networks until they take root and lead to widespread behavioral changes in a society.

The authors had already been thinking about cultural change and the driving forces by which it occurs, including social contagion—especially large cultural shifts like the Reformation and the Counter-Reformation, for example. One co-author, Steve Pfaff, a sociologist at Chapman University, was working on a project about witch trials in Scotland and was particularly interested in the role the Malleus maleficarum might have played.

“Plenty of other people have written about witch trials, specific trials or places or histories,” co-author Kerice Doten-Snitker, a social scientist with the Santa Fe Institute, told Ars. “We’re interested in building a general theory about change and wanted to use that as a particular opportunity. We realized that the printing of the Malleus maleficarum was something we could measure, which is useful when you want to do empirical work, not just theoretical work.”

Ch-ch-ch-changes…

The Witch, No. 1, c. 1892 lithograph by Joseph E. Baker. Credit: Public domain

Modeling how sweeping cultural change happens has been a hot research topic for decades, hitting the cultural mainstream with the publication of Malcolm Gladwell’s 2000 bestseller The Tipping Point. Researchers continue to make advances in this area. University of Pennsylvania sociologist Damon Centola, for instance, published How Behavior Spreads: the Science of Complex Contagions in 2018, in which he applied new lessons learned in epidemiology—on how viral epidemics spread—to our understanding of how social networks can broadly alter human behavior. But while epidemiological modeling might be useful for certain simple forms of social contagion—people come into contact with something and it spreads rapidly, like a viral meme or hit song—other forms of social contagion are more complicated, per Doten-Snitker.

Doten-Snitker et al.’s ideational diffusion model differs from Centola’s in some critical respects. For cases like the spread of witch trials, “It’s not just that people are coming into contact with a new idea, but that there has to be something cognitively that is happening,” said Doten-Snitker. “People have to grapple with the ideas and undergo some kind of idea adoption. We talk about this as reinterpreting the social world. They have to rethink what’s happening around them in ways that make them think that not only are these attractive new ideas, but also those new ideas prescribe different types of behavior. You have to act differently because of what you’re encountering.”

The authors chose to focus on social networks and trade routes for their analysis of the witch trials, building on prior research that prioritized broader economic and environmental factors. Cultural elites were already exchanging ideas through letters, but published books added a new dimension to those exchanges. Researchers studying 21st century social contagion can download massive amounts of online data from social networks. That kind of data is sparse from the medieval era. “We don’t have the same archives of communication,” said Doten-Snitker. “There’s this dual thing happening: the book itself, and people sharing information, arguing back and forth with each other” about new ideas.

The stages of the ideational diffusion model. Credit: K. Doten-Snitker et al., 2024

So she and her co-authors turned to trade routes to determine which cities were more central and thus more likely to be focal points of new ideas and information. “The places that are more central in these trade networks have more stuff passing through and are more likely to come into contact with new ideas from multiple directions—specifically ideas about witchcraft,” said Doten-Snitker. Then they looked at which of 553 cities in Central Europe held their first witch trials, and when, as well as those where the Malleus maleficarum and similar manuals had been published.

Social contagion

They found that each new published edition of the Malleus maleficarum corresponded with a subsequent increase in witch trials. But that wasn’t the only contributing factor; trends in neighboring cities also influenced the increase, resulting in a slow-moving ripple effect that spread across the continent. “What’s the behavior of neighboring cities?” said Doten-Snitker. “Are they having witch trials? That makes your city more likely to have a witch trial when you have the opportunity.”

In epidemiological models like Centola’s, the pattern of change is a slow start with early adoption that then picks up speed and spreads before slowing down again as a saturation point is reached, because most people have now adopted the new idea or technology. That doesn’t happen with witch trials or other complex social processes such as the spread of medieval antisemitism. “Most things don’t actually spread that widely; they don’t reach complete saturation,” said Doten-Snitker. “So we need to have theories that build that in as well.”
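The contrast between an epidemic-style simple contagion and a complex one that stalls short of saturation can be illustrated with a toy threshold model on a network (a hypothetical sketch for illustration only, not the authors’ actual model): when a single exposure is enough, an idea saturates the network; when adoption requires reinforcement from multiple neighbors, a lone seed stalls, while a clustered seed can still cascade.

```python
def spread(neighbors, seeds, threshold):
    """Iterate adoption until no node changes state.

    neighbors: dict mapping each node to a list of adjacent nodes
    seeds: initial adopters
    threshold: number of adopting neighbors a node needs before it adopts
    """
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node not in adopted and sum(n in adopted for n in nbrs) >= threshold:
                adopted.add(node)
                changed = True
    return adopted

# A ring of 20 "cities", each linked to its two nearest neighbors on each side.
n = 20
ring = {i: [(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n] for i in range(n)}

simple = spread(ring, seeds={0}, threshold=1)       # one exposure is enough
complex_ = spread(ring, seeds={0}, threshold=2)     # needs social reinforcement
clustered = spread(ring, seeds={0, 1}, threshold=2) # adjacent seeds reinforce each other

print(len(simple))     # 20 -> saturates the whole ring
print(len(complex_))   # 1  -> a lone seed can never give any node two adopting neighbors
print(len(clustered))  # 20 -> clustered ties let the complex contagion cascade
```

The third case mirrors Centola’s broader point: complex contagions depend on clustered ties for reinforcement, which is one reason most such ideas never reach full saturation.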

In the case of witch trials, the publication of the Malleus maleficarum helped shift medieval attitudes toward witchcraft, from something that wasn’t viewed as a particularly pressing problem to something evil that was menacing society. The tome also offered practical advice on what should be done about it. “So there’s changing ideas about witchcraft and this gets coupled with, well, you need to do something about it,” said Doten-Snitker. “Not only is witchcraft bad, but it’s a threat. So you have a responsibility as a community to do something about witches.”

The term “witch hunt” gets bandied about frequently in modern times, particularly on social media, and is generally understood to reference a mob mentality unleashed on a given target. But Doten-Snitker emphasizes that medieval witch trials were not “mob justice”; they were organized affairs, with official accusations to an organized local judiciary that collected and evaluated evidence, using the Malleus malficarum and similar treatises as a guide. The process, she said, is similar to how today’s governments adopt new policies.

Why conspiracy theories take hold

Cities where witch trials did and did not take place in Central Europe, 1400–1679, as well as those with printed copies of the Malleus Maleficarum. Credit: K. Doten-Snitker et al., 2024

The authors developed their model using the witch trials as a useful framework, but there are contemporary implications, particularly with regard to the rampant spread of misinformation and conspiracy theories via social media. These can also lead to changes in real-world behavior, including violent outbreaks like the January 6, 2021, attack on the US Capitol or, more recently, threats aimed at FEMA workers in the wake of Hurricane Helene. Doten-Snitker thinks their model could help identify the emergence of certain telltale patterns, notably the combination of the spread of misinformation or conspiracy theories on social media along with practical guidelines for responding.

“People have talked about the ways that certain conspiracy theories end up making sense to people,” said Doten-Snitker. “It’s because they’re constructing new ways of thinking about their world. This is why people start with one conspiracy theory belief that is then correlated with belief in others. It’s because you’ve already started rebuilding your image of what’s happening in the world around you and that serves as a basis for how you should act.”

On the plus side, “It’s actually hard for something that feels compelling to certain people to spread throughout the whole population,” she said. “We should still be concerned about ideas that spread that could be socially harmful. We just need to figure out where it might be most likely to happen and focus our efforts in those places rather than assuming it is a global threat.”

There was a noticeable sharp decline in both the frequency and intensity of witch trial persecutions from 1679 onward, raising the question of how such cultural shifts eventually run their course. Their model doesn’t directly address that aspect, according to Doten-Snitker, but it does provide a framework for the kinds of things that might signal a similar major shift, such as people starting to push back against extreme responses or practices. At the tail end of the witch trial craze, for instance, there was increased pressure to prioritize clear and consistent judicial practices, excluding extreme measures such as extracting confessions via torture and rejecting dreams as evidence of witchcraft.

“That then supplants older ideas about what is appropriate and how you should behave in the world and you could have a de-escalation of some of the more extremist tendencies,” said Doten-Snitker. “It’s not enough to simply say those ideas or practices are wrong. You have to actually replace it with something. And that is something that is in our model. You have to get people to re-interpret what’s happening around them and what they should do in response. If you do that, then you are undermining a worldview rather than just criticizing it.”

Theory and Society, 2024. DOI: 10.1007/s11186-024-09576-1  (About DOIs).


Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Android 15’s security and privacy features are the update’s highlight

Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There is always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts.

In Android 15, some of the most notable changes involve making your device less appealing to snoops and thieves and more secure against the kids to whom you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google’s own docs and blogs and various news sites’ reports.

Here’s what is notable and new in how Android 15 handles privacy and security.

Private Space for apps

In the Android 15 settings, you can find “Private Space,” where you can set up a separate PIN code, password, biometric check, and optional Google account for apps you don’t want to be available to anybody who happens to have your phone. This could add a layer of protection onto sensitive apps, like banking and shopping apps, or hide other apps for whatever reason.

To move an app into it, drag the app from your list down to the lock space that now appears at the bottom right. The space shows up only as a lock icon until you unlock it; then you’ll see the apps available in your new Private Space. After that, you should probably delete the app from the main app list. Dave Taylor has a rundown of the process and its quirks.

It’s obviously more involved than Apple’s “Hide and Require Face ID” tap option but with potentially more robust hiding of the app.

Hiding passwords and OTP codes

A second form of authentication is good security, but allowing apps to access the notification text with the code in it? Not so good. In Android 15, a new permission, likely to be given only to the most critical apps, prevents the leaking of one-time passcodes (OTPs) to other apps waiting for them. Sharing your screen will also hide OTP notifications, along with usernames, passwords, and credit card numbers.



There’s another massive meat recall over Listeria—and it’s a doozy

Another nationwide meat recall is underway over Listeria contamination—and it’s far more formidable than the last.

As of October 15, meat supplier BrucePac, of Durant, Oklahoma, is recalling 11.8 million pounds of ready-to-eat meat and poultry products after routine federal safety testing found Listeria monocytogenes, a potentially deadly bacterium, in samples of the company’s poultry. The finding triggered an immediate recall, first issued on October 9. But officials are still working to understand the extent of the contamination—and struggling to identify the hundreds of potentially contaminated products.

“Because we sell to other companies who resell, repackage, or use our products as ingredients in other foods, we do not have a list of retail products that contain our recalled items,” BrucePac said in a statement updated October 15.

Depending on the packaging, the products may have establishment numbers 51205 or P-51205 inside or under the USDA mark of inspection. But for now, consumers’ best chance of determining whether they’ve purchased any of the affected products is to look through a 342-page list of products identified by the US Department of Agriculture so far.

The unorganized document lists fresh and frozen foods sold at common retailers, including 7-Eleven, Aldi, Amazon Fresh, Giant Eagle, Kroger, Target, Trader Joe’s, Walmart, and Wegmans. Affected products carry well-known brand names, such as Atkins, Boston Market, Dole, Fresh Express, Jenny Craig, Michelina’s, and Taylor Farms, as well as store brands such as Target’s Good & Gather. The recalled products were made between May 31, 2024, and October 8, 2024.

In the latest update, the USDA noted that some of the recalled products were also distributed to schools, but the agency hasn’t identified the schools that received the products. Restaurants and other institutions also received the products.

There’s another massive meat recall over Listeria—and it’s a doozy Read More »

amazon-joins-google-in-investing-in-small-modular-nuclear-power

Amazon joins Google in investing in small modular nuclear power


Small nukes is good nukes?

What’s with the sudden interest in nuclear power among tech titans?

Fuel pellets flow down the reactor (left), as gas transfers heat to a boiler (right). Credit: X-energy

On Tuesday, Google announced that it had made a power purchase agreement for electricity generated by a small modular nuclear reactor design that hasn’t even received regulatory approval yet. Today, it’s Amazon’s turn. The company’s Amazon Web Services (AWS) group has announced three different investments, including one targeting a different startup that has its own design for small, modular nuclear reactors—one that has not yet received regulatory approval.

Unlike Google’s deal, which is a commitment to purchase power should the reactors ever be completed, Amazon will lay out some money upfront as part of the agreements. We’ll take a look at the deals and technology that Amazon is backing before analyzing why companies are taking a risk on unproven technologies.

Money for utilities and a startup

Two of Amazon’s deals are with utilities that serve areas where it already has a significant data center footprint. One of these is Energy Northwest, which is an energy supplier that sends power to utilities in the Pacific Northwest. Amazon is putting up the money for Energy Northwest to study the feasibility of adding small modular reactors to its Columbia Generating Station, which currently houses a single, large reactor. In return, Amazon will get the right to purchase power from an initial installation of four small modular reactors. The site could potentially support additional reactors, which Energy Northwest would be able to use to meet demands from other users.

The deal with Virginia’s Dominion Energy is similar in that it would focus on adding small modular reactors to Dominion’s existing North Anna Nuclear Generating Station. But the exact nature of the deal is a bit harder to understand. Dominion says the companies will “jointly explore innovative ways to advance SMR development and financing while also mitigating potential cost and development risks.”

Should either or both of these projects go forward, the reactor designs used will come from a company called X-energy, which is involved in the third deal Amazon is announcing. In this case, it’s a straightforward investment in the company, although the exact dollar amount is unclear (the company says Amazon is “anchoring” a $500 million round of investments). The money will help finalize the company’s reactor design and push it through the regulatory approval process.

Small modular nuclear reactors

X-energy is one of several startups attempting to develop small modular nuclear reactors. These reactors share a few features that are expected to help them avoid the massive time and cost overruns associated with the construction of large nuclear power stations. Their limited size allows them to be made at a central facility and then shipped to the power station for installation. This limits the scale of the infrastructure that needs to be built on-site and allows the assembly facility to benefit from economies of scale.

This also allows a great deal of flexibility at the installation site, as you can scale the facility to power needs simply by adjusting the number of installed reactors. If demand rises in the future, you can simply install a few more.

The small modular reactors are also typically designed to be inherently safe. Should the site lose power or control over the hardware, the reactor will default to a state where it can’t generate enough heat to melt down or damage its containment. There are various approaches to achieving this.

X-energy’s technology is based on small, self-contained fuel pellets called TRISO particles (short for TRi-structural ISOtropic). These contain both the uranium fuel and a graphite moderator and are surrounded by a ceramic shell. They’re structured so that there isn’t sufficient uranium present to generate temperatures that can damage the ceramic, ensuring that the nuclear fuel will always remain contained.

The design is meant to run at high temperatures and extract heat from the reactor using helium, which is used to boil water and generate electricity. Each reactor can produce 80 megawatts of electricity, and the reactors are designed to work efficiently as a set of four, creating a 320 MW power plant. As yet, however, there are no working examples of this reactor, and the design hasn’t been approved by the Nuclear Regulatory Commission.
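The capacity figures above, worked out as a quick back-of-the-envelope check of the modular pitch: output scales linearly with the number of installed units.

```python
# Each X-energy reactor is rated at 80 MW electric, and a standard plant
# groups four of them. Meeting new demand means adding reactors, not
# redesigning the plant.
def plant_capacity_mw(n_reactors: int, mw_per_reactor: int = 80) -> int:
    # Capacity scales linearly with installed units.
    return n_reactors * mw_per_reactor

print(plant_capacity_mw(4))  # → 320, matching the quoted plant size
```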

Why now?

Why is there such sudden interest in small modular reactors among the tech community? It comes down to growing needs and a lack of good alternatives, even given the highly risky nature of the startups that hope to build the reactors.

It’s no secret that data centers require enormous amounts of energy, and the sudden popularity of AI threatens to raise that demand considerably. Renewables, as the cheapest source of power on the market, would be one way of satisfying that growth, but they’re not ideal. For one thing, the intermittent nature of the power they supply, while possible to manage at the grid level, is a bad match for the around-the-clock demands of data centers.

The US has also benefitted from over a decade of efficiency gains keeping demand flat despite population and economic growth. This has meant that all the renewables we’ve installed have displaced fossil fuel generation, helping keep carbon emissions in check. Should newly installed renewables instead end up servicing rising demand, it will make it considerably more difficult for many states to reach their climate goals.

Finally, renewable installations have often been built in areas without dedicated high-capacity grid connections, resulting in a large and growing backlog of projects (2.6 TW of generation and storage as of 2023) that are stalled as they wait for the grid to catch up. Expanding the pace of renewable installation can’t meet rising server farm demand if the power can’t be brought to where the servers are.

These new projects avoid that problem because they’re targeting sites that already have large reactors and grid connections to use the electricity generated there.

In some ways, it would be preferable to build more of these large reactors based on proven technologies. But large reactors fall short in two very important ways: time and money. The last reactor completed in the US was at the Vogtle site in Georgia, which started construction in 2009 but only went online this year. Costs also increased from $14 billion to over $35 billion during construction. It’s clear that any similar projects would start generating far too late to meet the near-immediate needs of server farms and would be nearly impossible to justify economically.

This leaves small modular nuclear reactors as the least-bad option in a set of bad options. Despite many startups having entered the space over a decade ago, there is still just a single reactor design approved in the US, that of NuScale. But the first planned installation was canceled last year: the price of the power it would sell rose to the point where, against the plunging cost of renewable power, it was no longer economically viable, and the utilities that would have bought the power pulled out.

The probability that a different company will manage to get a reactor design approved, move to construction, and manage to get something built before the end of the decade is extremely low. The chance that it will be able to sell power at a competitive price is also very low, though that may change if demand rises sufficiently. So the fact that Amazon is making some extremely risky investments indicates just how worried it is about its future power needs. Of course, when your annual gross profit is over $250 billion, you can afford to take some risks.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Amazon joins Google in investing in small modular nuclear power Read More »

$250-analogue-3d-will-play-all-your-n64-cartridges-in-4k-early-next-year

$250 Analogue 3D will play all your N64 cartridges in 4K early next year

It’s been exactly one year since the initial announcement of the Analogue 3D, an HD-upscaled, FPGA-powered Nintendo 64 in the tradition of Analogue’s long-running line of high-end retro machines. Today, Analogue is revealing more details about the hardware, which will sell for $250 and is slated to ship in the first quarter of 2025 (a slight delay from the previously announced 2024 release plan).

Like previous Analogue devices, the Analogue 3D uses a field-programmable gate array (FPGA) to simulate the actual logic gates found in original N64 hardware. That helps ensure 100 percent compatibility with the entire N64 cartridge library across all regions, Analogue promises, and should avoid the long-standing accuracy and lag issues inherent to most software-based emulation of the N64.
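The appeal of gate-level recreation can be seen in a toy sketch. This has nothing to do with Analogue's actual (proprietary) N64 core; it only illustrates the principle that when you rebuild a circuit from primitive gates, correct behavior falls out of the structure itself rather than from software approximation.

```python
# Toy gate-level circuit: a one-bit half-adder built entirely from NAND
# gates, the universal primitive an FPGA's logic elements can implement.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for one-bit inputs, using only NAND gates."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR assembled from four NANDs
    carry = nand(n1, n1)                # AND = NAND followed by an inverter
    return s, carry

# The circuit's truth table matches the arithmetic definition exactly.
for a in (0, 1):
    for b in (0, 1):
        assert half_adder(a, b) == (a ^ b, a & b)
```

A real console is millions of such gates with timing constraints between them, which is why an FPGA with enough logic elements can reproduce quirks that instruction-level emulators have to approximate.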

White and black hardware shells will be available for the Analogue 3D.

Credit: Analogue

To get that level of fidelity, the Analogue team spent four years programming an Altera Cyclone FPGA with a full 220,000 logic elements. That’s a big step up from previous Analogue devices—the Analogue Pocket’s main FPGA board featured just 49,000 logic elements three years ago. But the Analogue Pocket also included a second, 15,000-logic-element FPGA, which allowed it to run an expanding list of openFPGA cores to support games from other classic consoles.

Analogue has abandoned that additional FPGA for the Analogue 3D, meaning those openFPGA cores will not be usable on the new hardware. “If we wanted to offer Analogue 3D with openFPGA (which is not the purpose or focus of the product), it would require not only a second FPGA but an even more powerful base FPGA, therefore increasing the price to a price that doesn’t suit our goals,” Analogue founder Christopher Taber told Ars last year.

$250 Analogue 3D will play all your N64 cartridges in 4K early next year Read More »

fcc-republican-opposes-regulation-of-data-caps-with-analogy-to-coffee-refills

FCC Republican opposes regulation of data caps with analogy to coffee refills

Simington argued that regulating data caps would harm customers, using an analogy about the hypothetical regulation of coffee refills:

Suppose we were a different FCC, the Federal Coffee Commission, and rather than regulating the price of coffee (which we have vowed not to do), we instead implement a regulation whereby consumers are entitled to free refills on their coffees. What effects might follow? Well, I predict three things could happen: either cafés stop serving small coffees, or cafés charge a lot more for small coffees, or cafés charge a little more for all coffees.

Simington went on to compare the capacity of broadband networks to the coffee-serving capacity of coffee shops. He said that tiered coffee prices “can increase overall revenue for the café,” which can be invested “in more seats, more cafés, and faster coffee brewing.”

Simington is against rate regulation in general and said that regulation of usage-based plans (aka data caps) is just rate regulation with a different name. “Though only a Notice of Inquiry, because it is the first step down a path toward further rate regulation, I can’t support the item we’ve brewed up here. I dissent,” Simington wrote.

Carr: Data-capped plans “more affordable”

Carr’s statement said, “I dissent from today’s NOI because I cannot support the Biden-Harris Administration’s inexorable march towards rate regulation and because the FCC plainly does not have the legal authority to do so.”

Carr pointed to the recent 6th Circuit appeals court ruling that blocked the Rosenworcel FCC’s attempt to reinstate net neutrality rules under Title II of the Communications Act. Judges blocked enforcement of the net neutrality rules until the court makes a final ruling, saying that broadband providers are likely to win the case on the merits.

Carr said the FCC is “start[ing] down the path of directly regulating rates… by seeking comment on controlling the price of broadband capacity (‘data caps’). Prohibiting customers from choosing to purchase plans with data caps—which are more affordable than unlimited ones—necessarily regulates the service rates they are paying for.”

FCC Republican opposes regulation of data caps with analogy to coffee refills Read More »

apple-a17-pro-chip-is-the-star-of-the-first-ipad-mini-update-in-three-years

Apple A17 Pro chip is the star of the first iPad mini update in three years

Apple quietly announced a new version of its iPad mini tablet via press release this morning, the tablet’s first update since 2021.

The seventh-generation iPad mini looks mostly identical to the sixth-generation version, with a power-button-mounted Touch ID sensor and a slim-bezeled display. But Apple has swapped out the A15 Bionic chip for the Apple A17 Pro, the same processor it used in the iPhone 15 Pro last year.

The new iPad mini is available for preorder now and starts at $499 for 128GB (an upgrade over the previous base model’s 64GB of storage). 256GB and 512GB versions are available for $599 and $799, and cellular connectivity is an additional $150 on top of any of those prices.
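The pricing above forms a simple matrix: three Wi-Fi storage tiers plus a flat $150 cellular surcharge on any of them.

```python
# iPad mini (7th gen) pricing as quoted: Wi-Fi tiers, with cellular
# connectivity adding a flat $150 to any storage option.
wifi_prices = {"128GB": 499, "256GB": 599, "512GB": 799}
cellular_prices = {tier: price + 150 for tier, price in wifi_prices.items()}

print(cellular_prices["128GB"])  # → 649
```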

Apple says the A17 Pro’s CPU performance is 30 percent faster than the A15’s and that its GPU performance is 25 percent faster (in addition to supporting hardware-accelerated ray tracing). But the biggest improvement will be an increase in RAM—the A17 Pro comes with 8GB instead of the A15’s 4GB, which appears to be Apple’s floor for the new Apple Intelligence AI features. The new iPad mini will be the only iPad mini capable of supporting Apple Intelligence, which will begin rolling out with the iPadOS 18.1 update within the next few weeks.

Apple A17 Pro chip is the star of the first iPad mini update in three years Read More »

apple-study-exposes-deep-cracks-in-llms’-“reasoning”-capabilities

Apple study exposes deep cracks in LLMs’ “reasoning” capabilities

This kind of variance—both within different GSM-Symbolic runs and compared to GSM8K results—is more than a little surprising since, as the researchers point out, “the overall reasoning steps needed to solve a question remain the same.” The fact that such small changes lead to such variable results suggests to the researchers that these models are not doing any “formal” reasoning but are instead “attempt[ing] to perform a kind of in-distribution pattern-matching, aligning given questions and solution steps with similar ones seen in the training data.”

Don’t get distracted

Still, the overall variance shown for the GSM-Symbolic tests was often relatively small in the grand scheme of things. OpenAI’s ChatGPT-4o, for instance, dropped from 95.2 percent accuracy on GSM8K to a still-impressive 94.9 percent on GSM-Symbolic. That’s a pretty high success rate using either benchmark, regardless of whether the model itself is using “formal” reasoning behind the scenes (though total accuracy for many models dropped precipitously when the researchers added just one or two additional logical steps to the problems).

An example showing how some models get misled by irrelevant information added to the GSM8K benchmark suite. Credit: Apple Research

The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding “seemingly relevant but ultimately inconsequential statements” to the questions. For this “GSM-NoOp” benchmark set (short for “no operation”), a question about how many kiwis someone picks across multiple days might be modified to include the incidental detail that “five of them [the kiwis] were a bit smaller than average.”

Adding in these red herrings led to what the researchers termed “catastrophic performance drops” in accuracy compared to GSM8K, ranging from 17.5 percent to a whopping 65.7 percent, depending on the model tested. These massive drops in accuracy highlight the inherent limits in using simple “pattern matching” to “convert statements to operations without truly understanding their meaning,” the researchers write.

Introducing irrelevant information to the prompts often led to “catastrophic” failure for most “reasoning” LLMs. Credit: Apple Research

In the example with the smaller kiwis, for instance, most models try to subtract the smaller fruits from the final total because, the researchers surmise, “their training datasets included similar examples that required conversion to subtraction operations.” This is the kind of “critical flaw” that the researchers say “suggests deeper issues in [the models’] reasoning processes” that can’t be helped with fine-tuning or other refinements.
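The kiwi problem reduces to simple arithmetic, which makes the failure mode easy to state precisely. The counts below are hypothetical stand-ins, not the paper's actual numbers: the size remark changes nothing about the total, so a solver that understands the question ignores it entirely.

```python
# A GSM-NoOp-style red herring, worked out with made-up counts.
picked = {"Friday": 44, "Saturday": 58, "Sunday": 44}
smaller_than_average = 5  # the inconsequential detail added to the prompt

correct_total = sum(picked.values())
# The failure mode the researchers observed: pattern-matching "smaller"
# to a subtraction operation, even though size doesn't affect the count.
pattern_matched_total = correct_total - smaller_than_average

print(correct_total, pattern_matched_total)  # → 146 141
```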

Apple study exposes deep cracks in LLMs’ “reasoning” capabilities Read More »

the-internet-archive-and-its-916-billion-saved-web-pages-are-back-online

The Internet Archive and its 916 billion saved web pages are back online

Last week, hackers defaced the Internet Archive website with a message that said, “Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!”

HIBP is a reference to Have I Been Pwned, which was created by security researcher Troy Hunt and provides information and notifications on data breaches. The hacked Internet Archive data was sent to Have I Been Pwned and “contains authentication information for registered members, including their email addresses, screen names, password change timestamps, Bcrypt-hashed passwords, and other internal data,” BleepingComputer wrote.
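The detail that the leaked passwords were Bcrypt-hashed matters: a salted, deliberately expensive hash can't be reversed to recover the password, so attackers must grind through guesses one at a time. Bcrypt itself isn't in Python's standard library, so this sketch uses the stdlib's scrypt as a stand-in for the same salted, slow-hash idea.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; store the salt alongside the digest."""
    salt = os.urandom(16)  # unique per user, so identical passwords differ
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Even with the full database leaked, each password guess costs the attacker a full scrypt (or bcrypt) computation per user, which is the point of using such functions over plain fast hashes.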

Kahle said on October 9 that the Internet Archive fended off a DDoS attack and was working on upgrading security in light of the data breach and website defacement. The next day, he reported that the “DDoS folks are back” and had knocked the site offline. The Internet Archive “is being cautious and prioritizing keeping data safe at the expense of service availability,” he added.

“Services are offline as we examine and strengthen them… Estimated Timeline: days, not weeks,” he wrote on October 11. “Thank you for the offers of pizza (we are set).”

The Internet Archive and its 916 billion saved web pages are back online Read More »