
Physics of badminton’s new killer spin serve

Serious badminton players are constantly exploring different techniques to give them an edge over opponents. One of the latest innovations is the spin serve, a devastatingly effective method in which a player adds a pre-spin just before the racket contacts the shuttlecock (aka the birdie). It’s so effective—some have called it “impossible to return”—that the Badminton World Federation (BWF) banned the spin serve in 2023, at least until after the 2024 Paralympic Games in Paris.

The sanction wasn’t meant to quash innovation but to address players’ concerns about the possible unfair advantages the spin serve conferred. The BWF thought that international tournaments shouldn’t become the test bed for the technique, which is markedly similar to the previously banned “Sidek serve.” The BWF permanently banned the spin serve earlier this year. Chinese physicists have now teased out the complex fundamental physics of the spin serve, publishing their findings in the journal Physics of Fluids.

Shuttlecocks are unique among the various projectiles used in different sports due to their open conical shape. Sixteen overlapping feathers protrude from a rounded cork base that is usually covered in thin leather. The birdies one uses for leisurely backyard play might be synthetic nylon, but serious players prefer actual feathers.

Those overlapping feathers generate quite a bit of drag, so the shuttlecock rapidly decelerates as it travels, and its trajectory is far from a symmetric parabola: it falls at a much steeper angle than it rises. The extra drag also means that players must exert considerable force to hit a shuttlecock the full length of a badminton court. Still, shuttlecocks can achieve top speeds of more than 300 mph. The feathers also give the birdie a slight natural spin around its axis, which can affect different strokes. For instance, slicing from right to left, rather than vice versa, will produce a better tumbling net shot.
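That lopsided trajectory is easy to reproduce in a toy model. The sketch below (not from the study) integrates a 2D trajectory under quadratic drag; the mass and terminal speed are rough, assumed values for a feather shuttlecock, chosen only to illustrate the steep descent:

```python
import math

# Illustrative, assumed parameters for a feather shuttlecock:
G = 9.81          # gravity, m/s^2
MASS = 0.005      # kg (~5 g)
V_TERMINAL = 6.7  # m/s; sets the drag constant via k*v_t^2 = m*g
K = MASS * G / V_TERMINAL**2

def descent_angle(speed=30.0, angle_deg=45.0, dt=1e-4):
    """Integrate motion with drag F = -k*|v|*v (Euler steps);
    return the angle below horizontal at which the birdie lands."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    y = 0.0
    while True:
        v = math.hypot(vx, vy)
        vx += (-K * v * vx / MASS) * dt
        vy += (-G - K * v * vy / MASS) * dt
        y += vy * dt
        if y <= 0.0 and vy < 0.0:
            return math.degrees(math.atan2(-vy, vx))

print(f"launched at 45 deg, lands at {descent_angle():.1f} deg below horizontal")
```

With a launch speed well above the terminal speed, the descent angle comes out far steeper than the 45-degree launch angle, which is exactly the asymmetry players see on court.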

Chronophotographies of shuttlecocks after an impact with a racket. Credit: Caroline Cohen et al., 2015

The cork base makes the birdie aerodynamically stable: No matter how one orients the birdie, once airborne, it will turn so that it travels cork-first and will maintain that orientation throughout its trajectory. A 2015 study examined the physics of this trademark flip, recording flips with high-speed video and conducting free-fall experiments in a water tank to study how the shuttlecock’s geometry affects the behavior. The water tank experiments confirmed that shuttlecock feather geometry hits a sweet spot: an opening inclination angle that is neither too small nor too large. The researchers also found that feather shuttlecocks are indeed better than synthetic ones, deforming more when hit to produce a more triangular trajectory.


An extinct volcano in Arkansas hosts the only public diamond mine on Earth

The park provides two covered pavilions with water troughs and tables for wet sifting, plus open sluice boxes with hand-operated water pumps at both ends of the field. Four shaded structures are available in the search area; however, visitors are also welcome to bring their own canopies or tents, provided they are well-secured.

The diamonds formed under extreme pressure and heat deep in the Earth’s mantle. If you find one, it will most likely look like a metallic or glassy pebble rather than a sparkly cut gem that you might picture in your mind. The volcanic soil also contains amethyst, garnet, jasper, agate, and various types of quartz (and you can keep those, too).

The largest diamond found in the United States came from this field—the 40.23-carat Uncle Sam diamond, discovered in 1924 before the land became a state park. In September 2021, California visitor Noreen Wredberg found a 4.38-carat yellow diamond after searching for two hours, and in 2024, a visitor named Julien Navas found a 7.46-carat diamond at the park.

The park received over 180,000 visitors in 2017, who found 450 certified diamonds of various colors. Of the reported diamond finds, 299 were white, 72 were brown, and 74 were yellow.

Park staff told Mays that visitors find one or two diamonds daily, so “keep your expectations in check,” she writes. Most diamonds discovered are about the size of a paper match head, while a one-carat diamond is roughly the size of a green pea. But even tiny diamonds carry the thrill of discovery. Park staff provide free identification services, examining finds under loupes and confirming whether that glassy pebble is quartz or something more valuable.

A family experience

For those wanting to join the thousands who visit each year, the park makes it affordable. Admission costs $15 for adults, $7 for children ages 6–12. You can camp overnight at the park and return to the field at dawn. During summer months, the park operates a small water park—an acknowledgment that diamond hunting in Arkansas can be brutal, with a heat index exceeding 110° Fahrenheit.

Sometimes rain turns the field into mud, which experienced searchers prefer because it makes diamonds easier to spot—but it can make for a messy adventure. As Mays put it, “Most visitors leave with a handful of interesting rocks, some newfound knowledge, and an urgent need for a long shower.”

If you don’t find any diamonds at the park, don’t despair—you could still potentially buy a $200,000 diamond-making machine on Alibaba.


The case of the coke-snorting Chihuahua

Every dog owner knows that canines are natural scavengers and that vigilance is required to ensure they don’t eat toxic substances. But accidental ingestions still happen—like the Chihuahua that vets discovered had somehow managed to ingest a significant quantity of cocaine, according to a case study published in the journal Frontiers in Veterinary Science.

There have been several studies investigating the adverse effects cocaine can have on the cardiovascular systems of both humans and animals. However, these controlled studies are primarily done in laboratory settings and often don’t match messier clinical realities. “Case reports are crucial in veterinary medicine by providing real-world examples,” said co-author Jake Johnson of North Carolina State University. “They capture clinical scenarios that larger studies might miss, preserve unusual presentations for future reference, and help build our collective understanding of rare presentations, ultimately improving emergency preparedness and treatment protocols.”

The 2-year-old male Chihuahua presented as lethargic and unresponsive. His owners had found him with his tongue sticking out, unable to focus visually. The Chihuahua was primarily an outdoor dog but was also allowed inside, and all his vaccines were up to date. Examination revealed bradycardia (a slow heart rate); a blue tinge to the dog’s mucous membranes, often a sign of too much unoxygenated hemoglobin circulating through the system; and dilated pupils. The dog’s symptoms faded after the vet administered a large dose of atropine, followed by epinephrine.

Then the dog was moved to a veterinary teaching hospital for further evaluation and testing. A urine test was positive for cocaine with traces of fentanyl, confirmed with liquid chromatography testing. The authors estimate the dog could have snorted (or ingested) as much as 96 mg of the drug. Apparently the Chihuahua had a history of ingesting things it shouldn’t, but the owners reported no prescription medications missing at home. They also did not have any controlled substances or illegal drugs like cocaine in the home.


Rapidly intensifying Hurricane Erin becomes historic storm due to strengthening

Erin’s central pressure was in the 990s this time yesterday, and it’s now in the 920s heading for the teens.

This will make Erin the fastest deepening Atlantic hurricane before Sept 1st. Beating Emily 2005, by a lot.


— Sam Lillo (@samlillo.bsky.social) August 16, 2025 at 9:29 AM

With a central pressure of 917 mb on Saturday, Erin ranks as the second-most intense Atlantic hurricane recorded in the last 50 years prior to today’s date, behind only Hurricane Allen in 1980.

Rapid intensification becoming more common

Storms like Erin are predicted to become more common due to climate change, scientists say. One study in 2019 found that, for the strongest 5 percent of Atlantic hurricanes, 24-hour intensification rates increased by about 3–4 mph per decade from 1982 to 2009. “Our results suggest a detectable increase of Atlantic intensification rates with a positive contribution from anthropogenic forcing,” the authors of the study, in Nature Communications, wrote.

Hurricane scientists generally agree that although the overall number of tropical storms and hurricanes may not increase in a warmer world, such background conditions are likely to produce more intense storms like Erin.

According to the US government’s Climate.gov website, this increase in intensity of tropical cyclones (TCs) is happening due to human-caused climate change.

“The proportion of severe TCs (Category 4 & 5) has increased, possibly due to anthropogenic climate change,” a coalition of authors wrote. “This proportion of intense TCs is projected to increase further, bringing a greater proportion of storms having more damaging wind speeds, higher storm surges, and more extreme rainfall rates. Most climate model studies project a corresponding reduction in the proportion of low-intensity cyclones, so the total number of TCs each year is projected to decrease or remain approximately the same.”

To date this year, the tropical Atlantic has seen lower overall activity than usual. But with Erin’s longevity and intensity, this season should soon reach and surpass normal levels of Accumulated Cyclone Energy (ACE), a measurement of a season’s total activity. The Atlantic season typically peaks in early September, with the majority of storms forming between early August and early October.
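ACE itself is simple bookkeeping: the squares of a storm’s maximum sustained winds (in knots) at six-hour intervals are summed and divided by 10,000, with sub-tropical-storm winds excluded. A quick sketch, using a purely hypothetical wind history rather than Erin’s actual advisories:

```python
def ace(winds_kt):
    """Accumulated Cyclone Energy from 6-hourly max sustained winds (knots).
    Only tropical-storm-strength readings (>= 34 kt) count."""
    return sum(v**2 for v in winds_kt if v >= 34) / 1e4

# Hypothetical 6-hourly history for a rapidly intensifying storm:
winds = [35, 45, 60, 85, 110, 130, 140, 135, 125]
print(f"ACE contribution: {ace(winds):.2f}")  # 9.65
```

A single long-lived major hurricane like Erin can therefore contribute as much ACE as several short-lived tropical storms combined.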

Forecast models indicate the likely development of more hurricanes within the next two weeks, but there is no clear consensus on whether they will impact land.


How a mysterious particle could explain the Universe’s missing antimatter


New experiments focused on understanding the enigmatic neutrino may offer insights.

An artist’s composition of the Milky Way seen with a neutrino lens (blue). Credit: IceCube Collaboration/NSF/ESO

Everything we see around us, from the ground beneath our feet to the most remote galaxies, is made of matter. For scientists, that has long posed a problem: According to physicists’ best current theories, matter and its counterpart, antimatter, ought to have been created in equal amounts at the time of the Big Bang. But antimatter is vanishingly rare in the universe. So what happened?

Physicists don’t know the answer to that question yet, but many think the solution must involve some subtle difference in the way that matter and antimatter behave. And right now, the most promising path into that unexplored territory centers on new experiments involving the mysterious subatomic particle known as the neutrino.

“It’s not to say that neutrinos are definitely the explanation of the matter-antimatter asymmetry, but a very large class of models that can explain this asymmetry are connected to neutrinos,” says Jessica Turner, a theoretical physicist at Durham University in the United Kingdom.

Let’s back up for a moment: When physicists talk about matter, that’s just the ordinary stuff that the universe is made of—mainly protons and neutrons (which make up the nuclei of atoms), along with lighter particles like electrons. Although the term “antimatter” has a sci-fi ring to it, antimatter is not all that different from ordinary matter. Typically, the only difference is electric charge: For example, the positron—the first antimatter particle to be discovered—matches an electron in its mass but carries a positive rather than a negative charge. (Things are a bit more complicated with electrically neutral particles. For example, a photon is considered to be its own antiparticle, but an antineutron is distinct from a neutron in that it’s made up of antiquarks rather than ordinary quarks.)

Various antimatter particles can exist in nature; they occur in cosmic rays and in thunderclouds, and are produced by certain kinds of radioactive decay. (Because people—and bananas—contain a small amount of radioactive potassium, they emit minuscule amounts of antimatter in the form of positrons.)

Small amounts of antimatter have also been created by scientists in particle accelerators and other experiments, at great effort and expense—putting a damper on science fiction dreams of rockets propelled by antimatter or planet-destroying weapons energized by it.

When matter and antimatter meet, they annihilate, releasing energy in the form of radiation. Such encounters are governed by Einstein’s famous equation, E = mc²—energy equals mass times the square of the speed of light—which says you can convert a little bit of matter into a lot of energy, or vice versa. (The positrons emitted by bananas and bodies have so little mass that we don’t notice the teeny amounts of energy released when they annihilate.) Because matter and antimatter annihilate so readily, it’s hard to make a chunk of antimatter much bigger than an atom, though in theory you could have everything from antimatter molecules to antimatter planets and stars.
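To put numbers on that conversion, here is a back-of-the-envelope sketch of E = mc²; the one-gram figure is an illustrative assumption, not something from the article:

```python
C = 299_792_458.0  # speed of light in m/s (exact by definition)

def annihilation_energy_joules(total_mass_kg):
    """E = mc^2 applied to the combined mass of matter plus antimatter."""
    return total_mass_kg * C**2

# One gram of antimatter annihilating with one gram of matter (2 g total):
energy = annihilation_energy_joules(2e-3)
tnt_kilotons = energy / 4.184e12  # 1 kiloton of TNT = 4.184e12 J
print(f"{energy:.3e} J, roughly {tnt_kilotons:.0f} kilotons of TNT")
```

That lopsided exchange rate—milligrams of mass for city-block quantities of energy—is why even the tiny amounts of antimatter made in accelerators come at such effort and expense.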

But there’s a puzzle: If matter and antimatter were created in equal amounts at the time of the Big Bang, as theory suggests, shouldn’t they have annihilated, leaving a universe made up of pure energy? Why is there any matter left?

Physicists’ best guess is that some process in the early universe favored the production of matter over antimatter—but exactly what that process was remains a mystery, and the question of why we live in a matter-dominated universe is one of the most vexing problems in all of physics.

Crucially, physicists haven’t been able to think of any such process that would mesh with today’s leading theory of matter and energy, known as the Standard Model of particle physics. That leaves theorists seeking new ideas, some as-yet-unknown physics that goes beyond the Standard Model. This is where neutrinos come in.

A neutral answer

Neutrinos are tiny particles without any electric charge. (The name translates as “little neutral one.”) According to the Standard Model, they ought to be massless, like photons, but experiments beginning in the 1990s showed that they do in fact have a tiny mass. (They’re at least a million times lighter than electrons, the extreme lightweights among normal matter.) Since physicists already know that neutrinos violate the Standard Model by having mass, their hope is that learning more about these diminutive particles might yield insights into whatever lies beyond.

Neutrinos have been slow to yield their secrets, however, because they barely interact with other particles. About 60 billion neutrinos from the Sun pass through every square centimeter of your skin each second. If those neutrinos interacted with the atoms in our bodies, they would probably destroy us. Instead, they pass right through. “You most likely will not interact with a single neutrino in your lifetime,” says Pedro Machado, a physicist at Fermilab near Chicago. “It’s just so unlikely.”

Experiments, however, have shown that neutrinos “oscillate” as they travel, switching among three different identities—physicists call them “flavors”: electron neutrino, muon neutrino, and tau neutrino. Oscillation measurements have also revealed that different-flavored neutrinos have slightly different masses.

Neutrinos are known to oscillate, switching between three varieties or “flavors.” Exactly how they oscillate is governed by the laws of quantum mechanics, and the probability of finding that an electron neutrino has transformed into a muon neutrino, for example, varies as a function of the distance traveled. (The third flavor state, the tau neutrino, is very rare.) Credit: Knowable Magazine
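In the commonly used two-flavor approximation, that distance dependence has a compact closed form: P = sin²(2θ) · sin²(1.27 Δm² L/E), with Δm² in eV², L in km, and E in GeV. The sketch below uses illustrative mixing parameters roughly in line with measured values; nothing here is specific to any one experiment:

```python
import math

def appearance_prob(L_km, E_GeV, sin2_2theta=0.085, dm2_eV2=2.5e-3):
    """Two-flavor oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E).
    Default mixing parameters are illustrative, approximate values."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# For a fixed-energy beam, the probability rises and falls with distance:
for L in (0, 300, 800, 1300):
    print(f"L = {L:4d} km -> P = {appearance_prob(L, 2.0):.4f}")
```

The oscillatory sin² factor is why experiments place their far detectors hundreds of kilometers from the beam source: the baseline is tuned so the transition probability is near a maximum for the beam’s energy.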

Neutrino oscillation is weird, but it may be weird in a useful way, because it might allow physicists to probe certain fundamental symmetries in nature—and these in turn may illuminate the most troubling of asymmetries, namely the universe’s matter-antimatter imbalance.

For neutrino researchers, a key symmetry is called charge-parity or CP symmetry. It’s actually a combination of two distinct symmetries: Changing a particle’s charge flips matter into antimatter (or vice versa), while changing a particle’s parity flips a particle into its mirror image (like turning a right-handed glove into a left-handed glove). So the CP-opposite version of a particle of ordinary matter is a mirror image of the corresponding antiparticle. But does this opposite particle behave exactly the same as the original one? If not, physicists say that CP symmetry is violated—a fancy way of saying that matter and antimatter behave slightly differently from one another. So any examples of CP symmetry violation in nature could help to explain the matter-antimatter imbalance.

In fact, CP violation has already been observed in some mesons, a type of subatomic particle typically made up of one quark and one antiquark, a surprising result first found in the 1960s. But it’s an extremely small effect, and it falls far short of being able to account for the universe’s matter-antimatter asymmetry.

In July 2025, scientists working at the Large Hadron Collider at CERN near Geneva reported clear evidence for a similar violation by one type of particle from a different family of subatomic particles known as baryons—but this newly observed CP violation is similarly believed to be much too small to account for the matter-antimatter imbalance.

Charge-parity or CP symmetry is a combination of two distinct symmetries: Changing a particle’s charge from positive to negative, for example, flips matter into antimatter (or vice versa), while changing a particle’s parity flips a particle into its mirror image (like turning a right-handed glove into a left-handed glove). Consider an electron: Flip its charge and you end up with a positron; flip its “handedness”—in particle physics, this is actually a quantum-mechanical property known as spin—and you get an electron with opposite spin. Flip both properties, and you get a positron that’s like a mirror image of the original electron. Whether this CP-flipped particle behaves the same way as the original electron is a key question: If it doesn’t, physicists say that CP symmetry is “violated.” Any examples of CP symmetry violation in nature could help to explain the matter-antimatter imbalance observed in the universe today. Credit: Knowable Magazine

Experiments on the horizon

So what about neutrinos? Do they violate CP symmetry—and if so, do they do it in a big enough way to explain why we live in a matter-dominated universe? This is precisely the question being addressed by a new generation of particle physics experiments. Most ambitious among them is the Deep Underground Neutrino Experiment (DUNE), which is now under construction in the United States; data collection could begin as early as 2029.

DUNE will employ the world’s most intense neutrino beam, which will fire both neutrinos and antineutrinos from Fermilab to the Sanford Underground Research Facility, located 800 miles away in South Dakota. (There’s no tunnel; the neutrinos and antineutrinos simply zip through the earth, for the most part hardly noticing that it’s there.) Detectors at each end of the beam will reveal how the particles oscillate as they traverse the distance between the two labs—and whether the behavior of the neutrinos differs from that of the antineutrinos.

DUNE won’t pin down the precise amount of neutrinos’ CP symmetry violation (if there is any), but it will set an upper limit on it. The larger the possible effect, the greater the discrepancy in the behavior of neutrinos versus antineutrinos, and the greater the likelihood that neutrinos could be responsible for the matter-antimatter asymmetry in the early universe.

The Deep Underground Neutrino Experiment (DUNE), now under construction, will see both neutrinos and antineutrinos fired from below Fermilab near Chicago to the Sanford Underground Research Facility some 800 miles away in South Dakota. Neutrinos can pass through earth unaltered, with no need of a tunnel. The ambitious experiment may reveal how the behavior of neutrinos differs from that of their antimatter counterparts, antineutrinos. Credit: Knowable Magazine

For Shirley Li, a physicist at the University of California, Irvine, the issue of neutrino CP violation is an urgent question, one that could point the way to a major rethink of particle physics. “If I could have one question answered by the end of my lifetime, I would want to know what that’s about,” she says.

Aside from being a major discovery in its own right, CP symmetry violation in neutrinos could challenge the Standard Model by pointing the way to other novel physics. For example, theorists say it would mean there could be two kinds of neutrinos—left-handed ones (the normal lightweight ones observed to date) and much heavier right-handed neutrinos, which are so far just a theoretical possibility. (The particles’ “handedness” refers to their quantum properties.)

These right-handed neutrinos could be as much as 10¹⁵ times heavier than protons, and they’d be unstable, decaying almost instantly after coming into existence. Although they’re not found in today’s universe, physicists suspect that right-handed neutrinos may have existed in the moments after the Big Bang — possibly decaying via a process that mimicked CP violation and favored the creation of matter over antimatter.

It’s even possible that neutrinos can act as their own antiparticles—that is, that neutrinos could turn into antineutrinos and vice versa. This scenario, which the discovery of right-handed neutrinos would support, would make neutrinos fundamentally different from more familiar particles like quarks and electrons. If antineutrinos can turn into neutrinos, that could help explain where the antimatter went during the universe’s earliest moments.

One way to test this idea is to look for an unusual type of radioactive decay — theorized but thus far never observed—known as “neutrinoless double-beta decay.” In regular double-beta decay, two neutrons in a nucleus simultaneously decay into protons, releasing two electrons and two antineutrinos in the process. But if neutrinos can act as their own antiparticles, then the two neutrinos could annihilate each other, leaving only the two electrons and a burst of energy.

A number of experiments are underway or planned to look for this decay process, including the KamLAND-Zen experiment, at the Kamioka neutrino detection facility in Japan; the nEXO experiment at the SNOLAB facility in Ontario, Canada; the NEXT experiment at the Canfranc Underground Laboratory in Spain; and the LEGEND experiment at the Gran Sasso laboratory in Italy. KamLAND-Zen, NEXT, and LEGEND are already up and running.

While these experiments differ in the details, they all employ the same general strategy: They use a giant vat of dense, radioactive material with arrays of detectors that look for the emission of unusually energetic electrons. (The electrons’ expected neutrino companions would be missing, with the energy they would have had instead carried by the electrons.)

While the neutrino remains one of the most mysterious of the known particles, it is slowly but steadily giving up its secrets. As it does so, it may crack the puzzle of our matter-dominated universe — a universe that happens to allow inquisitive creatures like us to flourish. The neutrinos that zip silently through your body every second are gradually revealing the universe in a new light.

“I think we’re entering a very exciting era,” says Turner.

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.


SpaceX reveals why the last two Starships failed as another launch draws near


“SpaceX can now proceed with Starship Flight 10 launch operations under its current license.”

SpaceX completed a six-engine static fire of the next Starship upper stage on August 1. Credit: SpaceX

SpaceX is continuing with final preparations for the 10th full-scale test flight of the company’s enormous Starship rocket after receiving launch approval Friday from the Federal Aviation Administration.

Engineers completed a final test of Starship’s propulsion system with a so-called “spin prime” test Wednesday at the launch site in South Texas. Ground crews then rolled the ship back to a nearby hangar for engine inspections, touchups to its heat shield, and a handful of other chores to ready it for liftoff.

SpaceX has announced the launch is scheduled for no earlier than next Sunday, August 24, at 6:30 pm local time in Texas (23:30 UTC).

Like all previous Starship launches, the huge 403-foot-tall (123-meter) rocket will take off from SpaceX’s test site in Starbase, Texas, just north of the US-Mexico border. The rocket consists of a powerful booster stage named Super Heavy, with 33 methane-fueled Raptor engines. Six Raptors power the upper stage, known simply as Starship.

With this flight, SpaceX officials hope to put several technical problems with the Starship program behind them. SpaceX is riding a streak of four disappointing Starship test flights from January through May, plus the explosion and destruction of another Starship vehicle during a ground test in June.

These setbacks followed a highly successful year for the world’s largest rocket in 2024, when SpaceX flew Starship four times and achieved new objectives on each flight. These accomplishments included the first catch of a Super Heavy booster back at the launch pad, proving the company’s novel concept for recovering and reusing the rocket’s first stage.

Starship’s record so far in 2025 is another story. The rocket’s inability to make it through an entire suborbital test flight has pushed back future program milestones, such as the challenging tasks of recovering and reusing the rocket’s upper stage, and demonstrating the ability to refuel another rocket in orbit. Those would both be firsts in the history of spaceflight.

These future tests, and more, are now expected to occur no sooner than next year. This time last year, SpaceX officials hoped to achieve them in 2025. All of these demonstrations are vital for Elon Musk to meet his promise of sending numerous Starships to build a settlement on Mars. Meanwhile, NASA is eager for SpaceX to reel off these tests as quickly as possible because the agency has selected Starship as the human-rated lunar lander for the Artemis Moon program. Once operational, Starship will also be key to building out SpaceX’s next-generation Starlink broadband network.

A good outcome on the next Starship test flight would give SpaceX footing to finally take a step toward these future demos after months of dithering over design dilemmas.

Elon Musk, SpaceX’s founder and CEO, presented an update on Starship to company employees in May. This chart shows the planned evolution from Starship Version 2 (left) to Version 3 (middle), and an even larger rocket (right) in the more distant future.

The FAA said Friday it formally closed the investigation into Starship’s most recent in-flight failure in May, when the rocket started leaking propellant after reaching space, rendering it unable to complete the test flight.

“The FAA oversaw and accepted the findings of the SpaceX-led investigation,” the federal regulator said in a statement. “The final mishap report cites the probable root cause for the loss of the Starship vehicle as a failure of a fuel component. SpaceX identified corrective actions to prevent a reoccurrence of the event.”

Diagnosing failures

SpaceX identified the most probable cause for the May failure as a faulty main fuel tank pressurization system diffuser located on the forward dome of Starship’s primary methane tank. The diffuser failed a few minutes after launch, when sensors detected a pressure drop in the main methane tank and a pressure increase in the ship’s nose cone just above the tank.

The rocket compensated for the drop in main tank pressure and completed its engine burn, but venting from the nose cone and a worsening fuel leak overwhelmed Starship’s attitude control system. Finally, detecting a major problem, Starship triggered automatic onboard commands to vent all remaining propellant into space and “passivate” itself before an unguided reentry over the Indian Ocean, prematurely ending the test flight.

Engineers recreated the diffuser failure on the ground during the investigation, and then redesigned the part to better direct pressurized gas into the main fuel tank. This will also “substantially decrease” strain on the diffuser structure, SpaceX said.

The FAA, charged with ensuring commercial rocket launches don’t endanger public safety, signed off on the investigation and gave the green light for SpaceX to fly Starship again when it is ready.

“SpaceX can now proceed with Starship Flight 10 launch operations under its current license,” the FAA said.

“The upcoming flight will continue to expand the operating envelope on the Super Heavy booster, with multiple landing burn tests planned,” SpaceX said in an update posted to its website Friday. “It will also target similar objectives as previous missions, including Starship’s first payload deployment and multiple reentry experiments geared towards returning the upper stage to the launch site for catch.”

File photo of Starship’s six Raptor engines firing on a test stand in South Texas. Credit: SpaceX

In the aftermath of the test flight in May, SpaceX hoped to fly Starship again by late June or early July. But another accident June 18, this time on the ground, delayed the program another couple of months. The Starship vehicle SpaceX assigned to the next flight, designated Ship 36, exploded on a test stand in Texas as teams filled it with cryogenic propellants for an engine test-firing.

The accident destroyed the ship and damaged the test site, prompting SpaceX to retrofit the sole active Starship launch pad to support testing of the next ship in line—Ship 37. Those tests included a brief firing of all six of the ship’s Raptor engines August 1.

After Ship 37’s final spin prime test Wednesday, workers transported the rocket back to a hangar for evaluation, and crews immediately got to work transitioning the launch pad back to its normal configuration to host a full Super Heavy/Starship stack.

SpaceX said the explosion on the test stand in June was likely caused by damage to a high-pressure nitrogen storage tank inside Starship’s payload bay section. This tank, called a composite overwrapped pressure vessel, or COPV, violently ruptured and led to the ship’s fiery demise. SpaceX said COPVs on upcoming flights will operate at lower pressures, and managers ordered additional inspections on COPVs to look for damage, more proof testing, more stringent acceptance criteria, and a hardware change to address the problem.

Try, try, try, try again

This year began with the first launch of an upgraded version of Starship, known as Version 2 or Block 2, in January. But the vehicle suffered propulsion failures and lost control before the upper stage completed its engine burn to propel the rocket on a trajectory carrying it halfway around the world to splash down in the Indian Ocean. Instead, the rocket broke apart and rained debris over the Bahamas and the Turks and Caicos Islands more than 1,500 miles downrange from Starbase.

That was followed in March by another Starship launch that had a similar result, again scattering debris near the Bahamas. In May, the ninth Starship test flight made it farther downrange and completed its engine burn before spinning out of control in space, preventing it from making a guided reentry to gather data on its heat shield.

Mastering the design of Starship’s heat shield is critical to the future of the program. As it has on all of this year’s test flights, SpaceX has installed several different ceramic and metallic tile designs on the next Starship to test alternative materials to protect the vehicle during its scorching plunge back into Earth’s atmosphere. Starship successfully made it through reentry for a controlled splashdown in the sea several times last year, but sensors detected hot spots on the rocket’s stainless steel skin after some of the tiles fell off during launch and descent.

Making the Starship upper stage reusable like the Super Heavy booster will require better performance from the heat shield. The demands of flying the ship home from orbit and attempting a catch at the launch pad far outweigh the challenge of recovering a booster. Coming back from space, the ship encounters much higher temperatures than the booster sees at lower velocities.

Therefore, SpaceX’s most important goal for the 10th Starship flight will be gathering information about how well the ship’s different heat shield materials hold up during reentry. Engineers want to have this data as soon as possible to inform design decisions about the next iteration of Starship—Version 3 or Block 3—that will actually fly into orbit. So far, all Starship launches have intentionally targeted a speed just shy of orbital velocity, bringing the vehicle back through the atmosphere halfway around the world.
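As a rough sanity check on what “just shy of orbital velocity” means, here is a back-of-envelope calculation of circular orbital speed at a low-Earth altitude. The constants are standard physics values, and the 200 km altitude is an illustrative assumption, not a figure from SpaceX:

```python
import math

# Circular orbital speed: v = sqrt(mu / r), where mu is Earth's
# gravitational parameter and r is the distance from Earth's center.
MU_EARTH = 3.986004418e14   # m^3/s^2 (standard value)
R_EARTH = 6.371e6           # mean Earth radius, m
altitude = 200e3            # assumed ~200 km altitude for illustration, m

v_circular = math.sqrt(MU_EARTH / (R_EARTH + altitude))  # m/s
print(f"Circular orbital speed at 200 km: {v_circular / 1000:.2f} km/s")
```

Cutting the engine burn slightly below this roughly 7.8 km/s threshold guarantees the vehicle reenters the atmosphere about halfway around the world rather than completing an orbit.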

Other objectives on the docket for Starship Flight 10 include the deployment of spacecraft simulators mimicking the size of SpaceX’s next-generation Starlink Internet satellites. Like the heat shield data, this has been part of the flight plan for the last three Starship launches, but the rocket never made it far enough to attempt any payload deployment tests.

Thirty-three Raptor engines power the Super Heavy booster downrange from SpaceX’s launch site near Brownsville, Texas, in January. Credit: SpaceX

Engineers also plan to put the Super Heavy booster through the wringer on the next launch. Instead of coming back to Starbase for a catch at the launch pad—something SpaceX has now done three times—the massive booster stage will target a controlled splashdown in the Gulf of Mexico east of the Texas coast. This will give SpaceX room to try new things with the booster, such as controlling the rocket’s final descent with a different mix of engines to see if it could overcome a problem with one of its three primary landing engines.

SpaceX tried to experiment with new ways of landing the Super Heavy booster on the last test flight, too. The Super Heavy exploded before reaching the ocean, likely due to a structural failure of the rocket’s fuel transfer tube, an internal pipe where methane flows from the fuel tank at the top of the rocket to the engines at the bottom of the booster. SpaceX said the booster flew a higher angle of attack during its descent in May to test the limits of the rocket’s performance. It seems engineers found the limit, and the booster won’t fly at such a high angle of attack next time.

SpaceX has just two Starship Version 2 vehicles in its inventory before moving on to the taller Version 3 configuration, which will also debut improved Raptor engines.

“Every lesson learned, through both flight and ground testing, continues to feed directly into designs for the next generation of Starship and Super Heavy,” SpaceX said. “Two flights remain with the current generation, each with test objectives designed to expand the envelope on vehicle capabilities as we iterate towards fully and rapidly reusable, reliable rockets.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Rocket Report: Ariane 6 beats Vulcan to third launch; China’s first drone ship


Why is China’s heavy-lift Long March 5B able to launch only 10 Guowang satellites at a time?

Wearing their orange launch and reentry spacesuits, Artemis II commander Reid Wiseman (bottom) and pilot Victor Glover (top) walk out of an emergency egress basket during nighttime training at Launch Complex 39B.

Welcome to Edition 8.06 of the Rocket Report! Two of the world’s most storied rocket builders not named SpaceX achieved major successes this week. Arianespace’s Ariane 6 rocket launched from French Guiana on its third flight Tuesday night with a European weather satellite. Less than 20 minutes later, United Launch Alliance’s third Vulcan rocket lifted off from Florida on a US military mission. These are two of the three big rockets developed in the Western world that have made their orbital debuts in the last two years, alongside Blue Origin’s New Glenn launcher. Ariane 6 narrowly won the “race” to reach its third orbital flight, but if you look at it another way, Ariane 6 reached its third flight milestone 13 months after its inaugural launch. It took Vulcan more than 19 months, and New Glenn has flown just once. SpaceX’s Super Heavy/Starship rocket has flown nine times but has yet to reach orbit.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Sixth success for sea-launched Chinese rocket. Private Chinese satellite operator Geespace added 11 spacecraft to its expanding Internet of Things constellation on August 8, aiming to boost low-power connectivity in key emerging markets, Space News reports. The 11 satellites rode into orbit aboard a solid-fueled Jielong 3 (Smart Dragon 3) rocket lifting off from an ocean platform in the Yellow Sea off the coast of Rizhao, a city in eastern China’s Shandong province. This was the sixth flight of the Jielong 3, a rocket developed by a commercially oriented spinoff of the state-owned China Academy of Launch Vehicle Technology.

Mistaken for a meteor … The fourth stage of the Jielong 3 rocket, left in orbit after deploying its 11 satellite payloads, reentered the atmosphere late Sunday night. The fiery and destructive reentry created a fireball that streaked across the skies over Spain, the Spanish newspaper El Mundo reports. Many Spanish residents identified the streaking object as a meteor associated with the Perseid meteor shower. But it turned out to be a piece of China’s Jielong 3 rocket. Any debris that may have survived the scorching reentry likely fell into the Mediterranean Sea.


Portugal green-lights Azores spaceport. The Portuguese government has granted the Atlantic Spaceport Consortium a license to build and operate a rocket launch facility on the island of Santa Maria in the Azores, European Spaceflight reports. The Atlantic Spaceport Consortium (ASC) was founded in 2019 with the goal of developing a commercial spaceport on Santa Maria, 1,500 kilometers off the Portuguese mainland. In September 2024, the company showcased the island’s suitability as a launch site by launching two small solid-fuel amateur-class rockets that it developed in-house.

What’s on deck? … The spaceport license granted by Portugal’s regulatory authorities does not cover individual launches themselves. Those must be approved in a separate licensing process. It’s likely that the launch site on Santa Maria Island will initially host suborbital launches, including flights by the Polish rocket company SpaceForest. The European Space Agency has also selected Santa Maria as the landing site for the first flight of the Space Rider lifting body vehicle after it launches into orbit, perhaps in 2027. (submitted by claudiodcsilva)

Why is Jeff Bezos buying launches from Elon Musk? Early Monday morning, a Falcon 9 rocket lifted off from its original launch site in Florida. Remarkably, it was SpaceX’s 100th launch of the year. Perhaps even more notable was the rocket’s payload: two dozen Project Kuiper satellites, which were dispensed into low-Earth orbit on target, Ars reports. This was SpaceX’s second launch of satellites for Amazon, which is developing a constellation to deliver low-latency broadband Internet around the world. SpaceX, then, just launched a direct competitor to its Starlink network into orbit. And it was for the founder of Amazon, Jeff Bezos, who owns a rocket company of his own in Blue Origin.

Several answers … So how did it come to this—Bezos and Elon Musk, competitors in so many ways, working together in space? There are several answers. Most obviously, launching payloads for customers is one of SpaceX’s two core business areas, alongside Starlink. SpaceX sells launch services to all comers and typically offers the lowest price per kilogram to orbit. There’s immediate revenue to be made if a company with deep pockets like Amazon is willing to pay SpaceX. Second, the other options to get Kuiper satellites into orbit just aren’t available at the volume Amazon needs. Amazon has reserved the lion’s share of its Kuiper launches with SpaceX’s competitors: United Launch Alliance, Arianespace, and Jeff Bezos’ own space company Blue Origin. Lastly, SpaceX could gain some leverage by providing launch services to Amazon. In return for a launch, SpaceX has asked other companies with telecom satellites, such as OneWeb and Kepler Communications, to share spectrum rights to enable Starlink to expand into new markets.

Trump orders cull of commercial launch regulations. President Donald Trump signed an executive order on Wednesday directing government agencies to “eliminate or expedite” environmental reviews for commercial launch and reentry licenses, Ars reports. The FAA, part of the Department of Transportation, is responsible for granting the licenses after ensuring launch and reentries don’t endanger the public, comply with environmental laws, and comport with US national interests. The drive toward deregulation will be welcome news for companies like SpaceX, led by onetime Trump ally Elon Musk; SpaceX conducts nearly all of the commercial launches and reentries licensed by the FAA.

Deflecting scrutiny? … The executive order does several things, and not all of them will be as controversial as the potential elimination of environmental reviews. The order includes a clause directing the government to reevaluate, amend, or rescind a slate of launch-safety regulations written during the first Trump administration. The FAA published the new regulations, known as Part 450, in 2020, and they went into effect in 2021, but space companies have complained that they are too cumbersome and have slowed down the license approval process. The Biden administration established a committee last year to look at reforming the regulations in response to industry’s outcry. Another part of the order that will likely lack bipartisan support is a call for making the head of the FAA’s commercial spaceflight division a political appointee. This job has historically been held by a career civil servant.

Ariane 6 launches European weather satellite. Europe’s new Ariane 6 rocket successfully launched for a third time on Tuesday night, carrying a satellite into orbit for weather forecasting and climate monitoring, Euronews reports. “The success of this second commercial launch confirms the performance, reliability, and precision of Ariane 6,” said Martin Sion, CEO of ArianeGroup, operator of the rocket. “Once again, the new European heavy-lift launcher meets Europe’s needs, ensuring sovereign access to space,” Sion added. It marks the second commercial flight of the rocket, which has been in development for almost a decade with the European Space Agency (ESA). It is significant as it gives Europe independent access to space and reduces its reliance on Elon Musk’s SpaceX.

Eumetsat returns to Europe … The polar-orbiting weather satellite launched by the Ariane 6 rocket this week is owned by the European Organization for the Exploitation of Meteorological Satellites, or Eumetsat. Headquartered in Germany, Eumetsat is a multinational organization that owns and operates geostationary and polar-orbiting weather satellites, watching real-time storm development over Europe and Africa, while feeding key data into global weather and climate models. Just last month, Eumetsat’s newest geostationary weather satellite launched from Florida on a SpaceX Falcon 9 rocket because of delays with the Ariane 6 program.

Rocket Lab isn’t giving up on 2025 yet. Rocket Lab continues to push for a first launch of its medium-lift Neutron rocket before the end of the year, but company executives acknowledge that schedule has no margin for error, Space News reports. It may seem unlikely, but Rocket Lab’s founder and CEO, Peter Beck, said in a conference call with investment analysts last week that the company has a “green light” schedule to debut the Neutron rocket within the next four-and-a-half months. There’s still much work to do to prepare for the first launch, and the inaugural flight seems almost certain to slip into 2026.

Launch pad nearly complete … Rocket Lab plans to host a ribbon-cutting at the Neutron rocket’s new launch pad on Wallops Island, Virginia, on August 28. This launch pad is located just south of the spaceport’s largest existing launch facility, where Northrop Grumman’s Antares rocket lifts off on resupply missions to the International Space Station. Rocket Lab has a small launch pad for its light-class Electron launcher co-located with the Antares pad at Wallops.

Chinese company reveals drone ship. The Chinese launch company iSpace has released the first photos of an ocean-going recovery ship to support the landings of reusable first-stage boosters. The company hosted a dedication ceremony in Yangzhou, China, earlier this month for the vessel, which looks similar to SpaceX’s rocket landing drone ships. In a press release, iSpace said the ship, named “Interstellar Return,” is China’s first marine rocket recovery ship, and the fifth such vessel in the world. SpaceX has three drone ships in its fleet for the Falcon 9 rocket, and Blue Origin has one for the New Glenn booster.

Rocket agnostic … The recovery ship will be compatible with various medium- and large-sized reusable rockets, iSpace said. But its main use will be as the landing site for the first stage booster for iSpace’s own Hyperbola 3 rocket, a medium-lift launcher with methane-fueled engines. The company has completed multiple vertical takeoff and landing tests of prototype boosters for the Hyperbola 3. The recovery ship measures about 100 meters long and 42 meters wide, with a displacement of 17,000 metric tons, and it has the ability to perform “intelligent unmanned operations” thanks to a dynamic positioning system, according to iSpace.

Vulcan’s first national security launch. United Launch Alliance delivered multiple US military satellites into a high-altitude orbit after a prime-time launch Tuesday night, marking an important transition from development to operations for the company’s new Vulcan rocket, Ars reports. This mission, officially designated USSF-106 by the US Space Force, was the first flight of ULA’s Vulcan rocket to carry national security payloads. Two test flights of the Vulcan rocket last year gave military officials enough confidence to certify it for launching the Pentagon’s medium-to-large space missions.

Secrecy in the fairing … The Vulcan rocket’s Centaur upper stage released its payloads into geosynchronous orbit more than 22,000 miles (nearly 36,000 kilometers) over the equator roughly seven hours after liftoff. One of the satellites deployed by the Vulcan rocket is an experimental navigation testbed named NTS-3. It will demonstrate new technologies that could be used on future GPS navigation satellites. But the Space Force declined to disclose any information about the mission’s other payloads.

Artemis II crew trains for nighttime ops. The four astronauts training to fly around the Moon on NASA’s Artemis II mission next year have been at Kennedy Space Center in Florida this week. One of the reasons they were at Kennedy was to run through a rehearsal for what it will be like to work at the launch pad if the Artemis II mission ends up lifting off at night. Astronauts Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen put on their spacesuits and rehearsed emergency procedures at Launch Complex 39B, replicating a daytime simulation they participated in last year.

Moving forward … The astronauts also went inside the Vehicle Assembly Building to practice using egress baskets they would use to quickly escape the launch pad in the event of a prelaunch emergency. The baskets are fastened to the mobile launch tower inside the VAB, where technicians are assembling and testing the Space Launch System rocket for the Artemis II mission. Later this year, the astronauts will return to Kennedy for a two-part countdown demonstration test. First, the crew members will board their Orion spacecraft once it’s stacked atop the SLS rocket inside the VAB. Then, in part two, the astronauts will again rehearse emergency evacuation procedures once the rocket rolls to the launch pad.

China’s Long March 5B flies again. China is ramping up construction of its national satellite-Internet megaconstellation with the successful deployment of another batch of Guowang satellites by a heavy-lift Long March 5B rocket on Wednesday, Space.com reports. Guowang, whose name translates as “national network,” will be operated by China SatNet, a state-run company established in 2021. The constellation will eventually consist of about 13,000 satellites if all goes to plan.

Make this make sense … Guowang is a long way from that goal. Wednesday’s launch was the eighth overall for the network, but it was the fourth for the project in less than three weeks. Each mission lofts just five to 10 Guowang spacecraft, apparently because each satellite is quite large. For comparison, SpaceX launches 24 to 28 satellites on each mission to assemble its Starlink broadband megaconstellation, which currently consists of nearly 8,100 operational spacecraft. The Long March 5B is China’s most powerful operational rocket, with a lift capacity somewhat higher than SpaceX’s Falcon 9 but below that of the Falcon Heavy. That raises the question of just how big the Guowang satellites really are, and whether they have a purpose beyond broadband Internet service.
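The cadence gap is easy to quantify with the figures quoted above. A quick sketch (the per-launch satellite counts are the ranges reported here, not official numbers):

```python
# How many launches would Guowang's ~13,000-satellite goal take at the
# observed rate of 5-10 satellites per Long March 5B mission, versus
# Starlink's 24-28 per Falcon 9? Figures are the article's ranges.
TARGET = 13_000

for per_launch in (5, 10):
    print(f"Guowang at {per_launch}/launch: {TARGET // per_launch:,} launches")

for per_launch in (24, 28):
    print(f"Starlink-style at {per_launch}/launch: {TARGET // per_launch:,} launches")
```

At Guowang’s current cadence, the goal implies on the order of 1,300 to 2,600 launches; at Starlink-like packing, roughly 460 to 540.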

Next three launches

Aug. 16: Kinetica 1 | Unknown Payload | Jiuquan Satellite Launch Center, China | 07:35 UTC

Aug. 17: Long March 4C | Unknown Payload | Xichang Satellite Launch Center, China | 09:05 UTC

Aug. 17: Long March 6A | Unknown Payload | Taiyuan Satellite Launch Center, China | 14:15 UTC




Incan numerical recordkeeping system may have been widely used

Women in STEM: Inca Edition

In the late 1500s, a few decades after the khipu in this recent study was made, an Indigenous chronicler named Guaman Poma de Ayala described how older women used khipu to “keep track of everything” in aqllawasai: places that basically functioned as finishing schools for Inca girls. Teenage girls, chosen by local nobles, were sent away to live in seclusion at the aqllawasai to weave cloth, brew chicha, and prepare food for ritual feasts.

What happened to the girls after aqllawasai graduation was a mixed bag. Some of them were married (or given as concubines) to Inca nobles, others became priestesses, and some ended up as human sacrifices. But some of them actually got to go home again, and they probably took their knowledge of khipu with them.

“I think this is the likely way in which khipu literacy made it into the countryside and the villages,” said Hyland. “These women, who were not necessarily elite, taught it to their children, etc.” That may be where the maker of KH0631 learned their skills: either in an aqllawasai or from a graduate of one (we still don’t know this particular khipu-maker’s gender).

“Science confirming what they already knew”

The discovery that this finely crafted khipu was the work of a commoner shows that numeracy was widespread and surprisingly egalitarian in the Inca empire, and it also reveals a centuries-long thread connecting the Inca to their descendants.

Modern people—the descendants of the Inca—still use khipu today in some parts of Peru and Chile. Some scholars (mostly non-Indigenous ones) have argued that these modern khipu weren’t really based on knowledge passed down for centuries but were instead just a clumsy attempt to copy the Inca technology. But if commoners were using khipu in the Inca empire, it makes sense for that knowledge to have been passed down to modern villagers.

“It points to a continuity between Inka and modern khipus,” said Hyland. “In the few modern villages with living khipu traditions, they already believe in this continuity, so it would be the case of science confirming what they already know.”

Science Advances, 2025. DOI: 10.1126/sciadv.adv1950  (About DOIs).



Ice discs slingshot across a metal surface all on their own


VA Tech experiment was inspired by Death Valley’s mysterious “sailing stones” at Racetrack Playa.

Graduate student Jack Tapocik sets up ice on an engineered surface in the VA Tech lab of Jonathan Boreyko. Credit: Alex Parrish/Virginia Tech

Scientists have figured out how to make frozen discs of ice self-propel across a patterned metal surface, according to a new paper published in the journal ACS Applied Materials and Interfaces. It’s the latest breakthrough to come out of the Virginia Tech lab of mechanical engineer Jonathan Boreyko.

A few years ago, Boreyko’s lab experimentally demonstrated a three-phase Leidenfrost effect in water vapor, liquid water, and ice. The Leidenfrost effect is what happens when you dash a few drops of water onto a very hot, sizzling skillet. The drops levitate, sliding around the pan with wild abandon. If the surface is at least 400° Fahrenheit (well above the boiling point of water), cushions of water vapor, or steam, form underneath them, keeping them levitated. The effect also works with other liquids, including oils and alcohol, but the temperature at which it manifests will be different.

Boreyko’s lab discovered that this effect can also be achieved in ice simply by placing a thin, flat disc of ice on a heated aluminum surface. When the plate was heated above 150° C (302° F), the ice did not levitate on a vapor the way liquid water does. Instead, there was a significantly higher threshold of 550° Celsius (1,022° F) for levitation of the ice to occur. Unless that critical threshold is reached, the meltwater below the ice just keeps boiling in direct contact with the surface. Cross that critical point and you will get a three-phase Leidenfrost effect.
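The two temperature thresholds quoted above can be double-checked with the standard Celsius-to-Fahrenheit conversion:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Thresholds from the article:
print(c_to_f(150))  # 302.0  -- meltwater boils in direct contact with the plate
print(c_to_f(550))  # 1022.0 -- three-phase Leidenfrost levitation of the ice begins
```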

The key is a temperature differential in the meltwater just beneath the ice disc. The bottom of the meltwater is boiling, but the top of the meltwater sticks to the ice. It takes a lot to maintain such an extreme difference in temperature, and doing so consumes most of the heat from the aluminum surface, which is why it’s harder to achieve levitation of an ice disc. Ice can suppress the Leidenfrost effect even at very high temperatures (up to 550° C), which means that using ice particles instead of liquid droplets would be better for many applications involving spray quenching: rapid cooling in nuclear power plants, firefighting, or rapid heat quenching when shaping metals.

This time around, Boreyko et al. have turned their attention to what the authors term “a more viscous analog” to a Leidenfrost ratchet, a form of droplet self-propulsion. “What’s different here is we’re no longer trying to levitate or even boil,” Boreyko told Ars. “Now we’re asking a more straightforward question: Is there a way to make ice move across the surface directionally as it is melting? Regular melting at room temperature. We’re not boiling, we’re not levitating, we’re not Leidenfrosting. We just want to know, can we make ice shoot across the surface if we design a surface in the right way?”

Mysterious moving boulders

The researchers were inspired by Death Valley’s famous “sailing stones” on Racetrack Playa. Watermelon-sized boulders are strewn throughout the dry lake bed, and they leave trails in the cracked earth as they slowly migrate a couple of hundred meters each season. Scientists didn’t figure out what was happening until 2014. Although co-author Ralph Lorenz (Johns Hopkins University) admitted he thought theirs would be “the most boring experiment ever” when they first set it up in 2011, two years later, the boulders did indeed begin to move while the playa was covered with a pond of water a few inches deep.

So Lorenz and his co-authors were finally able to identify the mechanism. The ground is too hard to absorb rainfall, and that water freezes when the temperature drops. When temperatures rise above freezing again, the ice starts to melt, creating ice rafts floating on the meltwater. And when the winds are sufficiently strong, they cause the ice rafts to drift along the surface.


A sailing stone at Death Valley’s Racetrack Playa. Credit: Tahoenathan/CC BY-SA 3.0

“Nature had to have wind blowing to kind of push the boulder and the ice along the meltwater that was beneath the ice,” said Boreyko. “We thought, what if we could have a similar idea of melting ice moving directionally but use an engineered structure to make it happen spontaneously so we don’t have to have energy or wind or anything active to make it work?”

The team made their ice discs by pouring distilled water into thermally insulated polycarbonate Petri dishes. This resulted in bottom-up freezing, which minimizes air bubbles in the ice. They then milled asymmetric grooves into uncoated aluminum plates in a herringbone pattern—essentially creating arrowhead-shaped channels—and then bonded them to hot plates heated to the desired temperature. Each ice disc was placed on the plate with rubber tongs, and the experiments were filmed from various angles to fully capture the disc behavior.

The herringbone pattern is the key. “The directionality is what really pushes the water,” Jack Tapocik, a graduate student in Boreyko’s lab, told Ars. “The herringbone doesn’t allow for water to flow backward, the water has to go forward, and that basically pushes the water and the ice together forward. We don’t have a treated surface, so the water just sits on top and the ice all moves as one unit.”

Boreyko draws an analogy to tubing on a river, except it’s the directional channels rather than gravity causing the flow. “You can see [in the video below] how it just follows the meltwater,” he said. “This is your classic entrainment mechanism where if the water flows that way and you’re floating on the water, you’re going to go the same way, too. It’s basically the same idea as what makes a Leidenfrost droplet also move one way: It has a vapor flow underneath. The only difference is that was a liquid drifting on a vapor flow, whereas now we have a solid drifting on a liquid flow. The densities and viscosities are different, but the idea is the same: You have a more dense phase that is drifting on the top of a lighter phase that is flowing directionally.”

Jonathan Boreyko/Virginia Tech

Next, the team repeated the experiment, this time coating the aluminum herringbone surface with water-repellant spray, hoping to speed up the disc propulsion. Instead, they found that the disc ended up sticking to the treated surface for a while before suddenly slingshotting across the metal plate.

“It’s a totally different concept with totally different physics behind it, and it’s so much cooler,” said Tapocik. “As the ice is melting on these coated surfaces, the water just doesn’t want to sit within the channels. It wants to sit on top because of the [hydrophobic] coating we have on there. The ice is directly sticking now to the surface, unlike before when it was floating. You get this elongated puddle in front. The easiest place [for the ice] to be is in the center of this giant, long puddle. So it re-centers, and that’s what moves it forward like a slingshot.”

Essentially, the water keeps expanding asymmetrically, and that difference in shape gives rise to a mismatch in surface tension because the amount of force that surface tension exerts on a body depends on curvature. The flatter puddle shape in front has less curvature than the smaller shape in back. As the video below shows, when the mismatch in surface tension becomes sufficiently strong, “It just rips the ice off the surface and flings it along,” said Boreyko. “In the future, we could try putting little things like magnets on top of the ice. We could probably put a boulder on it if we wanted to. The Death Valley effect would work with or without a boulder because it’s the floating ice raft that moves with the wind.”

Jonathan Boreyko/Virginia Tech
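The curvature argument above can be made concrete with the Laplace pressure: a liquid surface with surface tension γ and radius of curvature R sustains a pressure difference of roughly ΔP ≈ γ/R (for a cylindrically curved edge), so the flatter front puddle exerts a weaker surface-tension pull than the tightly curved back. The radii below are invented for illustration, not measurements from the study:

```python
# Illustrative Laplace-pressure estimate for a puddle edge modeled as a
# cylindrical surface: dP = gamma / R. Radii are made-up example values.
GAMMA_WATER = 0.072  # surface tension of water near room temperature, N/m

for label, radius_m in [("flatter front puddle", 5e-3),
                        ("tightly curved back edge", 1e-3)]:
    laplace_pressure = GAMMA_WATER / radius_m  # Pa
    print(f"{label}: R = {radius_m * 1e3:.0f} mm -> dP ~ {laplace_pressure:.1f} Pa")
```

The mismatch between the two pressures is the surface-tension imbalance that, per the researchers, eventually unpins the disc and flings it forward.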

One potential application is energy harvesting. For example, one could pattern the metal surface in a circle rather than a straight line so the melting ice disk would continually rotate. Put magnets on the disk, and they would also rotate and generate power. One might even attach a turbine or gear to the rotating disc.

The effect might also provide a more energy-efficient means of defrosting, a longstanding research interest for Boreyko. “If you had a herringbone surface with a frosting problem, you could melt the frost, even partially, and use these directional flows to slingshot the ice off the surface,” he said. “That’s both faster and uses less energy than having to entirely melt the ice into pure water. We’re looking at potentially over a tenfold reduction in heating requirements if you only have to partially melt the ice.”

That said, “Most practical applications don’t start from knowing the application beforehand,” said Boreyko. “It starts from ‘Oh, that’s a really cool phenomenon. What’s going on here?’ It’s only downstream from that it turns out you can use this for better defrosting of heat exchangers for heat pumps. I just think it’s fun to say that we can make a little melting disk of ice very suddenly slingshot across the table. It’s a neat way to grab your attention and think more about melting and ice and how all this stuff works.”

DOI: ACS Applied Materials and Interfaces, 2025. 10.1021/acsami.5c08993  (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

misunderstood-“photophoresis”-effect-could-loft-metal-sheets-to-exosphere

Misunderstood “photophoresis” effect could loft metal sheets to exosphere


Photophoresis can generate a tiny bit of lift without any moving parts.

Image of a wooden stand holding a sealed glass bulb with a spinning set of vanes, each of which has a lit and dark side.

Most people would recognize the device in the image above, although they probably wouldn’t know it by its formal name: the Crookes radiometer. As its name implies, placing the radiometer in light produces a measurable change: the blades start spinning.

Unfortunately, many people misunderstand the physics of its operation (which we’ll return to shortly). The actual forces that drive the blades to spin, called photophoresis, can act on a variety of structures as long as they’re placed in a sufficiently low-density atmosphere. Now, a team of researchers has figured out that it may be possible to use the photophoretic effect to loft thin sheets of metal into the upper atmosphere of Earth and other planets. While their idea is to use it to send probes to the portion of the atmosphere that’s too high for balloons and too low for satellites, they have tested some working prototypes a bit closer to the Earth’s surface.

Photophoresis

It’s quite common—and quite wrong—to see explanations of the Crookes radiometer that involve radiation pressure. Supposedly, the dark sides of the blades absorb more photons, each of which carries a tiny bit of momentum, giving the dark side of the blades a consistent push. The problem with this explanation is that photons are bouncing off the silvery side, which imparts even more momentum. If the device were spinning due to radiation pressure, it would be turning in the opposite direction than it actually does.
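The momentum bookkeeping behind that argument is simple: absorbed light delivers momentum flux P/c to a surface, while reflected light delivers 2P/c, since the photons reverse direction. A quick sketch (the 0.01 W figure is an arbitrary illustrative value, not a measured one):

```python
# Radiation pressure on a flat surface under normally incident light:
# an absorbing surface receives P/c of force, a reflecting one 2P/c,
# because reflected photons transfer twice their momentum.

C = 299_792_458.0  # speed of light, m/s

def radiation_force(power_w: float, reflective: bool) -> float:
    """Force (N) exerted by normally incident light of the given power."""
    return (2.0 if reflective else 1.0) * power_w / C

# Illustrative: 0.01 W of light falling on one radiometer blade.
dark = radiation_force(0.01, reflective=False)   # absorbing (dark) side
shiny = radiation_force(0.01, reflective=True)   # reflecting (silvered) side
print(f"dark: {dark:.2e} N, shiny: {shiny:.2e} N")
```

Two things fall out: the shiny side gets pushed roughly twice as hard as the dark side, so radiation pressure would spin the blades the wrong way, and the forces involved are tens of piconewtons, far too small to account for the brisk spinning you actually observe.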

An excess of the absorbed photons on the dark side is key to understanding how it works, though. Photophoresis operates through the temperature difference that develops between the warm, light-absorbing dark side of the blade and the cooler silvered side.

Any gas molecule that bumps into the dark side will likely pick up some of the excess thermal energy from it and move away from the blade faster than it arrived. At the sorts of atmospheric pressures we normally experience, these molecules don’t get very far before they bump into other gas molecules, which keeps any significant differences from developing.

But a Crookes radiometer is in a sealed glass container with a far lower air pressure. This allows the gas molecules to speed off much farther from the dark surface of the blade before they run into anything, creating an area of somewhat lower pressure at its surface. That causes gas near the surface of the shiny side to rush around and fill this lower-pressure area, imparting the force that starts the blades turning.
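The pressure dependence comes down to the mean free path of air molecules, which is inversely proportional to pressure. A standard kinetic-theory estimate (the ~1 Pa bulb pressure below is a typical assumed value, not a specification of any particular radiometer):

```python
# Sketch of why photophoresis needs a sparse atmosphere: "hot" molecules
# leaving the dark side must travel far before colliding with other gas
# molecules. Kinetic theory gives the mean free path as
#   lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
D_AIR = 3.7e-10      # approximate effective molecular diameter of air, m

def mean_free_path(pressure_pa: float, temp_k: float = 300.0) -> float:
    """Mean free path (m) of an air molecule at the given pressure and temperature."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * D_AIR**2 * pressure_pa)

sea_level = mean_free_path(101_325)  # ~70 nanometers: constant collisions
bulb = mean_free_path(1.0)           # millimeters at ~1 Pa, comparable to the blade size

print(f"sea level: {sea_level * 1e9:.0f} nm, low-pressure bulb: {bulb * 1e3:.1f} mm")
```

At atmospheric pressure the mean free path is tens of nanometers, so no pressure imbalance can build up; at a pascal or so it stretches to millimeters, comparable to the blades themselves, which is what lets the asymmetry develop.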

It’s pretty impressively inefficient in that sort of configuration, though. So people have spent a lot of time trying to design alternative configurations that can generate a bit more force. One idea with a lot of research traction is a setup that involves two thin metal sheets—one light, one dark—arranged parallel to each other. Both sheets would be heavily perforated to cut down on weight. And a subset of them would have a short pipe connecting holes on the top and bottom sheet. (This has picked up the nickname “nanocardboard.”)

These pipes would serve several purposes. One is to simply link the two sheets into a single unit. Another is to act as an insulator, keeping heat from moving from the dark sheet to the light one, and thus enhancing the temperature gradient. Finally, they provide a direct path for air to move from the top of the light-colored sheet to the bottom of the dark one, giving a bit of directed thrust to help keep the sheets aloft.

Optimization

As you might imagine, there are a lot of free parameters you can tweak: the size of the gap between the sheets, the density of perforations in them, the number of those holes that are connected by a pipe, and so on. So a small team of researchers developed a system to model different configurations and attempt to optimize for lift. (We’ll get to their motivations for doing so a bit later.)

Starting with a disk of nanocardboard, “The inputs to the model are the geometric, optical and thermal properties of the disk, ambient gas conditions, and external radiative heat fluxes on the disk,” as the researchers describe it. “The outputs are the conductive heat fluxes on the two membranes, the membrane temperatures, and the net photophoretic lofting force on the structure.” In general, the ambient gas conditions needed to generate lift are similar to the ones inside the Crookes radiometer: well below the air pressure at sea level.

The model suggested that three trends should influence any final designs. The first is that the density of perforations is a balance: at relatively low elevations (meaning a denser atmosphere), many perforations increase the lift of large sheets, but they decrease the lift of small items at high elevations. The second is that, rather than increasing with surface area, lift per unit area tends to drop as sheets get larger, because larger sheets are more likely to equilibrate to the prevailing temperatures. A square millimeter of nanocardboard produces over 10 times more lift per surface area than a 10-square-centimeter piece of the same material.

Finally, the researchers calculate that the lift is at its maximum in the mesosphere, the area just above the stratosphere (50–100 kilometers above Earth’s surface).

Light and lifting

The researchers then built a few sheets of nanocardboard to test the output of their model. The actual products, primarily made of chromium, aluminum, and aluminum oxide, were incredibly light, weighing only a gram per square meter of material. When illuminated by a laser or white LED, they generated measurable force on a testing device, provided the atmosphere was kept sufficiently sparse. With an exposure equivalent to sunlight, the device generated more lift than its own weight.

It’s a really nice demonstration that we can take a relatively obscure and weak physical effect and design devices that can levitate in the upper atmosphere, powered by nothing more than sunlight—which is pretty cool.

But the researchers have a goal beyond that. The mesosphere turns out to be a really difficult part of the atmosphere to study. It’s not dense enough to support balloons or aircraft, but it still has enough gas to make quick work of any satellites. So the researchers really want to turn one of these devices into an instrument-carrying aircraft. Unfortunately, that would mean adding the structural components needed to hold instruments, along with the instruments themselves. And even in the mesosphere, where lift is optimal, these things do not generate much in the way of lift.

Plus, there’s the issue of getting them there, given that they won’t generate enough lift in the lower atmosphere, so they’ll have to be carried into the upper stratosphere by something else and then be released gently enough to not damage their fragile structure. And then, unless you’re lofting them during the polar summer, they will likely come floating back down at night.

None of this is to say this is an impossible dream. But there are definitely a lot of very large hurdles between the work and practical applications on Earth—much less on Mars, where the authors suggest the system could also be used to explore the mesosphere. But even if that doesn’t end up being realistic, this is still a pretty neat bit of physics.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

trump-orders-cull-of-regulations-governing-commercial-rocket-launches

Trump orders cull of regulations governing commercial rocket launches


The head of the FAA’s commercial spaceflight division will become a political appointee.

Birds take flight at NASA’s Kennedy Space Center in Florida in this 2010 photo. Credit: NASA

President Donald Trump signed an executive order Wednesday directing government agencies to “eliminate or expedite” environmental reviews for commercial launch and reentry licenses.

The Federal Aviation Administration (FAA), part of the Department of Transportation (DOT), grants licenses for commercial launch and reentry operations. The FAA is charged with ensuring launch and reentries comply with environmental laws, comport with US national interests, and don’t endanger the public.

The drive toward deregulation will be welcome news for companies like SpaceX, led by onetime Trump ally Elon Musk; SpaceX conducts nearly all of the commercial launches and reentries licensed by the FAA.

Deregulation time

Trump ordered Transportation Secretary Sean Duffy, who also serves as the acting administrator of NASA, to “use all available authorities to eliminate or expedite… environmental reviews for… launch and reentry licenses and permits.” In the order signed by Trump, White House officials wrote that Duffy should consult with the chair of the Council on Environmental Quality and follow “applicable law” in the regulatory cull.

The executive order also includes a clause directing Duffy to reevaluate, amend, or rescind a slate of launch-safety regulations written during the first Trump administration. The FAA published the new regulations, known as Part 450, in 2020, and they went into effect in 2021, but space companies have complained they are too cumbersome and have slowed down the license approval process.

And there’s more. Trump ordered NASA, the military, and DOT to eliminate duplicative reviews for spaceport development. This is particularly pertinent at federally owned launch ranges like those at Cape Canaveral, Florida; Vandenberg Space Force Base, California; and Wallops Island, Virginia.

The Trump administration also plans to make the head of the FAA’s Office of Commercial Space Transportation a political appointee. This office oversees commercial launch and reentry licensing and was previously led by a career civil servant. Duffy will also hire an advisor on deregulation in the commercial spaceflight industry to join DOT, and the Office of Space Commerce will be elevated to a more prominent position within the Commerce Department.

“It is the policy of the United States to enhance American greatness in space by enabling a competitive launch marketplace and substantially increasing commercial space launch cadence and novel space activities by 2030,” Trump’s executive order reads. “To accomplish this, the federal government will streamline commercial license and permit approvals for United States-based operators.”

News of the executive order was reported last month by ProPublica, which wrote that the Trump administration was circulating draft language among federal agencies to slash rules to protect the environment and the public from the dangers of rocket launches. The executive order signed by Trump and released by the White House on Wednesday confirms ProPublica’s reporting.

Jared Margolis, a senior attorney for the Center for Biological Diversity, criticized the Trump administration’s move.

“This reckless order puts people and wildlife at risk from private companies launching giant rockets that often explode and wreak devastation on surrounding areas,” Margolis said in a statement. “Bending the knee to powerful corporations by allowing federal agencies to ignore bedrock environmental laws is incredibly dangerous and puts all of us in harm’s way. This is clearly not in the public interest.”

Duffy, the first person to lead NASA and another federal department at the same time, argued the order is important to sustain economic growth in the space industry.

“By slashing red tape tying up spaceport construction, streamlining launch licenses so they can occur at scale, and creating high-level space positions in government, we can unleash the next wave of innovation,” Duffy said in a statement. “At NASA, this means continuing to work with commercial space companies and improving our spaceports’ ability to launch.”

Nipping NEPA

The executive order is emblematic of the Trump administration’s broader push to curtail environmental reviews for large infrastructure projects.

The White House has already directed federal agencies to repeal regulations enforcing the National Environmental Policy Act (NEPA), a 1969 law that requires the feds prepare environmental assessments and environmental impact statements to evaluate the effects of government actions—such as licensing approvals—on the environment.

Regarding commercial spaceflight, the White House ordered the Transportation Department to create a list of activities officials there believe are not subject to NEPA and establish exclusions under NEPA for launch and reentry licenses.

Onlookers watch from nearby sand dunes as SpaceX prepares a Starship rocket for launch from Starbase, Texas. Credit: Stephen Clark/Ars Technica

The changes to the environmental review process might be the most controversial part of Trump’s new executive order. Another section of the order—the attempt to reform or rescind the so-called Part 450 launch and reentry regulations—appears to have bipartisan support in Congress.

The FAA started implementing its new Part 450 commercial launch and reentry regulations less than five years ago after writing the rules in response to another Trump executive order signed in 2018. Part 450 was intended to streamline the launch approval process by allowing companies to submit applications for a series of launches or reentries, rather than requiring a new license for each mission.

But industry officials quickly criticized the new regulations, which they said didn’t account for rapid iteration of rockets and spacecraft like SpaceX’s enormous Starship/Super Heavy launch vehicle. The FAA approved a SpaceX request in May to increase the number of approved Starship launches from five to 25 per year from the company’s base at Starbase, Texas, near the US-Mexico border.

Last year, the FAA’s leadership under the Biden administration established a committee to examine the shortcomings of Part 450. The Republican and Democratic leaders of the House Science, Space, and Technology Committee submitted a joint request in February for the Government Accountability Office to conduct an independent review of the FAA’s Part 450 regulations.

“Reforming and streamlining commercial launch regulations and licensing is an area the Biden administration knew needed reform,” wrote Laura Forczyk, founder and executive director of the space consulting firm Astralytical, in a post on X. “However, little was done. Will more be done with this executive order? I hope so. This was needed years ago.”

Dave Cavossa, president of the Commercial Spaceflight Federation, applauded the Trump administration’s regulatory policy.

“This executive order will strengthen and grow the US commercial space industry by cutting red tape while maintaining a commitment to public safety, benefitting the American people and the US government that are increasingly reliant on space for our national and economic security,” Cavossa said in a statement.

Specific language in the new Trump executive order calls for the FAA to evaluate which regulations should be waived for hybrid launch or reentry vehicles that hold FAA airworthiness certificates, and which requirements should be relaxed for rockets with a flight termination system, an explosive charge designed to destroy a launch vehicle if it veers off its pre-approved course after liftoff. These are similar to the topics the Biden-era FAA was looking at last year.

The new Trump administration policy also seeks to limit the authority of state officials in enforcing their own environmental rules related to the construction or operation of spaceports.

This is especially relevant after the California Coastal Commission rejected a proposal by SpaceX to double its launch cadence at Vandenberg Space Force Base, a spaceport located roughly 140 miles (225 kilometers) northwest of Los Angeles. The Space Force, which owns Vandenberg and is one of SpaceX’s primary customers, backs SpaceX’s push for more launches.

Finally, the order gives the Department of Commerce responsibility for authorizing “novel space activities” such as in-space assembly and manufacturing, asteroid and planetary mining, and missions to remove space debris from orbit.

This story was updated at 12:30 am EDT on August 14 with statements from the Center for Biological Diversity and the Commercial Spaceflight Federation.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

study:-social-media-probably-can’t-be-fixed

Study: Social media probably can’t be fixed


“The [structural] mechanism producing these problematic outcomes is really robust and hard to resolve.”

Credit: Aurich Lawson | Getty Images

It’s no secret that much of social media has become profoundly dysfunctional. Rather than bringing us together into one utopian public square and fostering a healthy exchange of ideas, these platforms too often create filter bubbles or echo chambers. A small number of high-profile users garner the lion’s share of attention and influence, and the algorithms designed to maximize engagement end up merely amplifying outrage and conflict, ensuring the dominance of the loudest and most extreme users—thereby increasing polarization even more.

Numerous platform-level intervention strategies have been proposed to combat these issues, but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Co-authors Petter Törnberg and Maik Larooij of the University of Amsterdam wanted to learn more about the mechanisms that give rise to the worst aspects of social media: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. So they combined standard agent-based modeling with large language models (LLMs), essentially creating little AI personas to simulate online social media behavior. “What we found is that we didn’t need to put any algorithms in, we didn’t need to massage the model,” Törnberg told Ars. “It just came out of the baseline model, all of these dynamics.”

They then tested six different intervention strategies social scientists have proposed to counter those effects: switching to chronological or randomized feeds; inverting engagement-optimization algorithms to reduce the visibility of highly reposted sensational content; boosting the diversity of viewpoints to broaden users’ exposure to opposing political views; using “bridging algorithms” to elevate content that fosters mutual understanding rather than emotional provocation; hiding social statistics like reposts and follower counts to reduce social influence cues; and removing biographies to limit exposure to identity-based signals.

The results were far from encouraging. Only some interventions showed modest improvements. None were able to fully disrupt the fundamental mechanisms producing the dysfunctional effects. In fact, some interventions actually made the problems worse. For example, chronological ordering had the strongest effect on reducing attention inequality, but there was a tradeoff: It also intensified the amplification of extreme content. Bridging algorithms significantly weakened the link between partisanship and engagement and modestly improved viewpoint diversity, but they also increased attention inequality. Boosting viewpoint diversity had no significant impact at all.

So is there any hope of finding effective intervention strategies to combat these problematic aspects of social media? Or should we nuke our social media accounts altogether and go live in caves? Ars caught up with Törnberg for an extended conversation to learn more about these troubling findings.

Ars Technica: What drove you to conduct this study?

Petter Törnberg: For the last 20 years or so, there has been a ton of research on how social media is reshaping politics in different ways, almost always using observational data. But in the last few years, there’s been a growing appetite for moving beyond just complaining about these things and trying to see how we can be a bit more constructive. Can we identify how to improve social media and create online spaces that are actually living up to those early promises of providing a public sphere where we can deliberate and debate politics in a constructive way?

The problem with using observational data is that it’s very hard to test counterfactuals to implement alternative solutions. So one kind of method that has existed in the field is agent-based simulations and social simulations: create a computer model of the system and then run experiments on that and test counterfactuals. It is useful for looking at the structure and emergence of network dynamics.

But at the same time, those models represent agents as simple rule followers or optimizers, and that doesn’t capture anything of the cultural world or politics or human behavior. I’ve always been of the controversial opinion that those things actually matter, especially for online politics. We need to study both the structural dynamics of network formations and the patterns of cultural interaction.

Ars Technica: So you developed this hybrid model that combines LLMs with agent-based modeling.

Petter Törnberg: That’s the solution that we find to move beyond the problems of conventional agent-based modeling. Instead of having these simple rule followers or optimizers, we use AI or LLMs. It’s not a perfect solution—there are all kinds of biases and limitations—but it does represent a step forward compared to a list of if/then rules. It does capture something more of human behavior, in a more plausible way. We give them personas that we get from the American National Election Survey, which has very detailed questions about US voters and their hobbies and preferences. And then we turn that into a textual persona—your name is Bob, you’re from Massachusetts, and you like fishing—just to give them something to talk about and a little bit richer representation.

And then they see the random news of the day, and they can choose to post the news, read posts from other users, repost them, or they can choose to follow users. If they choose to follow users, they look at their previous messages, look at their user profile.

Our idea was to start with the minimal bare-bones model and then add things to try to see if we could reproduce these problematic consequences. But to our surprise, we actually didn’t have to add anything because these problematic consequences just came out of the bare bones model. This went against our expectations and also what I think the literature would say.

Ars Technica: I’m skeptical of AI in general, particularly in a research context, but there are very specific instances where it can be extremely useful. This strikes me as one of them, largely because your basic model proved to be so robust. You got the same dynamics without introducing anything extra.

Petter Törnberg: Yes. It’s been a big conversation in social science over the last two years or so. There’s a ton of interest in using LLMs for social simulation, but no one has really figured out for what or how it’s going to be helpful, or how we’re going to get past these problems of validity and so on. The kind of approach that we take in this paper is building on a tradition of complex systems thinking. We imagine very simple models of the human world and try to capture very fundamental mechanisms. It’s not really aiming to be realistic or a precise, complete model of human behavior.

I’ve been one of the more critical people of this method, to be honest. At the same time, it’s hard to imagine any other way of studying these kinds of dynamics where we have cultural and structural aspects feeding back into each other. But I still have to take the findings with a grain of salt and realize that these are models, and they’re capturing a kind of hypothetical world—a spherical cow in a vacuum. We can’t predict what someone is going to have for lunch on Tuesday, but we can capture broader mechanisms, and we can see how robust those mechanisms are. We can see whether they’re stable, unstable, which conditions they emerge in, and the general boundaries. And in this case, we found a mechanism that seems to be very robust, unfortunately.

Ars Technica: The dream was that social media would help revitalize the public sphere and support the kind of constructive political dialogue that your paper deems “vital to democratic life.” That largely hasn’t happened. What are the primary negative unexpected consequences that have emerged from social media platforms?

Petter Törnberg: First, you have echo chambers or filter bubbles. There’s broad agreement that if you want to have a functioning political conversation, functioning deliberation, you do need to do that across the partisan divide. If you’re only having a conversation with people who already agree with each other, that’s not enough. There’s debate on how widespread echo chambers are online, but it is quite established that there are a lot of spaces online that aren’t very constructive because there’s only people from one political side. So that’s one ingredient that you need. You need to have a diversity of opinion, a diversity of perspective.

The second one is that the deliberation needs to be among equals; people need to have more or less the same influence in the conversation. It can’t be completely controlled by a small, elite group of users. This is also something that people have pointed to on social media: It has a tendency of creating these influencers because attention attracts attention. And then you have a breakdown of conversation among equals.

The final one is what I call (based on Chris Bail’s book) the social media prism. The more extreme users tend to get more attention online. This is often discussed in relation to engagement algorithms, which tend to identify the type of content that most upsets us and then boost that content. I refer to it as a “trigger bubble” instead of the filter bubble. They’re trying to trigger us as a way of making us engage more so they can extract our data and keep our attention.

Ars Technica: Your conclusion is that there’s something within the structural dynamics of the network itself that’s to blame—something fundamental to the construction of social networks that makes these extremely difficult problems to solve.

Petter Törnberg: Exactly. It comes from the fact that we’re using these AI models to capture a richer representation of human behavior, which allows us to see something that wouldn’t really be possible using conventional agent-based modeling. There have been previous models looking at the growth of social networks on social media. People choose to retweet or not, and we know that action tends to be very reactive. We tend to be very emotional in that choice. And it tends to be a highly partisan and polarized type of action. You hit retweet when you see someone being angry about something, or doing something horrific, and then you share that. It’s well-known that this leads to toxic, more polarized content spreading more.

But what we find is that it’s not just that this content spreads; it also shapes the network structures that are formed. So there’s feedback between the affective, emotional action of choosing to retweet something and the network structure that emerges. And then in turn, you have a network structure that feeds back into what content you see, resulting in a toxic network. The definition of an online social network is that you have this kind of posting, reposting, and following dynamics. It’s quite fundamental to it. That alone seems to be enough to drive these negative outcomes.

Ars Technica: I was frankly surprised at the ineffectiveness of the various intervention strategies you tested. But it does seem to explain the Bluesky conundrum. Bluesky has no algorithm, for example, yet the same dynamics still seem to emerge. I think Bluesky’s founders genuinely want to avoid those dysfunctional issues, but they might not succeed, based on this paper. Why are such interventions so ineffective? 

Petter Törnberg: We’ve been discussing whether these things are due to the platforms doing evil things with algorithms or whether we as users are choosing that we want a bad environment. What we’re saying is that it doesn’t have to be either of those. This is often the unintended outcomes from interactions based on underlying rules. It’s not necessarily because the platforms are evil; it’s not necessarily because people want to be in toxic, horrible environments. It just follows from the structure that we’re providing.

We tested six different interventions. Google has been trying to make social media less toxic and recently released a newsfeed algorithm based on the content of the text. So that’s one example. We’re also trying to do more subtle interventions because often you can find a certain way of nudging the system so it switches over to healthier dynamics. Some of them have moderate or slightly positive effects on one of the attributes, but then they often have negative effects on another attribute, or they have no impact whatsoever.

I should say also that these are very extreme interventions in the sense that, if you depended on making money on your platform, you probably don’t want to implement them because it probably makes it really boring to use. It’s like showing the least influential users, the least retweeted messages on the platform. Even so, it doesn’t really make a difference in changing the basic outcomes. What we take from that is that the mechanism producing these problematic outcomes is really robust and hard to resolve given the basic structure of these platforms.

Ars Technica: So how might one go about building a successful social network that doesn’t have these problems? 

Petter Törnberg: There are several directions where you could imagine going, but there’s also the constraint of what is popular use. Think back to the early Internet, like ICQ. ICQ had this feature where you could just connect to a random person. I loved it when I was a kid. I would talk to random people all over the world. I was 12 in the countryside on a small island in Sweden, and I was talking to someone from Arizona, living a different life. I don’t know how successful that would be these days, the Internet having become a lot less innocent than it was.

For instance, we can focus on the question of inequality of attention, a very well-studied and robust feature of these networks. I personally thought we would be able to address it with our interventions, but attention draws attention, and this leads to a power law distribution, where 1 percent [of users] dominates the entire conversation. We know the conditions under which those power laws emerge. This is one of the main outcomes of social network dynamics: extreme inequality of attention.

But in social science, we always teach that everything is a normal distribution. The move from studying the conventional social world to studying the online social world means that you’re moving from these nice normal distributions to these horrible power law distributions. Those are the outcomes of having social networks where the probability of connecting to someone depends on how many previous connections they have. If we want to get rid of that, we probably have to move away from the social network model and have some kind of spatial model or group-based model that makes things a little bit more local, a little bit less globally interconnected.
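The mechanism Törnberg describes, where the probability of gaining a connection is proportional to the connections a user already has, is preferential attachment. The toy simulation below is an illustrative sketch of that dynamic, not the model from the paper; the function name, parameters, and the follower-count bookkeeping are all invented for this example.

```python
import random

# Minimal preferential-attachment sketch: each new "user" follows one
# existing user, chosen with probability proportional to that user's
# current follower count (every user starts with a baseline of 1 so
# newcomers can be picked at all).

def simulate_attention(n_users=10_000, seed=42):
    rng = random.Random(seed)
    followers = [1, 1]  # follower counts, seeded with two users
    targets = [0, 1]    # flat list: user i appears followers[i] times
    for new_user in range(2, n_users):
        chosen = rng.choice(targets)  # proportional to follower count
        followers[chosen] += 1
        targets.append(chosen)
        followers.append(1)           # the newcomer's own baseline
        targets.append(new_user)
    return followers

followers = sorted(simulate_attention(), reverse=True)
top_1pct = sum(followers[: len(followers) // 100])
share = top_1pct / sum(followers)
print(f"Top 1% of users hold {share:.0%} of all attention")
```

Even this stripped-down version produces the heavy-tailed distribution the interview describes: the top 1 percent of users end up with an order of magnitude more than their proportional share, while a uniform "coffee house" model would give them roughly 1 percent.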

Ars Technica: It sounds like you’d want to avoid those big influential nodes that play such a central role in a large, complex global network. 

Petter Törnberg: Exactly. I think that having those global networks and structures fundamentally undermines the possibility of the kind of conversations that political scientists and political theorists traditionally had in mind when they discussed the public square. They were talking about social interaction in a coffee house or a tea house, or reading groups and so on. People thought the Internet was going to be precisely that. It’s very much not that. The dynamics are fundamentally different because of those structural differences. We shouldn’t expect to be able to get a coffee house deliberation structure when we have a global social network where everyone is connected to everyone. It is difficult to imagine a functional politics building on that.

Ars Technica: I want to come back to your comment on the power law distribution, how 1 percent of people dominate the conversation, because I think that is something that most users routinely forget. The horrible things we see people say on the Internet are not necessarily indicative of the vast majority of people in the world. 

Petter Törnberg: For sure. That is capturing two aspects. The first is the social media prism, where the perspective we get of politics when we see it through the lens of social media is fundamentally different from what politics actually is. It seems much more toxic, much more polarized. People seem a little bit crazier than they really are. It’s a very well-documented aspect of the rise of polarization: People have a false perception of the other side. Most people have fairly reasonable and fairly similar opinions. The actual polarization is lower than the perceived polarization. And that arguably is a result of social media, how it misrepresents politics.

Petter Törnberg: And then we see this very small group of users that become very influential, often by being a little bit crazy and outrageous. Social media creates an incentive structure that is central to reshaping not just how we see politics but what politics actually is, including which politicians become powerful and influential, because it controls the distribution of what is arguably the most valuable form of capital of our era: attention. For politicians especially, being able to command attention is the most important thing. And since social media creates the conditions for who gets attention, it creates an incentive structure in which certain personalities work better, in a way that’s just fundamentally different from previous eras.

Ars Technica: There are those who have sworn off social media, but it seems like simply not participating isn’t really a solution, either.

Petter Törnberg: No. First, even if you only read, say, The New York Times, that newspaper is still reshaped by what works on social media, the social media logic. I had a student who did a little project this last year showing that as social media became more influential, the headlines of The New York Times became more clickbaity and adapted to the style of what worked on social media. So conventional media and our very culture are being transformed.

But more than that, as I was just saying, it’s the type of politicians, it’s the type of people who are empowered—it’s the entire culture. Those are the things that are being transformed by the power of the incentive structures of social media. It’s not like, “These are things that are happening on social media, and this is the rest of the world.” It’s all entangled, and somehow social media has become the cultural engine that is shaping our politics and society in very fundamental ways. Unfortunately.

Ars Technica: I usually like to say that technological tools are fundamentally neutral and can be used for good or ill, but this time I’m not so sure. Is there any hope of finding a way to take the toxic and turn it into a net positive?

Petter Törnberg: What I would say to that is that we are at a crisis point with the rise of LLMs and AI. I have a hard time seeing the contemporary model of social media continuing to exist under the weight of LLMs and their capacity to mass-produce false information or information that optimizes these social network dynamics. We already see a lot of actors, drawn by the monetization of platforms like X, that are using AI to produce content that just seeks to maximize attention. As AI models become more powerful, that misinformation, often highly polarized content, is going to take over. I have a hard time seeing the conventional social media models surviving that.

We’ve already seen the process of people retreating in part to credible brands and seeking to have gatekeepers. Young people, especially, are going into WhatsApp groups and other closed communities. Of course, there’s misinformation from social media leaking into those chats also. But these kinds of crisis points at least have the hope that we’ll see a changing situation. I wouldn’t bet that it’s a situation for the better. You wanted me to sound positive, so I tried my best. Maybe it’s actually “good riddance.”

Ars Technica: So let’s just blow up all the social media networks. It still won’t be better, but at least we’ll have different problems.

Petter Törnberg: Exactly. We’ll find a new ditch.

DOI: arXiv, 2025. 10.48550/arXiv.2508.03385 (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
