

Ice discs slingshot across a metal surface all on their own


VA Tech experiment was inspired by Death Valley’s mysterious “sailing stones” at Racetrack Playa.

Graduate student Jack Tapocik sets up ice on an engineered surface in the VA Tech lab of Jonathan Boreyko. Credit: Alex Parrish/Virginia Tech

Scientists have figured out how to make frozen discs of ice self-propel across a patterned metal surface, according to a new paper published in the journal ACS Applied Materials & Interfaces. It’s the latest breakthrough to come out of the Virginia Tech lab of mechanical engineer Jonathan Boreyko.

A few years ago, Boreyko’s lab experimentally demonstrated a three-phase Leidenfrost effect in water vapor, liquid water, and ice. The Leidenfrost effect is what happens when you dash a few drops of water onto a very hot, sizzling skillet. The drops levitate, sliding around the pan with wild abandon. If the surface is at least 400° Fahrenheit (well above the boiling point of water), cushions of water vapor, or steam, form underneath them, keeping them levitated. The effect also works with other liquids, including oils and alcohol, but the temperature at which it manifests will be different.

Boreyko’s lab discovered that this effect can also be achieved in ice simply by placing a thin, flat disc of ice on a heated aluminum surface. When the plate was heated above 150° C (302° F), the ice did not levitate on a vapor layer the way liquid water does. Instead, levitation of the ice required a significantly higher threshold of 550° C (1,022° F). Unless that critical threshold is reached, the meltwater below the ice just keeps boiling in direct contact with the surface. Cross that critical point and you will get a three-phase Leidenfrost effect.

The key is a temperature differential in the meltwater just beneath the ice disc. The bottom of the meltwater is boiling, but the top of the meltwater sticks to the ice. It takes a lot of energy to maintain such an extreme difference in temperature, and doing so consumes most of the heat from the aluminum surface, which is why it’s harder to achieve levitation of an ice disc. Ice can suppress the Leidenfrost effect even at very high temperatures (up to 550° C), which means that using ice particles instead of liquid droplets would be better for many applications involving spray quenching: rapid cooling in nuclear power plants, firefighting, or heat quenching when shaping metals, for example.
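
For a rough sense of the heat that gradient consumes, a back-of-the-envelope conduction estimate helps. The sketch below assumes an illustrative meltwater film thickness of about 100 microns; it is not a figure from the paper.

```python
# Back-of-the-envelope: conductive heat flux needed to hold a 0-100 C
# gradient across the meltwater film under an ice disc (Fourier's law).
# The film thickness is an ASSUMED illustrative value, not from the paper.

k_water = 0.68            # W/(m*K), thermal conductivity of water near boiling
delta_T = 100.0           # K, from the ice interface (0 C) to boiling (100 C)
film_thickness = 100e-6   # m, assumed ~100-micron meltwater film

q = k_water * delta_T / film_thickness   # W/m^2
print(f"Heat flux through the film: {q / 1e3:.0f} kW/m^2")
# ~680 kW/m^2 of sustained heat drain -- which is why the plate must be far
# hotter (above ~550 C) before a vapor layer can form and levitate the ice.
```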

This time around, Boreyko et al. have turned their attention to what the authors term “a more viscous analog” to a Leidenfrost ratchet, a form of droplet self-propulsion. “What’s different here is we’re no longer trying to levitate or even boil,” Boreyko told Ars. “Now we’re asking a more straightforward question: Is there a way to make ice move across the surface directionally as it is melting? Regular melting at room temperature. We’re not boiling, we’re not levitating, we’re not Leidenfrosting. We just want to know, can we make ice shoot across the surface if we design a surface in the right way?”

Mysterious moving boulders

The researchers were inspired by Death Valley’s famous “sailing stones” on Racetrack Playa. Watermelon-sized boulders are strewn throughout the dry lake bed, and they leave trails in the cracked earth as they slowly migrate a couple of hundred meters each season. Scientists didn’t figure out what was happening until 2014. Although co-author Ralph Lorenz (Johns Hopkins University) admitted he thought theirs would be “the most boring experiment ever” when they first set it up in 2011, two years later, the boulders did indeed begin to move while the playa was covered with a pond of water a few inches deep.

So Lorenz and his co-authors were finally able to identify the mechanism. The ground is too hard to absorb rainfall, and that water freezes when the temperature drops. When temperatures rise above freezing again, the ice starts to melt, creating ice rafts floating on the meltwater. And when the winds are sufficiently strong, they cause the ice rafts to drift along the surface.


A sailing stone at Death Valley’s Racetrack Playa. Credit: Tahoenathan/CC BY-SA 3.0

“Nature had to have wind blowing to kind of push the boulder and the ice along the meltwater that was beneath the ice,” said Boreyko. “We thought, what if we could have a similar idea of melting ice moving directionally but use an engineered structure to make it happen spontaneously so we don’t have to have energy or wind or anything active to make it work?”

The team made their ice discs by pouring distilled water into thermally insulated polycarbonate Petri dishes. This resulted in bottom-up freezing, which minimizes air bubbles in the ice. They then milled asymmetric grooves into uncoated aluminum plates in a herringbone pattern—essentially creating arrowhead-shaped channels—and then bonded them to hot plates heated to the desired temperature. Each ice disc was placed on the plate with rubber tongs, and the experiments were filmed from various angles to fully capture the disc behavior.

The herringbone pattern is the key. “The directionality is what really pushes the water,” Jack Tapocik, a graduate student in Boreyko’s lab, told Ars. “The herringbone doesn’t allow for water to flow backward, the water has to go forward, and that basically pushes the water and the ice together forward. We don’t have a treated surface, so the water just sits on top and the ice all moves as one unit.”

Boreyko draws an analogy to tubing on a river, except it’s the directional channels rather than gravity causing the flow. “You can see [in the video below] how it just follows the meltwater,” he said. “This is your classic entrainment mechanism where if the water flows that way and you’re floating on the water, you’re going to go the same way, too. It’s basically the same idea as what makes a Leidenfrost droplet also move one way: It has a vapor flow underneath. The only difference is that was a liquid drifting on a vapor flow, whereas now we have a solid drifting on a liquid flow. The densities and viscosities are different, but the idea is the same: You have a more dense phase that is drifting on the top of a lighter phase that is flowing directionally.”

Jonathan Boreyko/Virginia Tech

Next, the team repeated the experiment, this time coating the aluminum herringbone surface with water-repellent spray, hoping to speed up the disc propulsion. Instead, they found that the disc ended up sticking to the treated surface for a while before suddenly slingshotting across the metal plate.

“It’s a totally different concept with totally different physics behind it, and it’s so much cooler,” said Tapocik. “As the ice is melting on these coated surfaces, the water just doesn’t want to sit within the channels. It wants to sit on top because of the [hydrophobic] coating we have on there. The ice is directly sticking now to the surface, unlike before when it was floating. You get this elongated puddle in front. The easiest place [for the ice] to be is in the center of this giant, long puddle. So it re-centers, and that’s what moves it forward like a slingshot.”

Essentially, the water keeps expanding asymmetrically, and that difference in shape gives rise to a mismatch in surface tension because the amount of force that surface tension exerts on a body depends on curvature. The flatter puddle shape in front has less curvature than the smaller shape in back. As the video below shows, when the mismatch in surface tension becomes sufficiently strong, “It just rips the ice off the surface and flings it along,” said Boreyko. “In the future, we could try putting little things like magnets on top of the ice. We could probably put a boulder on it if we wanted to. The Death Valley effect would work with or without a boulder because it’s the floating ice raft that moves with the wind.”
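
The Young-Laplace relation puts rough numbers on that curvature mismatch. The radii in the sketch below are illustrative assumptions, not measurements from the study:

```python
# Illustrative Young-Laplace estimate of the slingshot's driving pressure.
# Both radii of curvature are ASSUMED values: a gently curved front puddle
# and a tightly curved rear meniscus.

gamma = 0.075    # N/m, surface tension of water near 0 C
R_front = 20e-3  # m, assumed radius of the flat, elongated front puddle
R_back = 2e-3    # m, assumed radius of the small, curved rear edge

# Laplace pressure scales as surface tension times curvature (1/R),
# so the mismatch produces a net pressure toward the flatter front.
delta_P = gamma * (1 / R_back - 1 / R_front)
print(f"Pressure imbalance: {delta_P:.0f} Pa toward the flatter puddle")
# A few tens of pascals over the disc's footprint -- enough, once adhesion
# fails, to fling the disc forward.
```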

Jonathan Boreyko/Virginia Tech

One potential application is energy harvesting. For example, one could pattern the metal surface in a circle rather than a straight line so the melting ice disc would continually rotate. Put magnets on the disc, and they would also rotate and generate power. One might even attach a turbine or gear to the rotating disc.

The effect might also provide a more energy-efficient means of defrosting, a longstanding research interest for Boreyko. “If you had a herringbone surface with a frosting problem, you could melt the frost, even partially, and use these directional flows to slingshot the ice off the surface,” he said. “That’s both faster and uses less energy than having to entirely melt the ice into pure water. We’re looking at potentially over a tenfold reduction in heating requirements if you only have to partially melt the ice.”

That said, “Most practical applications don’t start from knowing the application beforehand,” said Boreyko. “It starts from ‘Oh, that’s a really cool phenomenon. What’s going on here?’ It’s only downstream from that it turns out you can use this for better defrosting of heat exchangers for heat pumps. I just think it’s fun to say that we can make a little melting disc of ice very suddenly slingshot across the table. It’s a neat way to grab your attention and think more about melting and ice and how all this stuff works.”

DOI: ACS Applied Materials & Interfaces, 2025. 10.1021/acsami.5c08993 (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Misunderstood “photophoresis” effect could loft metal sheets to exosphere


Photophoresis can generate a tiny bit of lift without any moving parts.

Image of a wooden stand holding a sealed glass bulb with a spinning set of vanes, each of which has a lit and dark side.

Most people would recognize the device in the image above, although they probably wouldn’t know it by its formal name: the Crookes radiometer. As its name implies, placing the radiometer in light produces a measurable change: the blades start spinning.

Unfortunately, many people misunderstand the physics of its operation (which we’ll return to shortly). The actual forces that drive the blades to spin, called photophoresis, can act on a variety of structures as long as they’re placed in a sufficiently low-density atmosphere. Now, a team of researchers has figured out that it may be possible to use the photophoretic effect to loft thin sheets of metal into the upper atmosphere of Earth and other planets. While their idea is to use it to send probes to the portion of the atmosphere that’s too high for balloons and too low for satellites, they have tested some working prototypes a bit closer to the Earth’s surface.

Photophoresis

It’s quite common—and quite wrong—to see explanations of the Crookes radiometer that invoke radiation pressure. Supposedly, the dark sides of the blades absorb more photons, each of which carries a tiny bit of momentum, giving the dark side of the blades a consistent push. The problem with this explanation is that photons bounce off the silvery side, which imparts even more momentum. If the device were spinning due to radiation pressure, it would turn in the opposite direction from the one it actually does.
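
A quick momentum-balance check, sketched below with an assumed sunlight-level irradiance, shows both why the direction is wrong and why the force is far too small:

```python
# Radiation pressure on the two faces of a radiometer vane. The irradiance
# is an assumed sunlight-level value; the point is the factor of 2.

I = 1000.0   # W/m^2, roughly full sunlight
c = 3.0e8    # m/s, speed of light

p_dark = I / c        # absorbing face: each photon deposits momentum E/c
p_shiny = 2 * I / c   # reflecting face: photons bounce back, transferring 2E/c

print(f"Dark face:  {p_dark * 1e6:.1f} micropascals")
print(f"Shiny face: {p_shiny * 1e6:.1f} micropascals")
# The silvered face feels twice the push, so radiation pressure alone would
# spin the vanes the wrong way -- and at a few micropascals, far too feebly.
```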

An excess of absorbed photons on the dark side is key to understanding how it works, though. Photophoresis operates through the temperature difference that develops between the warm, light-absorbing dark side of the blade and the cooler silvered side.

Any gas molecule that bumps into the dark side will likely pick up some of the excess thermal energy from it and move away from the blade faster than it arrived. At the sorts of atmospheric pressures we normally experience, these molecules don’t get very far before they bump into other gas molecules, which keeps any significant differences from developing.

But a Crookes radiometer is in a sealed glass container with a far lower air pressure. This allows the gas molecules to speed off much farther from the dark surface of the blade before they run into anything, creating an area of somewhat lower pressure at its surface. That causes gas near the surface of the shiny side to rush around and fill this lower-pressure area, imparting the force that starts the blades turning.
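
The controlling quantity is the mean free path of air molecules, or how far one travels before colliding with another, which grows as pressure drops. A standard kinetic-theory estimate (using an assumed effective molecular diameter for air) makes the contrast clear:

```python
import math

# Mean free path of air molecules from kinetic theory:
#   lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
# The effective molecular diameter d is an assumed textbook value for air.

k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K, room temperature
d = 3.7e-10          # m, effective diameter of an air molecule

def mean_free_path(pressure_pa):
    return k_B * T / (math.sqrt(2) * math.pi * d**2 * pressure_pa)

print(f"Sea level (101.3 kPa): {mean_free_path(101325) * 1e9:.0f} nm")
print(f"Radiometer bulb (~1 Pa): {mean_free_path(1.0) * 1e3:.1f} mm")
# ~70 nm between collisions at sea level versus millimeters at ~1 Pa,
# which is what lets the pressure asymmetry develop near the blade.
```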

It’s pretty impressively inefficient in that sort of configuration, though. So people have spent a lot of time trying to design alternative configurations that can generate a bit more force. One idea with a lot of research traction is a setup that involves two thin metal sheets—one light, one dark—arranged parallel to each other. Both sheets would be heavily perforated to cut down on weight. And a subset of them would have a short pipe connecting holes on the top and bottom sheet. (This has picked up the nickname “nanocardboard.”)

These pipes would serve several purposes. One is to simply link the two sheets into a single unit. Another is to act as an insulator, keeping heat from moving from the dark sheet to the light one, and thus enhancing the temperature gradient. Finally, they provide a direct path for air to move from the top of the light-colored sheet to the bottom of the dark one, giving a bit of directed thrust to help keep the sheets aloft.

Optimization

As you might imagine, there are a lot of free parameters you can tweak: the size of the gap between the sheets, the density of perforations in them, the number of those holes that are connected by a pipe, and so on. So a small team of researchers developed a system to model different configurations and attempt to optimize for lift. (We’ll get to their motivations for doing so a bit later.)

Starting with a disk of nanocardboard, “The inputs to the model are the geometric, optical and thermal properties of the disk, ambient gas conditions, and external radiative heat fluxes on the disk,” as the researchers describe it. “The outputs are the conductive heat fluxes on the two membranes, the membrane temperatures, and the net photophoretic lofting force on the structure.” In general, the ambient gas conditions needed to generate lift are similar to the ones inside the Crookes radiometer: well below the air pressure at sea level.

The model suggested three trends that should influence any final designs. The first is that the density of perforations involves a balance: at relatively low elevations (meaning a denser atmosphere), many perforations increase the stress on large sheets, but they decrease the stress for small items at high elevations. The second is that lift per unit area tends to drop as sheets get larger, because larger sheets are more likely to equilibrate to the prevailing temperatures: a square millimeter of nanocardboard produces over 10 times more lift per unit area than a 10-square-centimeter piece of the same material.

Finally, the researchers calculate that the lift is at its maximum in the mesosphere, the area just above the stratosphere (50–100 kilometers above Earth’s surface).

Light and lifting

The researchers then built a few sheets of nanocardboard to test the output of their model. The actual products, primarily made of chromium, aluminum, and aluminum oxide, were incredibly light, weighing only a gram per square meter of material. When illuminated by a laser or white LED, they generated measurable force on a testing device, provided the atmosphere was kept sufficiently sparse. With an exposure equivalent to sunlight, the devices generated more lift than their own weight.
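
That last claim is easy to put in numbers; the mass figure comes from the article, and the rest is unit conversion:

```python
# How much lift does "a gram per square meter" actually demand?
# The areal density is from the article; the rest is unit conversion.

areal_density = 1e-3   # kg/m^2, reported mass of the nanocardboard sheets
g = 9.81               # m/s^2

weight_per_area = areal_density * g
print(f"Lift needed to hover: {weight_per_area * 1e3:.1f} mN/m^2")
# Under sunlight-equivalent illumination the measured lift reportedly
# exceeded this ~9.8 mN/m^2 threshold -- i.e., more lift than weight.
```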

It’s a really nice demonstration that we can take a relatively obscure and weak physical effect and design devices that can levitate in the upper atmosphere, powered by nothing more than sunlight—which is pretty cool.

But the researchers have a goal beyond that. The mesosphere turns out to be a really difficult part of the atmosphere to study. It’s not dense enough to support balloons or aircraft, but it still has enough gas to make quick work of any satellites. So the researchers really want to turn one of these devices into an instrument-carrying aircraft. Unfortunately, that would mean adding the structural components needed to hold instruments, along with the instruments themselves. And even in the mesosphere, where lift is optimal, these things do not generate much in the way of lift.

Plus, there’s the issue of getting them there, given that they won’t generate enough lift in the lower atmosphere, so they’ll have to be carried into the upper stratosphere by something else and then be released gently enough to not damage their fragile structure. And then, unless you’re lofting them during the polar summer, they will likely come floating back down at night.

None of this is to say this is an impossible dream. But there are definitely a lot of very large hurdles between the work and practical applications on Earth—much less on Mars, where the authors suggest the system could also be used to explore the mesosphere. But even if that doesn’t end up being realistic, this is still a pretty neat bit of physics.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Trump orders cull of regulations governing commercial rocket launches


The head of the FAA’s commercial spaceflight division will become a political appointee.

Birds take flight at NASA’s Kennedy Space Center in Florida in this 2010 photo. Credit: NASA

President Donald Trump signed an executive order Wednesday directing government agencies to “eliminate or expedite” environmental reviews for commercial launch and reentry licenses.

The Federal Aviation Administration (FAA), part of the Department of Transportation (DOT), grants licenses for commercial launch and reentry operations. The FAA is charged with ensuring launch and reentries comply with environmental laws, comport with US national interests, and don’t endanger the public.

The drive toward deregulation will be welcome news for companies like SpaceX, led by onetime Trump ally Elon Musk; SpaceX conducts nearly all of the commercial launches and reentries licensed by the FAA.

Deregulation time

Trump ordered Transportation Secretary Sean Duffy, who also serves as the acting administrator of NASA, to “use all available authorities to eliminate or expedite… environmental reviews for… launch and reentry licenses and permits.” In the order signed by Trump, White House officials wrote that Duffy should consult with the chair of the Council on Environmental Quality and follow “applicable law” in the regulatory cull.

The executive order also includes a clause directing Duffy to reevaluate, amend, or rescind a slate of launch-safety regulations written during the first Trump administration. The FAA published the new regulations, known as Part 450, in 2020, and they went into effect in 2021, but space companies have complained they are too cumbersome and have slowed down the license approval process.

And there’s more. Trump ordered NASA, the military, and DOT to eliminate duplicative reviews for spaceport development. This is particularly pertinent at federally owned launch ranges like those at Cape Canaveral, Florida; Vandenberg Space Force Base, California; and Wallops Island, Virginia.

The Trump administration also plans to make the head of the FAA’s Office of Commercial Space Transportation a political appointee. This office oversees commercial launch and reentry licensing and was previously led by a career civil servant. Duffy will also hire an advisor on deregulation in the commercial spaceflight industry to join DOT, and the Office of Space Commerce will be elevated to a more prominent position within the Commerce Department.

“It is the policy of the United States to enhance American greatness in space by enabling a competitive launch marketplace and substantially increasing commercial space launch cadence and novel space activities by 2030,” Trump’s executive order reads. “To accomplish this, the federal government will streamline commercial license and permit approvals for United States-based operators.”

News of the executive order was reported last month by ProPublica, which wrote that the Trump administration was circulating draft language among federal agencies to slash rules to protect the environment and the public from the dangers of rocket launches. The executive order signed by Trump and released by the White House on Wednesday confirms ProPublica’s reporting.

Jared Margolis, a senior attorney for the Center for Biological Diversity, criticized the Trump administration’s move.

“This reckless order puts people and wildlife at risk from private companies launching giant rockets that often explode and wreak devastation on surrounding areas,” Margolis said in a statement. “Bending the knee to powerful corporations by allowing federal agencies to ignore bedrock environmental laws is incredibly dangerous and puts all of us in harm’s way. This is clearly not in the public interest.”

Duffy, the first person to lead NASA and another federal department at the same time, argued the order is important to sustain economic growth in the space industry.

“By slashing red tape tying up spaceport construction, streamlining launch licenses so they can occur at scale, and creating high-level space positions in government, we can unleash the next wave of innovation,” Duffy said in a statement. “At NASA, this means continuing to work with commercial space companies and improving our spaceports’ ability to launch.”

Nipping NEPA

The executive order is emblematic of the Trump administration’s broader push to curtail environmental reviews for large infrastructure projects.

The White House has already directed federal agencies to repeal regulations enforcing the National Environmental Policy Act (NEPA), a 1969 law that requires the federal government to prepare environmental assessments and environmental impact statements to evaluate the effects of government actions—such as licensing approvals—on the environment.

Regarding commercial spaceflight, the White House ordered the Transportation Department to create a list of activities officials there believe are not subject to NEPA and establish exclusions under NEPA for launch and reentry licenses.

Onlookers watch from nearby sand dunes as SpaceX prepares a Starship rocket for launch from Starbase, Texas. Credit: Stephen Clark/Ars Technica

The changes to the environmental review process might be the most controversial part of Trump’s new executive order. Another section of the order—the attempt to reform or rescind the so-called Part 450 launch and reentry regulations—appears to have bipartisan support in Congress.

The FAA started implementing its new Part 450 commercial launch and reentry regulations less than five years ago after writing the rules in response to another Trump executive order signed in 2018. Part 450 was intended to streamline the launch approval process by allowing companies to submit applications for a series of launches or reentries, rather than requiring a new license for each mission.

But industry officials quickly criticized the new regulations, which they said didn’t account for rapid iteration of rockets and spacecraft like SpaceX’s enormous Starship/Super Heavy launch vehicle. The FAA approved a SpaceX request in May to increase the number of approved Starship launches from five to 25 per year from the company’s base at Starbase, Texas, near the US-Mexico border.

Last year, the FAA’s leadership under the Biden administration established a committee to examine the shortcomings of Part 450. The Republican and Democratic leaders of the House Science, Space, and Technology Committee submitted a joint request in February for the Government Accountability Office to conduct an independent review of the FAA’s Part 450 regulations.

“Reforming and streamlining commercial launch regulations and licensing is an area the Biden administration knew needed reform,” wrote Laura Forczyk, founder and executive director of the space consulting firm Astralytical, in a post on X. “However, little was done. Will more be done with this executive order? I hope so. This was needed years ago.”

Dave Cavossa, president of the Commercial Spaceflight Federation, applauded the Trump administration’s regulatory policy.

“This executive order will strengthen and grow the US commercial space industry by cutting red tape while maintaining a commitment to public safety, benefitting the American people and the US government that are increasingly reliant on space for our national and economic security,” Cavossa said in a statement.

Specific language in the new Trump executive order calls for the FAA to evaluate which regulations should be waived for hybrid launch or reentry vehicles that hold FAA airworthiness certificates, and which requirements should be remitted for rockets with a flight termination system, an explosive charge designed to destroy a launch vehicle if it veers off its pre-approved course after liftoff. These are similar to the topics the Biden-era FAA was looking at last year.

The new Trump administration policy also seeks to limit the authority of state officials in enforcing their own environmental rules related to the construction or operation of spaceports.

This is especially relevant after the California Coastal Commission rejected a proposal by SpaceX to double its launch cadence at Vandenberg Space Force Base, a spaceport located roughly 140 miles (225 kilometers) northwest of Los Angeles. The Space Force, which owns Vandenberg and is one of SpaceX’s primary customers, backs SpaceX’s push for more launches.

Finally, the order gives the Department of Commerce responsibility for authorizing “novel space activities” such as in-space assembly and manufacturing, asteroid and planetary mining, and missions to remove space debris from orbit.

This story was updated at 12:30 am EDT on August 14 with statements from the Center for Biological Diversity and the Commercial Spaceflight Federation.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Study: Social media probably can’t be fixed


“The [structural] mechanism producing these problematic outcomes is really robust and hard to resolve.”

Credit: Aurich Lawson | Getty Images


It’s no secret that much of social media has become profoundly dysfunctional. Rather than bringing us together into one utopian public square and fostering a healthy exchange of ideas, these platforms too often create filter bubbles or echo chambers. A small number of high-profile users garner the lion’s share of attention and influence, and the algorithms designed to maximize engagement end up merely amplifying outrage and conflict, ensuring the dominance of the loudest and most extreme users—thereby increasing polarization even more.

Numerous platform-level intervention strategies have been proposed to combat these issues, but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Co-authors Petter Törnberg and Maik Larooij of the University of Amsterdam wanted to learn more about the mechanisms that give rise to the worst aspects of social media: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. So they combined standard agent-based modeling with large language models (LLMs), essentially creating little AI personas to simulate online social media behavior. “What we found is that we didn’t need to put any algorithms in, we didn’t need to massage the model,” Törnberg told Ars. “It just came out of the baseline model, all of these dynamics.”

They then tested six different intervention strategies that social scientists have proposed to counter those effects: switching to chronological or randomized feeds; inverting engagement-optimization algorithms to reduce the visibility of highly reposted sensational content; boosting the diversity of viewpoints to broaden users’ exposure to opposing political views; using “bridging algorithms” to elevate content that fosters mutual understanding rather than emotional provocation; hiding social statistics like reposts and follower counts to reduce social influence cues; and removing biographies to limit exposure to identity-based signals.
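
To make two of these concrete, here is a minimal sketch, with a hypothetical post structure rather than the authors' simulation code, of a chronological feed versus an engagement-inverted one:

```python
from dataclasses import dataclass

# Hypothetical post structure for illustration; this is NOT the authors'
# simulation code, just a sketch of two of the interventions tested.

@dataclass
class Post:
    author: str
    text: str
    timestamp: float   # seconds since simulation start
    reposts: int       # how often the post has been reshared

def chronological_feed(posts):
    """Newest first; ignores engagement signals entirely."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_inverted_feed(posts):
    """Down-ranks viral content: least-reposted items surface first."""
    return sorted(posts, key=lambda p: p.reposts)

posts = [
    Post("a", "calm policy explainer", timestamp=5.0, reposts=2),
    Post("b", "outrage bait", timestamp=10.0, reposts=950),
    Post("c", "local news item", timestamp=8.0, reposts=14),
]
print([p.author for p in chronological_feed(posts)])        # ['b', 'c', 'a']
print([p.author for p in engagement_inverted_feed(posts)])  # ['a', 'c', 'b']
```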

The results were far from encouraging. Only some interventions showed modest improvements. None were able to fully disrupt the fundamental mechanisms producing the dysfunctional effects. In fact, some interventions actually made the problems worse. For example, chronological ordering had the strongest effect on reducing attention inequality, but there was a tradeoff: It also intensified the amplification of extreme content. Bridging algorithms significantly weakened the link between partisanship and engagement and modestly improved viewpoint diversity, but it also increased attention inequality. Boosting viewpoint diversity had no significant impact at all.

So is there any hope of finding effective intervention strategies to combat these problematic aspects of social media? Or should we nuke our social media accounts altogether and go live in caves? Ars caught up with Törnberg for an extended conversation to learn more about these troubling findings.

Ars Technica: What drove you to conduct this study?

Petter Törnberg: For the last 20 years or so, there has been a ton of research on how social media is reshaping politics in different ways, almost always using observational data. But in the last few years, there’s been a growing appetite for moving beyond just complaining about these things and trying to see how we can be a bit more constructive. Can we identify how to improve social media and create online spaces that are actually living up to those early promises of providing a public sphere where we can deliberate and debate politics in a constructive way?

The problem with using observational data is that it’s very hard to test counterfactuals to implement alternative solutions. So one kind of method that has existed in the field is agent-based simulations and social simulations: create a computer model of the system and then run experiments on that and test counterfactuals. It is useful for looking at the structure and emergence of network dynamics.

But at the same time, those models represent agents as simple rule followers or optimizers, and that doesn’t capture anything of the cultural world or politics or human behavior. I’ve always been of the controversial opinion that those things actually matter,  especially for online politics. We need to study both the structural dynamics of network formations and the patterns of cultural interaction.

Ars Technica: So you developed this hybrid model that combines LLMs with agent-based modeling.

Petter Törnberg: That’s the solution that we find to move beyond the problems of conventional agent-based modeling. Instead of having this simple rule of followers or optimizers, we use AI or LLMs. It’s not a perfect solution—there are all kinds of biases and limitations—but it does represent a step forward compared to a list of if/then rules. It does have something more of capturing human behavior in a more plausible way. We give them personas that we get from the American National Election Survey, which has very detailed questions about US voters and their hobbies and preferences. And then we turn that into a textual persona—your name is Bob, you’re from Massachusetts, and you like fishing—just to give them something to talk about and a little bit richer representation.

And then they see the random news of the day, and they can choose to post the news, read posts from other users, repost them, or they can choose to follow users. If they choose to follow users, they look at their previous messages, look at their user profile.
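
In code, one agent "turn" of the kind described might look like the hypothetical sketch below; the ask_llm stub, the persona fields, and the action format are all stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of a single agent "turn" in a simulation like the one
# described above. `ask_llm`, the persona fields, and the action format are
# all stand-ins, NOT the authors' implementation.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a chat-completion API call")

persona = {"name": "Bob", "state": "Massachusetts",
           "hobby": "fishing", "party": "independent"}

def agent_step(persona, news_of_the_day, feed):
    prompt = (
        f"You are {persona['name']} from {persona['state']}; you enjoy "
        f"{persona['hobby']} and lean {persona['party']}.\n"
        f"Today's news: {news_of_the_day}\n"
        "Your feed:\n" + "\n".join(f"- {post}" for post in feed) + "\n"
        "Choose ONE action: POST <text>, REPOST <number>, or FOLLOW <user>."
    )
    return ask_llm(prompt)   # the model's free-text reply drives the sim
```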

Our idea was to start with the minimal bare-bones model and then add things to try to see if we could reproduce these problematic consequences. But to our surprise, we actually didn’t have to add anything because these problematic consequences just came out of the bare bones model. This went against our expectations and also what I think the literature would say.

Ars Technica: I’m skeptical of AI in general, particularly in a research context, but there are very specific instances where it can be extremely useful. This strikes me as one of them, largely because your basic model proved to be so robust. You got the same dynamics without introducing anything extra.

Petter Törnberg: Yes. It’s been a big conversation in social science over the last two years or so. There’s a ton of interest in using LLMs for social simulation, but no one has really figured out for what or how it’s going to be helpful, or how we’re going to get past these problems of validity and so on. The kind of approach that we take in this paper is building on a tradition of complex systems thinking. We imagine very simple models of the human world and try to capture very fundamental mechanisms. It’s not really aiming to be realistic or a precise, complete model of human behavior.

I’ve been one of the more critical people of this method, to be honest. At the same time, it’s hard to imagine any other way of studying these kinds of dynamics where we have cultural and structural aspects feeding back into each other. But I still have to take the findings with a grain of salt and realize that these are models, and they’re capturing a kind of hypothetical world—a spherical cow in a vacuum. We can’t predict what someone is going to have for lunch on Tuesday, but we can capture broader mechanisms, and we can see how robust those mechanisms are. We can see whether they’re stable, unstable, which conditions they emerge in, and the general boundaries. And in this case, we found a mechanism that seems to be very robust, unfortunately.

Ars Technica: The dream was that social media would help revitalize the public sphere and support the kind of constructive political dialogue that your paper deems “vital to democratic life.” That largely hasn’t happened. What are the primary negative unexpected consequences that have emerged from social media platforms?

Petter Törnberg: First, you have echo chambers or filter bubbles. There’s broad agreement that if you want to have a functioning political conversation, functioning deliberation, you do need to do that across the partisan divide. If you’re only having a conversation with people who already agree with each other, that’s not enough. There’s debate on how widespread echo chambers are online, but it is quite established that there are a lot of spaces online that aren’t very constructive because there’s only people from one political side. So that’s one ingredient that you need. You need to have a diversity of opinion, a diversity of perspective.

The second one is that the deliberation needs to be among equals; people need to have more or less the same influence in the conversation. It can’t be completely controlled by a small, elite group of users. This is also something that people have pointed to on social media: It has a tendency of creating these influencers because attention attracts attention. And then you have a breakdown of conversation among equals.

The final one is what I call (based on Chris Bail’s book) the social media prism. The more extreme users tend to get more attention online. This is often discussed in relation to engagement algorithms, which tend to identify the type of content that most upsets us and then boost that content. I refer to it as a “trigger bubble” instead of the filter bubble. They’re trying to trigger us as a way of making us engage more so they can extract our data and keep our attention.

Ars Technica: Your conclusion is that there’s something within the structural dynamics of the network itself that’s to blame—something fundamental to the construction of social networks that makes these extremely difficult problems to solve.

Petter Törnberg: Exactly. It comes from the fact that we’re using these AI models to capture a richer representation of human behavior, which allows us to see something that wouldn’t really be possible using conventional agent-based modeling. There have been previous models looking at the growth of social networks on social media. People choose to retweet or not, and we know that action tends to be very reactive. We tend to be very emotional in that choice. And it tends to be a highly partisan and polarized type of action. You hit retweet when you see someone being angry about something, or doing something horrific, and then you share that. It’s well-known that this leads to toxic, more polarized content spreading more.

But what we find is that it’s not just that this content spreads; it also shapes the network structures that are formed. So there’s feedback between the effective emotional action of choosing to retweet something and the network structure that emerges. And then in turn, you have a network structure that feeds back what content you see, resulting in a toxic network. The definition of an online social network is that you have this kind of posting, reposting, and following dynamics. It’s quite fundamental to it. That alone seems to be enough to drive these negative outcomes.

Ars Technica: I was frankly surprised at the ineffectiveness of the various intervention strategies you tested. But it does seem to explain the Bluesky conundrum. Bluesky has no algorithm, for example, yet the same dynamics still seem to emerge. I think Bluesky’s founders genuinely want to avoid those dysfunctional issues, but they might not succeed, based on this paper. Why are such interventions so ineffective? 

Petter Törnberg: We’ve been discussing whether these things are due to the platforms doing evil things with algorithms or whether we as users are choosing that we want a bad environment. What we’re saying is that it doesn’t have to be either of those. This is often the unintended outcomes from interactions based on underlying rules. It’s not necessarily because the platforms are evil; it’s not necessarily because people want to be in toxic, horrible environments. It just follows from the structure that we’re providing.

We tested six different interventions. Google has been trying to make social media less toxic and recently released a newsfeed algorithm based on the content of the text. So that’s one example. We’re also trying to do more subtle interventions because often you can find a certain way of nudging the system so it switches over to healthier dynamics. Some of them have moderate or slightly positive effects on one of the attributes, but then they often have negative effects on another attribute, or they have no impact whatsoever.

I should say also that these are very extreme interventions in the sense that, if you depended on making money on your platform, you probably don’t want to implement them because it probably makes it really boring to use. It’s like showing the least influential users, the least retweeted messages on the platform. Even so, it doesn’t really make a difference in changing the basic outcomes. What we take from that is that the mechanism producing these problematic outcomes is really robust and hard to resolve given the basic structure of these platforms.

Ars Technica: So how might one go about building a successful social network that doesn’t have these problems? 

Petter Törnberg: There are several directions where you could imagine going, but there’s also the constraint of what people will actually use. Think back to the early Internet, like ICQ. ICQ had this feature where you could just connect to a random person. I loved it when I was a kid. I would talk to random people all over the world. I was 12 in the countryside on a small island in Sweden, and I was talking to someone from Arizona, living a different life. I don’t know how successful that would be these days, the Internet having become a lot less innocent than it was.

For instance, we can focus on the question of inequality of attention, a very well-studied and robust feature of these networks. I personally thought we would be able to address it with our interventions, but attention draws attention, and this leads to a power law distribution, where 1 percent [of users] dominates the entire conversation. We know the conditions under which those power laws emerge. This is one of the main outcomes of social network dynamics: extreme inequality of attention.

But in social science, we always teach that everything is a normal distribution. The move from studying the conventional social world to studying the online social world means that you’re moving from these nice normal distributions to these horrible power law distributions. Those are the outcomes of having social networks where the probability of connecting to someone depends on how many previous connections they have. If we want to get rid of that, we probably have to move away from the social network model and have some kind of spatial model or group-based model that makes things a little bit more local, a little bit less globally interconnected.
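
That rich-get-richer mechanism is easy to reproduce in a few lines. The sketch below is a generic preferential-attachment toy model, not the paper's simulation: each arriving user follows an existing user with probability proportional to that user's current follower count.

```python
import random

random.seed(7)
N_USERS = 20_000

# Generic preferential-attachment toy, NOT the paper's model. Picking
# uniformly from a pool of past follow events implements "probability
# proportional to follower count," because popular users appear in the
# pool more often.
followers = [1, 1]   # users 0 and 1 follow each other to seed the network
pool = [0, 1]        # candidate list; a user's multiplicity ~ popularity

for new_user in range(2, N_USERS):
    target = random.choice(pool)
    followers[target] += 1
    followers.append(1)       # the newcomer's baseline visibility
    pool.extend([target, new_user])

followers.sort(reverse=True)
elite = N_USERS // 100        # the top 1 percent of accounts
share = sum(followers[:elite]) / sum(followers)
print(f"Top 1% of users hold {share:.0%} of all follower links")
# Prints a share many times larger than 1% -- a heavy-tailed elite emerges
# from nothing but "attention attracts attention."
```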

Ars Technica: It sounds like you’d want to avoid those big influential nodes that play such a central role in a large, complex global network. 

Petter Törnberg: Exactly. I think that having those global networks and structures fundamentally undermines the possibility of the kind of conversations that political scientists and political theorists traditionally talked about when they were discussing in the public square. They were talking about social interaction in a coffee house or a tea house, or reading groups and so on. People thought the Internet was going to be precisely that. It’s very much not that. The dynamics are fundamentally different because of those structural differences. We shouldn’t expect to be able to get a coffee house deliberation structure when we have a global social network where everyone is connected to everyone. It is difficult to imagine a functional politics building on that.

Ars Technica: I want to come back to your comment on the power law distribution, how 1 percent of people dominate the conversation, because I think that is something that most users routinely forget. The horrible things we see people say on the Internet are not necessarily indicative of the vast majority of people in the world. 

Petter Törnberg: For sure. That is capturing two aspects. The first is the social media prism, where the perspective we get of politics when we see it through the lens of social media is fundamentally different from what politics actually is. It seems much more toxic, much more polarized. People seem a little bit crazier than they really are. It’s a very well-documented aspect of the rise of polarization: People have a false perception of the other side. Most people have fairly reasonable and fairly similar opinions. The actual polarization is lower than the perceived polarization. And that arguably is a result of social media, how it misrepresents politics.

And then we see this very small group of users that become very influential who often become highly visible as a result of being a little bit crazy and outrageous. Social media creates an incentive structure that is really central to reshaping not just how we see politics but also what politics is, which politicians become powerful and influential, because it is controlling the distribution of what is arguably the most valuable form of capital of our era: attention. Especially for politicians, being able to control attention is the most important thing. And since social media creates the conditions of who gets attention or not, it creates an incentive structure where certain personalities work better in a way that’s just fundamentally different from how it was in previous eras.

Ars Technica: There are those who have sworn off social media, but it seems like simply not participating isn’t really a solution, either.

Petter Törnberg: No. First, even if you only read, say, The New York Times, that newspaper is still reshaped by what works on social media, the social media logic. I had a student who did a little project this last year showing that as social media became more influential, the headlines of The New York Times became more clickbaity and adapted to the style of what worked on social media. So conventional media and our very culture is being transformed.

But more than that, as I was just saying, it’s the type of politicians, it’s the type of people who are empowered—it’s the entire culture. Those are the things that are being transformed by the power of the incentive structures of social media. It’s not like, “This is things that are happening in social media and this is the rest of the world.” It’s all entangled, and somehow social media has become the cultural engine that is shaping our politics and society in very fundamental ways. Unfortunately.

Ars Technica: I usually like to say that technological tools are fundamentally neutral and can be used for good or ill, but this time I’m not so sure. Is there any hope of finding a way to take the toxic and turn it into a net positive?

Petter Törnberg: What I would say to that is that we are at a crisis point with the rise of LLMs and AI. I have a hard time seeing the contemporary model of social media continuing to exist under the weight of LLMs and their capacity to mass-produce false information or information that optimizes these social network dynamics. We already see a lot of actors—based on this monetization of platforms like X—that are using AI to produce content that just seeks to maximize attention. So misinformation, often highly polarized information as AI models become more powerful, that content is going to take over. I have a hard time seeing the conventional social media models surviving that.

We’ve already seen the process of people retreating in part to credible brands and seeking to have gatekeepers. Young people, especially, are going into WhatsApp groups and other closed communities. Of course, there’s misinformation from social media leaking into those chats also. But these kinds of crisis points at least have the hope that we’ll see a changing situation. I wouldn’t bet that it’s a situation for the better. You wanted me to sound positive, so I tried my best. Maybe it’s actually “good riddance.”

Ars Technica: So let’s just blow up all the social media networks. It still won’t be better, but at least we’ll have different problems.

Petter Törnberg: Exactly. We’ll find a new ditch.

DOI: arXiv, 2025. 10.48550/arXiv.2508.03385  (About DOIs).





OpenAI, cofounder Sam Altman to take on Neuralink with new startup

The company aims to raise $250 million from OpenAI and other investors, although the talks are at an early stage. Altman will not personally invest.

The new venture would be in direct competition with Neuralink, founded by Musk in 2016, which seeks to wire brains directly to computers.

Musk and Altman cofounded OpenAI, but Musk left the board in 2018 after clashing with Altman, and the two have since become fierce rivals in their pursuit of AI.

Musk launched his own AI start-up, xAI, in 2023 and has been attempting to block OpenAI’s conversion from a nonprofit in the courts. Musk donated much of the initial capital to get OpenAI off the ground.

Neuralink is one of a pack of so-called brain-computer interface companies, while a number of start-ups, such as Precision Neuroscience and Synchron, have also emerged on the scene.

Neuralink earlier this year raised $650 million at a $9 billion valuation, and it is backed by investors including Sequoia Capital, Thrive Capital, and Vy Capital. Altman had previously invested in Neuralink.

Brain implants are a decades-old technology, but recent leaps forward in AI and in the electronic components used to collect brain signals have offered the prospect that they can become more practically useful.

Altman has backed a number of other companies in markets adjacent to ChatGPT-maker OpenAI, which is valued at $300 billion. In addition to cofounding World, he has also invested in the nuclear fission group Oklo and nuclear fusion project Helion.

OpenAI declined to comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Space Force officials take secrecy to new heights ahead of key rocket launch

The Vulcan rocket checks off several important boxes for the Space Force. First, it relies entirely on US-made rocket engines. The Atlas V rocket it is replacing uses Russian-built main engines, and given the chilled relations between the two powers, US officials have long desired to stop using Russian engines to power the Pentagon’s satellites into orbit. Second, ULA says the Vulcan rocket will eventually provide a heavy-lift launch capability at a lower cost than the company’s now-retired Delta IV Heavy rocket.

Third, Vulcan provides the Space Force with an alternative to SpaceX’s Falcon 9 and Falcon Heavy, which have been the only rockets in their class available to the military since the last national security mission was launched on an Atlas V rocket one year ago.

Col. Jim Horne, mission director for the USSF-106 launch, said this flight marks a “pretty historic point in our program’s history. We officially end our reliance on Russian-made main engines with this launch, and we continue to maintain our assured access to space with at least two independent rocket service companies that we can leverage to get our capabilities on orbit.”

What’s onboard?

The Space Force has only acknowledged one of the satellites aboard the USSF-106 mission, but there are more payloads cocooned inside the Vulcan rocket’s fairing.

The $250 million mission that officials are willing to talk about is named Navigation Technology Satellite-3, or NTS-3. This experimental spacecraft will test new satellite navigation technologies that may eventually find their way onto next-generation GPS satellites. A key focus for engineers who designed and will operate the NTS-3 satellite is to look at ways of overcoming GPS jamming and spoofing, which can degrade satellite navigation signals used by military forces, commercial airliners, and civilian drivers.

“We’re going to be doing, we anticipate, over 100 different experiments,” said Joanna Hinks, senior research aerospace engineer at the Air Force Research Laboratory’s space vehicles directorate, which manages the NTS-3 mission. “Some of the major areas we’re looking at—we have an electronically steerable phased array antenna so that we can deliver higher power to get through interference to the location that it’s needed.”

Arlen Biersgreen, then-program manager for the NTS-3 satellite mission at the Air Force Research Laboratory, presents a one-third scale model of the NTS-3 spacecraft to an audience in 2022. Credit: US Air Force/Andrea Rael

GPS jamming is especially a problem in and near war zones. Investigators probing the crash of Azerbaijan Airlines Flight 8243 last December determined GPS jamming, likely by Russian military forces attempting to counter a Ukrainian drone strike, interfered with the aircraft’s navigation as it approached its destination in the Russian republic of Chechnya. Azerbaijani government officials blamed a Russian surface-to-air missile for damaging the aircraft, ultimately leading to a crash in nearby Kazakhstan that killed 38 people.

“We have a number of different advanced signals that we’ve designed,” Hinks said. “One of those is the Chimera anti-spoofing signal… to protect civil users from spoofing that’s affecting so many aircraft worldwide today, as well as ships.”

The NTS-3 spacecraft, developed by L3Harris and Northrop Grumman, only takes up a fraction of the Vulcan rocket’s capacity. The satellite weighs less than 3,000 pounds (about 1,250 kilograms), about a quarter of what this version of the Vulcan rocket can deliver to geosynchronous orbit.



Scientists hid secret codes in light to combat video fakes

Hiding in the light

Previously, the Cornell team had figured out how to make small changes to specific pixels to tell if a video had been manipulated or created by AI. But its success depended on the creator of the video using a specific camera or AI model. Their new method, “noise-coded illumination” (NCI), addresses those and other shortcomings by hiding watermarks in the apparent noise of light sources. A small piece of software can do this for computer screens and certain types of room lighting, while off-the-shelf lamps can be coded via a small attached computer chip.

“Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos,” Davis said. “When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations.” Because the watermark is designed to look like noise, it’s difficult to detect without knowing the secret code.
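
The scheme works like spread-spectrum watermarking: the light is modulated with a faint pseudorandom code, and anyone holding the code can check whether each stretch of footage still correlates with it. Here is a toy illustration of that principle (my own sketch, not the Cornell implementation):

```python
import numpy as np

# Toy spread-spectrum illustration of the idea, NOT the Cornell system:
# the light source carries a faint pseudorandom code; windows of footage
# that stop correlating with the code reveal where edits were made.

rng = np.random.default_rng(1234)            # the shared secret seed
n = 4000                                     # brightness samples
code = rng.choice([-1.0, 1.0], size=n)       # pseudorandom light modulation
scene = 100 + 2 * np.sin(np.linspace(0, 20, n))        # ordinary brightness
recorded = scene + 0.5 * code + rng.normal(0, 0.2, n)  # faint embedded code

tampered = recorded.copy()
tampered[1500:2500] = 100 + rng.normal(0, 1.0, 1000)   # spliced-in fake

def window_scores(signal, code, win=500):
    """Correlate each window with the secret code; low scores = suspicious."""
    scores = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win] - signal[start:start + win].mean()
        scores.append(float(seg @ code[start:start + win]) / win)
    return scores

print("authentic:", [f"{s:.2f}" for s in window_scores(recorded, code)])
print("tampered: ", [f"{s:.2f}" for s in window_scores(tampered, code)])
# Authentic windows score near the embedded amplitude (~0.5); the spliced
# windows collapse toward zero, localizing the manipulation in time.
```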

The Cornell team tested their method against a broad range of manipulations: warp cuts, changes to speed and acceleration, compositing, and deepfakes, for instance. Their technique proved robust to things like signal levels below human perception; subject and camera motion; camera flash; human subjects with different skin tones; different levels of video compression; and indoor and outdoor settings.

“Even if an adversary knows the technique is being used and somehow figures out the codes, their job is still a lot harder,” Davis said. “Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other.” That said, Davis added, “This is an important ongoing problem. It’s not going to go away, and in fact it’s only going to get harder.”

DOI: ACM Transactions on Graphics, 2025. 10.1145/3742892  (About DOIs).



Experiment will attempt to counter climate change by altering ocean


Gulf of Maine will be site of safety and effectiveness testing.

Woods Hole researchers Adam Subhas (left) and Chris Murray conducted a series of lab experiments earlier this year to test the impact of an alkaline substance, sodium hydroxide, on copepods in the Gulf of Maine. Credit: Daniel Hentz/Woods Hole Oceanographic Institution

Later this summer, a fluorescent reddish-pink spiral will bloom across the Wilkinson Basin in the Gulf of Maine, about 40 miles northeast of Cape Cod. Scientists from the Woods Hole Oceanographic Institution will release the nontoxic water tracer dye behind their research vessel, where it will unfurl into a half-mile wide temporary plume, bright enough to catch the attention of passing boats and even satellites.

As it spreads, the researchers will track its movement to monitor a tightly controlled, federally approved experiment testing whether the ocean can be engineered to absorb more carbon, and in turn, help combat the climate crisis.

As the world struggles to stay below the 1.5° Celsius global warming threshold—a goal set out in the Paris Agreement to avoid the most severe impacts of climate change—experts agree that reducing greenhouse gas emissions won’t be enough to avoid overshooting this target. The latest Intergovernmental Panel on Climate Change report, published in 2023, emphasizes the urgent need to actively remove carbon from the atmosphere, too.

“If we really want to have a shot at mitigating the worst effects of climate change, carbon removal needs to start scaling to the point where it can supplement large-scale emissions reductions,” said Adam Subhas, an associate scientist in marine chemistry and geochemistry at the Woods Hole Oceanographic Institution, who will oversee the week-long experiment.

The test is part of the LOC-NESS project—short for Locking away Ocean Carbon in the Northeast Shelf and Slope—which Subhas has been leading since 2023. The ongoing research initiative is evaluating the effectiveness and environmental impact of a marine carbon dioxide removal approach called ocean alkalinity enhancement (OAE).

This method of marine carbon dioxide removal involves adding alkaline substances to the ocean to boost its natural ability to neutralize acids produced by greenhouse gases. It’s promising, Subhas said, because it has the potential to lock away carbon permanently.

“Ocean alkalinity enhancement does have the potential to reach sort of gigatons per year of carbon removal, which is the scale at which you would need to supplement emissions reductions,” Subhas said. “Once the alkalinity is dissolved in seawater, it reacts with carbon dioxide and forms bicarbonate—essentially dissolved baking soda. That bicarbonate is one of the most stable forms of carbon in the ocean, and it can stay locked away for tens of thousands, even hundreds of thousands of years.”
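
In chemical shorthand, the sequence Subhas describes looks like this (standard carbonate chemistry, not an equation taken from the LOC-NESS team):

```latex
% Lye dissociates in seawater, and the hydroxide captures CO2 as bicarbonate
% (standard carbonate chemistry, not an equation from the LOC-NESS project):
\mathrm{NaOH \longrightarrow Na^{+} + OH^{-}}
\qquad
\mathrm{OH^{-} + CO_{2} \longrightarrow HCO_{3}^{-}}
```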

But it will be a long time before this can happen at the magnitude needed to mitigate climate change.

According to Wil Burns, co-director of the Institute for Responsible Carbon Removal at American University, between 6 and 10 gigatons of carbon need to be removed from the atmosphere annually by 2050 in order to meet the Paris Agreement climate target. “It’s a titanic task,” he said.

Most marine carbon dioxide removal initiatives, including those involving OAE, are still in a nascent stage.

“We’re really far from having any of these technologies be mature,” said Lisa Levin, an oceanographer and professor at the Scripps Institution of Oceanography at the University of California San Diego, who spoke on a panel at the United Nations Ocean Conference in June about the potential environmental risks of mining and carbon dioxide removal on deep-sea ecosystems. “We’re looking at a decade until any serious, large-scale marine carbon removal is going to be able to happen—or more.”

“In the meantime, everybody acknowledges that what we have to do is to reduce emissions, right, and not rely on taking carbon out of the atmosphere,” she said.

Marine carbon dioxide removal

So far, most carbon removal efforts have centered on land-based strategies, such as planting trees, restoring soils, and building machines that capture carbon dioxide directly from the air. Increasingly, researchers are exploring whether the oceans might help.

“Looking at the oceans makes a lot of sense when it comes to carbon removal, because the oceans sequester 70 times more CO2 than terrestrial sources,” Burns said. What if they could hold even more?

That question is drawing growing attention, and not only from scientists. In recent years, a wave of private companies has started piloting various methods of removing carbon from the oceans.

“It’s really the private sector that’s pushing the scaling of this very quickly,” Subhas said. In the US and Canada, he said, there are at least four companies piloting varied ocean alkalinity enhancement techniques.

Last year, Ebb Carbon, a California-based startup focused on marine carbon dioxide removal, signed a deal with Microsoft to remove up to 350,000 metric tons of CO2 over the next decade using an ocean alkalinity enhancement process that splits seawater into acidic and alkaline streams. The alkaline stream is then returned to the sea where it reacts with CO2 and stores it as bicarbonate, enabling the ocean to absorb more carbon dioxide from the atmosphere. In return, Microsoft will purchase carbon removal credits from the startup.

Another company, San Francisco-based Vesta, is using an approach called Coastal Carbon Capture. This involves adding finely ground olivine, a naturally occurring olive-green mineral, to sandy beaches; from there, ocean tides and waves carry it into the sea. Olivine reacts quickly with seawater in a process known as enhanced weathering, increasing ocean alkalinity. The company piloted one of its projects in Duck, North Carolina, last year and estimated that approximately 5,000 metric tons of carbon dioxide would be removed through coastal carbon capture after accounting for project emissions, according to its website.

But these efforts are not without risk, AU’s Burns said. “We have to proceed in an extremely precautionary manner,” he said.

Some scientists are concerned that OAE initiatives involving olivine, which contains heavy metals like nickel and chromium, may harm marine life, he said. Another concern is that the olivine could cloud certain ocean areas and block light from reaching deeper waters. And if too much alkalinity is introduced too fast in concentrated areas, he said, some animals might not be able to adjust.

Other marine carbon dioxide removal projects use methods besides OAE. Some involve adding iron to the ocean to stimulate growth in microscopic plants called phytoplankton, which absorb carbon dioxide through photosynthesis. Others cultivate large-scale farms of kelp and seaweed, which also absorb carbon dioxide through photosynthesis. The marine plants can then be sunk in the deep ocean to store the carbon they absorbed.

In 2023, researchers from Woods Hole Oceanographic Institution conducted their first OAE-related field experiment from the 90-foot research vessel R/V Connecticut south of Massachusetts. As part of this first experiment, nontoxic water tracer dye was released into the ocean. Researchers tracked its movement through the water for 72 hours to model the dispersion of a plume of alkalinity over time. Credit: Woods Hole Oceanographic Institution

One technique that has not yet been tried, but may be piloted in the future, according to the science-based conservation nonprofit Ocean Visions, would employ new technology to accelerate the ocean’s natural process of transferring surface water and carbon to the deep ocean. That’s called artificial downwelling. In a reverse process—artificial upwelling—cooler, nutrient-rich waters from the deep ocean would be pumped to the surface to spur phytoplankton growth.

So far, UC San Diego’s Levin said she is not convinced that these trials will lead to impactful carbon removal.

“I do not think the ocean is ever going to be a really large part of that solution,” she said. However, she added, “It might be part of the storage solution. Right now, people are looking at injecting carbon dioxide that’s removed from industry activities on land and transporting it to the ocean and injecting it into basalt.”

Levin said she’s also worried that we don’t know enough yet about the consequences of altering natural ocean processes.

“I am concerned about how many field trials would be required to actually understand what would happen, and whether we could truly understand the environmental risk of a fully scaled-up operation,” she said.

The experiment

Most marine carbon dioxide removal projects already underway are significantly larger in scale than the LOC-NESS experiment, which Subhas estimates will remove around 50 tons of CO2.

But, he emphasized, the goal of this project is not to compete in size or scale. He said the aim is to provide independent academic research that can help guide and inform the future of this industry and ensure it does not have negative repercussions on the marine environment.

There is some concern, he said, that commercial entities may pursue large-scale OAE initiatives to capitalize on the growing voluntary carbon market without first conducting adequate testing for safety and efficacy. Unlike those initiatives, there is no profit to be made from LOC-NESS. No carbon credits will be sold, Subhas said.

The project is funded by a collection of government and philanthropic sources, including the National Oceanic and Atmospheric Administration and the Carbon to Sea Initiative, a nonprofit that brings funders and scientists together to support marine carbon dioxide removal research and technology.

“We really feel like it’s necessary for the scientific community to be delivering transparent, trusted, and rigorous science to evaluate these things as these activities are currently happening and scaling in the ocean by the private sector,” Subhas said.

The LOC-NESS field trial in Wilkinson Basin will be the first “academic only” OAE experiment conducted from a ship in US waters. It is also the first of its kind to receive a permit from the Environmental Protection Agency under the Marine Protection, Research, and Sanctuaries Act.

“There’s no research in the past or planned that gets even close to providing a learning opportunity that this research is providing for OAE in the pelagic environment,” said Carbon to Sea Initiative’s Antonius Gagern, referring to the open sea experiment.

The permit was granted in April after a year of consultations between the EPA and other federal agencies.

During the process’s public comment periods, commenters expressed concerns about the potential impact on marine life, including the critically endangered North Atlantic right whales, the small crustaceans they eat called copepods, and the larvae of commercially important squid and mackerel fisheries. In a written response to some of these comments, the EPA stated that the small-scale project “demonstrates scientific rigor” and is “not expected to significantly affect human health, the marine environment, or other uses of the ocean.”

Subhas and his interdisciplinary team of chemists, biologists, engineers, and physicists from Woods Hole have spent the last few years planning this experiment and conducting a series of trials at their lab on Cape Cod to ensure they can safely execute and effectively monitor the results of the open-water test they will conduct this summer in the Gulf of Maine.

They specifically tested the effects of sodium hydroxide—an alkaline substance also known as lye or caustic soda—on marine microbes, phytoplankton, and copepods, a crucial food source for right whales and many other marine species in the region. “We chose sodium hydroxide because it’s incredibly pure,” Subhas said. It’s widely used in the US to reduce acidity in drinking water.

It also helps counter ocean acidification, according to Subhas. “It’s like Tums for the ocean,” he said.

Ocean acidification occurs when the ocean absorbs excess carbon dioxide, causing its pH to drop. This makes it harder for corals, krill, and shellfish like oysters and clams to develop their hard calcium carbonate shells or skeletons.

This month, the team plans to release 50 tons of sodium hydroxide into a designated area of the Wilkinson Basin from the back of one of two research vessels participating in the LOC-NESS operation.

The basin is an ideal test site, according to Subhas, because phytoplankton, zooplankton, commercial fish larvae, and endangered species, including some whales, have little presence there during this season. Still, as a precaution, Woods Hole has contracted a protected species observer to keep a lookout for marine species and mitigate potential harm if any are spotted. That person will be on board as the vessel travels to and from the field trial site, including while the team releases the sodium hydroxide into the ocean.

The alkaline substance will be dispersed over four to 12 hours off the back of one of the research vessels, along with the nontoxic fluorescent red water tracer dye called rhodamine. The dye will help track the location and spread of the sodium hydroxide once released into the ocean, and the vessel’s wake will help mix the solution in with the ocean water.

After about an hour, Subhas said, it will form into a “pinkish” patch of water that can be picked up on satellites. “We’re going to be taking pictures from space and looking at how this patch sort of evolves, dilutes, and stretches and disperses over time.”

For a week after that, scientists aboard the vessels will take rotating shifts to collect data around the clock. They will deploy drones and analyze over 20 types of samples from the research vessel to monitor how the surrounding waters and marine life respond to the experiment. They’ll track changes in ocean chemistry, nutrient levels, plankton populations, and water clarity, while also measuring acidity and dissolved CO2.

In March, the team did a large-scale dry run of the dispersal at an open-air testing facility on a naval base in New Jersey. According to Subhas, the trial demonstrated the team’s ability to safely and effectively deliver alkalinity to surface seawater.

“The next step is being able to measure the carbon uptake from seawater—from the atmosphere into seawater,” he said. That is a slower process. He said he expects to have some preliminary results on carbon uptake, as well as environmental impacts, early next year.

This story originally appeared on Inside Climate News.


Experiment will attempt to counter climate change by altering ocean Read More »

how-old-is-the-earliest-trace-of-life-on-earth?

How old is the earliest trace of life on Earth?


A recent conference sees doubts raised about the age of the oldest signs of life.

Where the microbe bodies are buried: metamorphosed sediments in Labrador, Canada, containing microscopic traces of carbon. Credit: Martin Whitehouse

The question of when life began on Earth is as old as human culture.

“It’s one of these fundamental human questions: When did life appear on Earth?” said Professor Martin Whitehouse of the Swedish Museum of Natural History.

So when some apparently biological carbon was dated to at least 3.95 billion years ago—making it the oldest remains of life on Earth—the claim sparked interest and skepticism in equal measure, as Ars Technica reported in 2017.

Whitehouse was among those skeptics. This July, he presented new evidence to the Goldschmidt Conference in Prague that the carbon in question is only 2.7 to 2.8 billion years old, making it younger than other traces of life found elsewhere.

Organic carbon?

The carbon in question is in rock in Labrador, Canada. The rock was originally silt on the seafloor that, it’s argued, hosted early microbial life that was buried by more silt, leaving the carbon as their remains. The pressure and heat of deep burial and tectonic events over eons have transformed the silt into a hard metamorphic rock, and the microbial carbon in it has metamorphosed into graphite.

“They are very tiny, little graphite bits,” said Whitehouse.

The key to showing that this graphite was originally biological rather than geological is its carbon isotope ratio. From life’s earliest days, its enzymes have preferred the slightly lighter isotope carbon-12 over the marginally heavier carbon-13. Organic carbon is therefore much richer in carbon-12 than geological carbon, and the Labrador graphite does indeed have this “light” biological isotope signature.
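
That “light” signature is conventionally expressed in delta notation, shown below as the standard textbook definition; the specific values measured in the Labrador graphite aren’t quoted here.

```latex
% Standard delta notation for carbon isotope ratios (textbook definition);
% organic carbon typically plots tens of per-mil below the standard.
\delta^{13}\mathrm{C} =
\left(
  \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}
       {\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1
\right) \times 1000\ \text{‰}
```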

The key question, however, is its true age.

Mixed-up, muddled-up, shook-up rocks

Sorting out the age of the carbon-containing Labrador rock is a geological can of worms.

These are some of the oldest rocks on the planet—they’ve been heated, squished, melted, and faulted multiple times as Earth went through the growth, collision, and breakup of continents before being worn down by ice and exposed today.

“That rock itself is unbelievably complicated,” said Whitehouse. “It’s been through multiple phases of deformation.”

In general, sediments can be dated directly only if they contain a layer of volcanic ash or distinctive fossils. Neither is available in these Labrador rocks.

“The rock itself is not directly dateable,” said Whitehouse, “so then you fall onto the next best thing, which is you want to look for a classic field geology cross-cutting relationship of something that is younger and something that you can date.”

The idea, which is as old as the science of geology itself, is to bracket the age of the sediment by finding a rock formation that cuts across it. Logically, the cross-cutting rock is younger than the sediment it cuts across.

In this case, the carbon-containing metamorphosed siltstone is surrounded by swirly, gray banded gneiss rock, but the boundary between the siltstone and the gray gneiss is parallel, so there’s no cross-cutting to use.

Professor Tsuyoshi Komiya of The University of Tokyo was a coauthor on the 3.95 billion-year age paper. His team used a cross-cutting rock they found at a different location and extrapolated that to the carbon-bearing siltstone to constrain its age. “It was discovered that the gneiss was intruded into supracrustal rocks (mafic and sedimentary rocks),” said Komiya in an email to Ars Technica.

But Whitehouse disputes that inference between the different outcrops.

“You’re reliant upon making these very long-distance assumptions and correlations to try to date something that might actually not have anything to do with what you think you’re dating,” he said.

Professor Jonathan O’Neil of the University of Ottawa, who was not involved in either Whitehouse’s or Komiya’s studies but who has visited the outcrops in question, agrees with Whitehouse. “I remember I was not convinced either by these cross-cutting relationships,” he told Ars. “It’s not clear to me that one is necessarily older than the other.”

With the field geology evidence disputed, the other pillar holding up the 3.95-billion-year-old date is its radiometric date, measured in zircon crystals extracted from the rocks surrounding the metamorphosed siltstone.

The zircon keeps the score

Geologists use the mineral zircon to date rocks because when it crystallizes, it incorporates uranium but not lead. So as radioactive uranium slowly decays into lead, the ratio of uranium to lead provides the age of the crystal.
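
In its simplest form, that relationship is the standard decay equation below (textbook geochronology, not a formula quoted by either team); in practice, labs pair it with the parallel uranium-235 to lead-207 system as a cross-check.

```latex
% Uranium-lead age from the 238U -> 206Pb decay system (standard formula):
t = \frac{1}{\lambda_{238}}
    \ln\!\left(1 + \frac{^{206}\mathrm{Pb}}{^{238}\mathrm{U}}\right),
\qquad
\lambda_{238} \approx 1.55 \times 10^{-10}\ \mathrm{yr}^{-1}
```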

But the trouble with any date obtained from rocks as complicated as these is knowing exactly what geological event it dates—the number alone means little without the context of all the other geological evidence for the events that affected the area.

Both Whitehouse and O’Neil have independently sampled and dated the same rocks as Komiya’s team. Where Komiya’s team got a date of 3.95 billion years, Whitehouse’s and O’Neil’s new dates both come in around 3.87 billion years. Importantly, the new dates are far more precise, with errors of around plus-or-minus 5 or 6 million years, which is remarkably tight for rocks this old. The 3.95 date had an error around 10 times bigger. “It’s a large error,” said O’Neil.

But there’s a more important question: How is that date related to the age of the organic carbon? The rocks have been through many events that could each have “set” the dates in the zircons. That’s because zircons can survive multiple re-heatings and even partial remelting, with each new event adding a new layer, or “zone,” on the outer surface of the crystal, recording the age of that event.

“This rock has seen all the events, and the zircon in it has responded to all of these events in a way that, when you go in with a very small-scale ion beam to do the sampling on these different zones, you can pick apart the geological history,” Whitehouse said.

Whitehouse’s team zapped tiny spots on the zircons with a beam of negatively charged oxygen ions to dislodge ions from the crystals, then sucked away these ions into a mass spectrometer to measure the uranium-lead ratio, and thus the dates. The tiny beam and relatively small error have allowed Whitehouse to document the events that these rocks have been through.

“Having our own zircon means we’ve been able to go in and look in more detail at the internal structure in the zircon,” said Whitehouse. “Where we might have a core that’s 3.87, we’ll have a rim that is 2.7 billion years, and that rim, morphologically, looks like an igneous zircon.”

The igneous outer rims of Whitehouse’s zircons show that the surrounding rock was partially molten, and so able to flow, at that time. That flow was probably what brought the rock next to the carbon-containing sediments. The rims’ date of 2.7 billion years ago means the carbon in the sediments could be any age older than that.

That’s a key difference from Komiya’s work. He argues that the older dates in the cores of the zircons are the true age of the cross-cutting rock. “Even the igneous zircons must have been affected by the tectonothermal event; therefore, the obtained age is the minimum age, and the true age is older,” said Komiya. “The fact that young zircons were found does not negate our research.”

But Whitehouse contends that the old cores of the zircons instead record a time when the original rock formed, long before it became a gneiss and flowed next to the carbon-bearing sediments.

Zombie crystals

Zircon’s resilience means it can survive being eroded from the rock where it formed and then deposited in a new sedimentary rock, the undead remnant of an older, now-vanished landscape.

The carbon-bearing siltstone holds such zombie zircons, and Whitehouse presented new data on them to the Goldschmidt Conference, dating them to 2.8 billion years ago. He argues that these crystals formed in an igneous rock 2.8 billion years ago, then were eroded, washed into the sea, and settled into the silt. The siltstone therefore can be no older than 2.8 billion years, he said.

“You cannot deposit a zircon that is not formed yet,” O’Neil explained.
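
O’Neil’s point is the entire logic of detrital-zircon dating, and it fits in a few lines of code. The grain ages below are invented for illustration, not real data:

```python
# Toy illustration of the maximum-depositional-age argument
# (hypothetical grain ages in billions of years, not real data).
detrital_zircon_ages = [3.6, 3.1, 2.9, 2.8, 2.8]

# A sediment can only contain grains that existed when it was laid down,
# so the youngest grain caps how old the sediment itself can be.
max_depositional_age = min(detrital_zircon_ages)
print(f"The siltstone can be no older than {max_depositional_age} billion years")
```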


Tiny recorders of history – ancient zircon crystals from Labrador. Left shows layers built up as the zircon went through many heating events. Right shows a zircon with a prism-like outer shape showing that it formed in igneous conditions around an earlier zircon. Circles indicate where an ion beam was used to measure dates. Credit: Martin Whitehouse

This 2.8-billion-year age, along with the igneous zircon age of 2.7 billion years, brackets the age of the organic carbon to anywhere between 2.8 and 2.7 billion years old. That’s much younger than Komiya’s date of 3.95 billion years old.

Komiya disagrees: “I think that the estimated age is minimum age because zircons suffered from many thermal events, so that they were rejuvenated,” he said. In other words, the 2.8-billion-year age again reflects later heating, and the true date is given by the oldest-dated zircons in the siltstone.

But Whitehouse presented a third line of evidence to dispute the 3.95-billion-year date: isotopes of hafnium in the same zombie zircon crystals.

The technique relies on the radioactive decay of lutetium-176 to hafnium-176. If the 2.8-billion-year age were an artifact of rejuvenation by later heating, the zircons would have had to form from material with a hafnium isotope ratio incompatible with the isotope composition of the early Earth.
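
The underlying arithmetic is the standard initial-ratio calculation (textbook Lu-Hf geochronology, not the paper’s own notation): the hafnium composition a zircon started with is back-calculated from today’s measured values using its assumed crystallization age, so assuming the wrong age drives the inferred starting composition to values no early-Earth material could have supplied.

```latex
% Initial Hf ratio back-calculated at an assumed crystallization age t
% (standard Lu-Hf geochronology; lambda_176 ~ 1.87e-11 per year):
\left(\frac{^{176}\mathrm{Hf}}{^{177}\mathrm{Hf}}\right)_{\mathrm{initial}}
=
\left(\frac{^{176}\mathrm{Hf}}{^{177}\mathrm{Hf}}\right)_{\mathrm{measured}}
-
\left(\frac{^{176}\mathrm{Lu}}{^{177}\mathrm{Hf}}\right)_{\mathrm{measured}}
\left(e^{\lambda_{176}\, t} - 1\right)
```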

“They go to impossible numbers,” said Whitehouse.

The only way that the uranium-lead ratio can be compatible with the hafnium in the zircons, Whitehouse argued, is if the zircons that settled in the silt had crystallized around 2.8 billion years ago, constraining the organic carbon to being no older than that.

The new oldest remains of life on Earth, for now

If the Labrador carbon is no longer the oldest trace of life on Earth, then where are the oldest remains of life now?

For Whitehouse, it’s in the 3.77-billion-year-old Isua Greenstone Belt in Greenland: “I’m willing to believe that’s a well-documented age… that’s what I think is the best evidence for the oldest biogenicity that we have,” said Whitehouse.

O’Neil recently co-authored a paper on Earth’s oldest surviving crustal rocks, located next to Hudson Bay in Canada. He points there. “I would say it’s in the Nuvvuagittuq Greenstone belt,” said O’Neil, “because I would argue that these rocks are 4.3 billion years old. Again, not everybody agrees!” Intriguingly, the rocks he is referring to contain carbon with a possibly biological origin and are thought to be the remains of the kind of undersea vent where life could well have first emerged.

But the bigger picture is the fact that we have credible traces of life of this vintage—be it 3.8 or 3.9 or 4.3 billion years.

Any of those dates is remarkably early in the planet’s 4.6-billion-year life. It’s long before there was an oxygenated atmosphere, before continents emerged above sea level, and before plate tectonics got going. It’s also much older than the oldest microbial “stromatolite” fossils, which have been dated to about 3.48 billion years ago.

O’Neil thinks that once conditions on Earth were habitable, life would have emerged relatively fast: “To me, it’s not shocking, because the conditions were the same,” he said. “The Earth has the luxury of time… but biology is very quick. So if all the conditions were there by 4.3 billion years old, why would biology wait 500 million years to start?”

Photo of Howard Lee

Howard Lee is a freelance science writer focusing on the evolution of planet Earth through deep time. He earned a B.Sc. in geology and M.Sc. in remote sensing, both from the University of London, UK.

How old is the earliest trace of life on Earth? Read More »

new-adhesive-surface-modeled-on-a-remora-works-underwater

New adhesive surface modeled on a remora works underwater


It was tested for its ability to adhere to the inside of the digestive tract.

Most adhesives can’t stick to wet surfaces because water and other fluids disrupt the adhesive’s bonding mechanisms. This problem, though, has been beautifully solved by evolution in remora suckerfish, which use an adhesive disk on top of their heads to attach to animals like dolphins, sharks, and even manta rays.

A team of MIT scientists has now taken a close look at these remora disks and reverse-engineered them. “Basically, we looked at nature for inspiration,” says Giovanni Traverso, a professor in MIT’s Department of Mechanical Engineering and senior author of the study.

Sticking Variety

Remora adhesive disks are an evolutionary adaptation of the fish’s first dorsal fin, the one that in other species sits on top of the body, just behind the head and gill covers. The disk rests on an intercalary backbone—a bone structure that most likely evolved from parts of the spine. This bony structure supports lamellae, specialized bony plates with tiny backward-facing spikes called spinules. The entire disk is covered with soft tissue compartments that are open at the top. “This makes the remora fish adhere very securely to soft-bodied, fast-moving marine hosts,” Traverso says.

A remora attaches to the host by pressing itself against the skin, which pushes the water out of these compartments, creating a low-pressure zone. Then, the spinules mechanically interlock with the host’s surface, making the whole thing work a bit like a combination of a suction cup and Velcro. When the fish wants to detach from a host, it lifts the disk, letting water back into the compartments to remove the suction. Once released, it can simply swim away.
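
For a rough sense of scale, the suction contribution is just the pressure deficit times the pad area; the numbers below are illustrative guesses, not measurements from the study.

```latex
% Suction force = pressure deficit x pad area (illustrative numbers only):
F = \Delta P \cdot A
  \approx 50\,\mathrm{kPa} \times 10\,\mathrm{cm^{2}}
  = 5\times10^{4}\,\mathrm{Pa} \times 10^{-3}\,\mathrm{m^{2}}
  = 50\,\mathrm{N}
```

That is roughly the weight of a 5-kilogram mass, and the spinules’ Velcro-like interlocking adds shear resistance on top of the suction.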

What impressed the scientists the most, though, was the versatility of those disks. Reef-associated remora species like Phtheirichthys lineatus are generalists and stick to various hosts, including other fish, sharks, and turtles. Other species living in the open sea are more specialized and attach to cetaceans, swordfish, or marlins. And while most remoras attach to the external tissue of their hosts, R. albescens sticks inside the oral cavity and gill chambers of manta rays.


A close-up of the adhesive pad of a remora. Credit: Stephen Frink

To learn what makes all these different disks so good at sticking underwater, the team first examined their anatomy in detail. It turned out that the difference between the disks was mostly in the positioning of lamellae. Generalist species have a mix of parallel and angled lamellae, while remoras sticking to fast-swimming hosts have them mostly parallel. R. albescens, on the other hand, doesn’t have a dominant lamellae orientation pattern but has them positioned at a very wide variety of angles.

The researchers wanted to make an adhesive device that would work for a wide range of applications, including maritime exploration or underwater manufacturing. Their initial goal, though, was designing a drug delivery platform that could reliably stick to the inside walls of the gastrointestinal tract. So they chose R. albescens disks as their starting point, since that species already attaches internally to its host. They termed their device the Mechanical Underwater Soft Adhesion System (MUSAS).

However, they didn’t just opt for a biomimetic, copy-and-paste design. “There were things we did differently,” Traverso says.

Upgrading nature

The first key difference was deployment. MUSAS was supposed to travel down the GI tract to reach its destination, so the first challenge was making it fit into a pill. The team chose the size 000 capsule, which, at 26 millimeters long and 9.5 millimeters in diameter, is the largest Food and Drug Administration-approved ingestible form. Like a remora disk, MUSAS has a supporting structure, but made of stainless steel. The angled lamellae with spinules, fashioned after those of R. albescens, were made of a shape-memory nickel-titanium alloy. The role of the remora’s soft tissues, which provide suction by dividing the disk into compartments, was played by an elastomer.

MUSAS would be swallowed in folded form within its oversized pill. “The capsule is tuned to dissolve in a specific pH environment, which is how we determine the target location—for example, the small intestine has a slightly different pH than the stomach,” says Ziliang Kang, an MIT researcher in Traverso’s group and lead author of the study. Once released, the shape-memory alloy in MUSAS’ lamellae-like structures would unfold in response to body temperature, and the whole thing would stick to the wall of the target organ, be it the esophagus, the stomach, or the intestines.

The mechanism of sticking was also a bit different from that of remoras. “The fish can swim and actively press itself against the surface it wants to stick to. MUSAS can’t do that, so instead we relied on the peristaltic movements within the GI tract to exert the necessary force,” Traverso explains. When the muscles contract, MUSAS would be pressed against the wall and attach to it. And it was expected to stay there for quite some time.

The team ran a series of experiments to evaluate MUSAS’ performance in a few different scenarios. The drug-delivery platform application was tested on pig organ samples. MUSAS stayed in the sample GI tract for an average of nine days, with the longest sticking time reaching three and a half weeks, and it managed to stay in place despite food and fluids going through the samples.

Even when the team poked the devices with a pipette to test what they called “resisting dynamic interference,” MUSAS just slid a little but remained firmly attached. Other experiments included using MUSAS to attach temperature sensors to external tissues of live fish and putting sensors that could detect reflux events in the GI tract of live pigs.

Branching out

The team is working on making MUSAS compatible with a wider range of drugs and mRNA vaccines. “We also think about using this for stimulating tissues,” Traverso says. The solution he has in mind would use MUSAS to deliver electrical pulses to the walls of the GI tract, which Traverso’s lab has shown can activate appetite-regulating hormones. But the team also wants to go beyond strictly medical applications.

The team also demonstrated that MUSAS is a remarkably strong adhesive. When it sticks to a surface, it can hold a weight over a thousand times greater than its own, putting it more or less on par with some of the best adhesives we have, such as polyurethane glues or epoxy resins. What’s more, this sticking strength was measured with MUSAS attached to soft, uneven, wet surfaces. “On a rigid, even surface, the force-to-weight ratio should be even higher,” Kang claims. And this, Kang thinks, makes scaled-up variants of MUSAS a good match for underwater manufacturing.

“The first scenario I see is using MUSAS as grippers attached to robotic arms moving around soft objects,” Kang explains. Currently, this is done using vacuum systems that simply suck onto a fabric or other surface. The problem is that these solutions are rather complex and heavy. Scaled-up MUSAS should be able to achieve the same thing passively, cutting cost and weight. The second idea Kang has is using MUSAS in robots designed to perform maintenance jobs beneath the waterline on boats or ships. “We are really trying to see what is possible,” Traverso says.

Nature, 2025. DOI: 10.1038/s41586-025-09304-4

Photo of Jacek Krywko

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

New adhesive surface modeled on a remora works underwater Read More »

for-giant-carnivorous-dinosaurs,-big-size-didn’t-mean-a-big-bite

For giant carnivorous dinosaurs, big size didn’t mean a big bite

“And then you have the Spinosaurus, which was kind of weird in general,” Rowe says. “There was a study by Dave Hone and Tom Holtz about how it was waiting on the shorelines, waiting for food to go by that it could fish out.” But Spinosaurus’ foraging wasn’t limited to fishing. A pterosaur was found preserved in its stomach, and Iguanodon remains were found in the maw of a Baryonyx, another large carnivore belonging to the same lineage as the Spinosaurus. “They had great diversity in their diet. They were generalists, but our results show they weren’t these massive bone-crunching predators like the T. rex,” Rowe says. Because the T. rex was just built different.

King of the Cretaceous jungle

The Tyrannosauroidea lineage had stiff, akinetic skulls, meaning they had very little mobility in the joints. The T. rex skull could, and most likely did, withstand very high stress as the animal pursued a “high stress, high power” strategy, entirely different from other large carnivores. “They were very much like big crocodiles with extremely strong, reinforced jaws and powerful muscles that could pulverize bones,” Rowe claims.

The T. rex, he argued, was a specialist—an ambush predator that attacked large, highly mobile prey, aiming to subdue it with a single bite. “And we have fossil evidence of that,” Rowe says. “In the Museum of Natural History in New York, there is a Hadrosaur, a large herbivorous dinosaur with a duck-like beak, and there’s a T. rex tooth embedded in its back.” This, he thinks, means the T. rex was actively preying on this animal, especially since there are healing marks around the stuck tooth. “Even with this super strong bite, the T. rex wasn’t always successful,” Rowe adds.

Still, the fight with the Spinosaurus most likely wouldn’t go the way it did in Jurassic Park III. “The T. rex was built to fight like that; the Spinosaurus really wasn’t,” Rowe says.

Current Biology, 2025. DOI: 10.1016/j.cub.2025.06.051

For giant carnivorous dinosaurs, big size didn’t mean a big bite Read More »

texas-prepares-for-war-as-invasion-of-flesh-eating-flies-appears-imminent

Texas prepares for war as invasion of flesh-eating flies appears imminent

Past success

As the flies’ host and geographic range expand, pressure is intensifying to control the flies—something many countries have managed to do in the past.

Decades ago, screwworms were endemic throughout Central America and the southern US. However, governments across the region used intensive, coordinated control efforts to push the flies southward. Screwworms were eliminated from the US around 1966 and pushed south through Mexico in the 1970s and 1980s. They were eventually declared eliminated from Panama in 2006, with the population held at bay by a biological barrier at the Darién Gap, at the border of Panama and Colombia. However, in 2022, the barrier was breached, and the flies began advancing northward, primarily through unmonitored livestock movements. The latest surveillance suggests the flies are now about 370 miles south of Texas.

The main method of wiping out screwworms is the sterile insect technique (SIT), which exploits a weakness in the fly’s life cycle: females tend to mate only once. In the 1950s, researchers at the US Department of Agriculture figured out they could use gamma radiation to sterilize male flies without affecting their ability to find mates. They then bred massive numbers of male flies, sterilized them, and carpet-bombed infested areas with aerial releases, which tanked the population.
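
The arithmetic behind why flooding an area with sterile males works is captured by Knipling’s classic suppression model. The sketch below uses invented numbers and a deliberately simplified life cycle (one mating per female, a fixed per-generation growth rate):

```python
# Toy version of Knipling's sterile-insect-release model (invented numbers;
# real eradication programs are far more elaborate).
def next_generation(fertile_females, sterile_males, growth_rate=5.0):
    """Each female mates once; mating a sterile male leaves no offspring."""
    wild_males = fertile_females  # assume a 1:1 wild sex ratio
    p_fertile_mating = wild_males / (wild_males + sterile_males)
    return fertile_females * p_fertile_mating * growth_rate

females = 1_000_000.0   # hypothetical wild population
releases = 9_000_000.0  # sterile males released every generation
for gen in range(1, 6):
    females = next_generation(females, releases)
    print(f"generation {gen}: {females:,.0f} fertile females")
```

Because each generation’s survivors face the same flood of sterile suitors, the fraction of fertile matings keeps shrinking, and the population collapses within a few generations rather than declining linearly.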

Panama, in partnership with the US, maintained the biological barrier at the Colombian border with continual sterile-fly bombings for years. But as the flies approached this year, the USDA shifted its aerial deliveries to Mexico. In June, the USDA announced plans to set up a new sterile fly facility in Texas for aerial deliveries to northern Mexico. And last month, the USDA halted livestock trade from southern entry points.

Miller said in the announcement today that SIT is no longer enough and that Texas is taking its own steps. Those include the new bait, insecticides, and new feed for livestock and deer laced with the anti-parasitic drug ivermectin. Miller also said the state aims to develop a cattle vaccine that could kill larvae, but no such shot exists yet.

Texas prepares for war as invasion of flesh-eating flies appears imminent Read More »