The company aims to raise $250 million from OpenAI and other investors, although the talks are at an early stage. Altman will not personally invest.
The new venture would be in direct competition with Neuralink, founded by Musk in 2016, which seeks to wire brains directly to computers.
Musk and Altman cofounded OpenAI, but Musk left the board in 2018 after clashing with Altman, and the two have since become fierce rivals in their pursuit of AI.
Musk, who donated much of the initial capital to get OpenAI off the ground, launched his own AI start-up, xAI, in 2023 and has been attempting to block OpenAI’s conversion from a nonprofit in the courts.
Neuralink is one of a pack of so-called brain-computer interface companies; other start-ups, such as Precision Neuroscience and Synchron, have also emerged on the scene.
Neuralink earlier this year raised $650 million at a $9 billion valuation, and it is backed by investors including Sequoia Capital, Thrive Capital, and Vy Capital. Altman had previously invested in Neuralink.
Brain implants are a decades-old technology, but recent leaps forward in AI and in the electronic components used to collect brain signals have offered the prospect that they can become more practically useful.
Altman has backed a number of other companies in markets adjacent to ChatGPT-maker OpenAI, which is valued at $300 billion. In addition to cofounding World, he has also invested in the nuclear fission group Oklo and nuclear fusion project Helion.
The Vulcan rocket checks off several important boxes for the Space Force. First, it relies entirely on US-made rocket engines. The Atlas V rocket it is replacing uses Russian-built main engines, and given the chilled relations between the two powers, US officials have long desired to stop using Russian engines to power the Pentagon’s satellites into orbit. Second, ULA says the Vulcan rocket will eventually provide a heavy-lift launch capability at a lower cost than the company’s now-retired Delta IV Heavy rocket.
Third, Vulcan provides the Space Force with an alternative to SpaceX’s Falcon 9 and Falcon Heavy, which have been the only rockets in their class available to the military since the last national security mission was launched on an Atlas V rocket one year ago.
Col. Jim Horne, mission director for the USSF-106 launch, said this flight marks a “pretty historic point in our program’s history. We officially end our reliance on Russian-made main engines with this launch, and we continue to maintain our assured access to space with at least two independent rocket service companies that we can leverage to get our capabilities on orbit.”
What’s onboard?
The Space Force has only acknowledged one of the satellites aboard the USSF-106 mission, but there are more payloads cocooned inside the Vulcan rocket’s fairing.
The $250 million mission that officials are willing to talk about is named Navigation Technology Satellite-3, or NTS-3. This experimental spacecraft will test new satellite navigation technologies that may eventually find their way onto next-generation GPS satellites. A key focus for the engineers who designed and will operate NTS-3 is overcoming GPS jamming and spoofing, which can degrade satellite navigation signals used by military forces, commercial airliners, and civilian drivers.
“We’re going to be doing, we anticipate, over 100 different experiments,” said Joanna Hinks, senior research aerospace engineer at the Air Force Research Laboratory’s space vehicles directorate, which manages the NTS-3 mission. “Some of the major areas we’re looking at—we have an electronically steerable phased array antenna so that we can deliver higher power to get through interference to the location that it’s needed.”
Arlen Biersgreen, then-program manager for the NTS-3 satellite mission at the Air Force Research Laboratory, presents a one-third scale model of the NTS-3 spacecraft to an audience in 2022. Credit: US Air Force/Andrea Rael
GPS jamming is especially a problem in and near war zones. Investigators probing the crash of Azerbaijan Airlines Flight 8243 last December determined GPS jamming, likely by Russian military forces attempting to counter a Ukrainian drone strike, interfered with the aircraft’s navigation as it approached its destination in the Russian republic of Chechnya. Azerbaijani government officials blamed a Russian surface-to-air missile for damaging the aircraft, ultimately leading to a crash in nearby Kazakhstan that killed 38 people.
“We have a number of different advanced signals that we’ve designed,” Hinks said. “One of those is the Chimera anti-spoofing signal… to protect civil users from spoofing that’s affecting so many aircraft worldwide today, as well as ships.”
The NTS-3 spacecraft, developed by L3Harris and Northrop Grumman, only takes up a fraction of the Vulcan rocket’s capacity. The satellite weighs less than 3,000 pounds (about 1,250 kilograms), about a quarter of what this version of the Vulcan rocket can deliver to geosynchronous orbit.
Previously, the Cornell team had figured out how to make small changes to specific pixels to tell if a video had been manipulated or created by AI. But its success depended on the creator of the video using a specific camera or AI model. Their new method, “noise-coded illumination” (NCI), addresses those and other shortcomings by hiding watermarks in the apparent noise of light sources. A small piece of software can do this for computer screens and certain types of room lighting, while off-the-shelf lamps can be coded via a small attached computer chip.
“Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos,” Davis said. “When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations.” Because the watermark is designed to look like noise, it’s difficult to detect without knowing the secret code.
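The detection principle Davis describes resembles classic spread-spectrum watermarking. The sketch below is not the Cornell team's actual code; it is a toy illustration of how a pseudorandom code hidden well below the noise floor of a light signal can still be recovered by correlation when you know the secret code:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000  # samples of a light signal (e.g., per-frame brightness)
secret_code = rng.choice([-1.0, 1.0], size=n)  # pseudorandom watermark code
ambient = rng.normal(0.0, 1.0, size=n)         # scene/sensor noise

# The light source is modulated by a faint copy of the code,
# well below the noise floor (amplitude 0.05 vs. noise std 1.0).
recorded = ambient + 0.05 * secret_code

def score(signal: np.ndarray, code: np.ndarray) -> float:
    # Normalized correlation: large only when the matching code is present
    return float(np.dot(signal, code) / len(signal))

genuine = score(recorded, secret_code)                       # near 0.05
impostor = score(recorded, rng.choice([-1.0, 1.0], size=n))  # near 0.0
```

An editor who splices in fabricated frames breaks this correlation in the altered region, which is the kind of contradiction between video and code video that the method surfaces.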
The Cornell team tested their method against a broad range of manipulations: warp cuts, changes in speed and acceleration, compositing, and deepfakes, for instance. Their technique proved robust to signal levels below human perception; subject and camera motion; camera flash; human subjects with different skin tones; different levels of video compression; and indoor and outdoor settings.
“Even if an adversary knows the technique is being used and somehow figures out the codes, their job is still a lot harder,” Davis said. “Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other.” That said, Davis added, “This is an important ongoing problem. It’s not going to go away, and in fact it’s only going to get harder.”
Gulf of Maine will be site of safety and effectiveness testing.
Woods Hole researchers Adam Subhas (left) and Chris Murray conducted a series of lab experiments earlier this year to test the impact of sodium hydroxide, an alkaline substance, on copepods in the Gulf of Maine. Credit: Daniel Hentz/Woods Hole Oceanographic Institution
Later this summer, a fluorescent reddish-pink spiral will bloom across the Wilkinson Basin in the Gulf of Maine, about 40 miles northeast of Cape Cod. Scientists from the Woods Hole Oceanographic Institution will release the nontoxic water tracer dye behind their research vessel, where it will unfurl into a half-mile-wide temporary plume, bright enough to catch the attention of passing boats and even satellites.
As it spreads, the researchers will track its movement to monitor a tightly controlled, federally approved experiment testing whether the ocean can be engineered to absorb more carbon, and in turn, help combat the climate crisis.
As the world struggles to stay below the 1.5° Celsius global warming threshold—a goal set out in the Paris Agreement to avoid the most severe impacts of climate change—experts agree that reducing greenhouse gas emissions won’t be enough to avoid overshooting this target. The latest Intergovernmental Panel on Climate Change report, published in 2023, emphasizes the urgent need to actively remove carbon from the atmosphere, too.
“If we really want to have a shot at mitigating the worst effects of climate change, carbon removal needs to start scaling to the point where it can supplement large-scale emissions reductions,” said Adam Subhas, an associate scientist in marine chemistry and geochemistry at the Woods Hole Oceanographic Institution, who will oversee the week-long experiment.
The test is part of the LOC-NESS project—short for Locking away Ocean Carbon in the Northeast Shelf and Slope—which Subhas has been leading since 2023. The ongoing research initiative is evaluating the effectiveness and environmental impact of a marine carbon dioxide removal approach called ocean alkalinity enhancement (OAE).
This method of marine carbon dioxide removal involves adding alkaline substances to the ocean to boost its natural ability to neutralize acids produced by greenhouse gases. It’s promising, Subhas said, because it has the potential to lock away carbon permanently.
“Ocean alkalinity enhancement does have the potential to reach sort of gigatons per year of carbon removal, which is the scale at which you would need to supplement emissions reductions,” Subhas said. “Once the alkalinity is dissolved in seawater, it reacts with carbon dioxide and forms bicarbonate—essentially dissolved baking soda. That bicarbonate is one of the most stable forms of carbon in the ocean, and it can stay locked away for tens of thousands, even hundreds of thousands of years.”
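Subhas's chemistry can be sanity-checked with simple stoichiometry. This sketch is my own back-of-envelope arithmetic, not the project's accounting; it assumes the idealized reaction OH⁻ + CO2 → HCO3⁻, i.e., one mole of CO2 captured per mole of sodium hydroxide:

```python
# Molar masses in g/mol
M_NAOH = 40.00  # sodium hydroxide
M_CO2 = 44.01   # carbon dioxide

def max_co2_uptake_tons(naoh_tons: float) -> float:
    """Upper bound on CO2 absorbed, assuming 1 mol CO2 captured per mol NaOH."""
    return naoh_tons * M_CO2 / M_NAOH

# A 50-ton NaOH release could at most draw down:
print(round(max_co2_uptake_tons(50), 1))  # 55.0 tons of CO2
```

That ideal 55-ton ceiling is consistent with the roughly 50 tons of removal the team expects from its 50-ton release, since real-world uptake by seawater is less than complete.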
But it will be a long time before this could happen at the magnitude needed to mitigate climate change.
According to Wil Burns, co-director of the Institute for Responsible Carbon Removal at American University, between 6 and 10 gigatons of carbon need to be removed from the atmosphere annually by 2050 in order to meet the Paris Agreement climate target. “It’s a titanic task,” he said.
Most marine carbon dioxide removal initiatives, including those involving OAE, are still in a nascent stage.
“We’re really far from having any of these technologies be mature,” said Lisa Levin, an oceanographer and professor at the Scripps Institution of Oceanography at the University of California San Diego, who spoke on a panel at the United Nations Ocean Conference in June about the potential environmental risks of mining and carbon dioxide removal on deep-sea ecosystems. “We’re looking at a decade until any serious, large-scale marine carbon removal is going to be able to happen—or more.”
“In the meantime, everybody acknowledges that what we have to do is to reduce emissions, right, and not rely on taking carbon out of the atmosphere,” she said.
Marine carbon dioxide removal
So far, most carbon removal efforts have centered on land-based strategies, such as planting trees, restoring soils, and building machines that capture carbon dioxide directly from the air. Increasingly, researchers are exploring whether the oceans might help.
“Looking at the oceans makes a lot of sense when it comes to carbon removal, because the oceans sequester 70 times more CO2 than terrestrial sources,” Burns said. What if they could hold even more?
That question is drawing growing attention, not only from scientists. In recent years, a wave of private companies have started piloting various methods of removing carbon from the oceans.
“It’s really the private sector that’s pushing the scaling of this very quickly,” Subhas said. In the US and Canada, he said, there are at least four companies piloting varied ocean alkalinity enhancement techniques.
Last year, Ebb Carbon, a California-based startup focused on marine carbon dioxide removal, signed a deal with Microsoft to remove up to 350,000 metric tons of CO2 over the next decade using an ocean alkalinity enhancement process that splits seawater into acidic and alkaline streams. The alkaline stream is then returned to the sea where it reacts with CO2 and stores it as bicarbonate, enabling the ocean to absorb more carbon dioxide from the atmosphere. In return, Microsoft will purchase carbon removal credits from the startup.
Another company called Vesta, which has headquarters in San Francisco, is using an approach called Coastal Carbon Capture. This involves adding finely ground olivine—a naturally occurring olive-green mineral—to sandy beaches. From there, ocean tides and waves carry it into the sea. Olivine reacts quickly with seawater in a process known as enhanced weathering, increasing ocean alkalinity. The company piloted one of its projects in Duck, North Carolina, last year, where it estimated that approximately 5,000 metric tons of carbon dioxide would be removed through coastal carbon capture after accounting for project emissions, according to its website.
But these efforts are not without risk, AU’s Burns said. “We have to proceed in an extremely precautionary manner,” he said.
Some scientists are concerned that OAE initiatives that involve olivine, which contains heavy metals like nickel and chromium, may harm marine life, he said. Another concern is that the olivine could cloud certain ocean areas and block light from penetrating to deeper depths. If too much alkalinity is introduced too fast in concentrated areas, he said, some animals might not be able to adjust.
Other marine carbon dioxide removal projects are using other methods besides OAE. Some involve adding iron to the ocean to stimulate growth in microscopic plants called phytoplankton, which absorb carbon dioxide through photosynthesis. Others include the cultivation of large-scale farms of kelp and seaweed, which also absorb carbon dioxide through photosynthesis. The marine plants can then be sunk in the deep ocean to store the carbon they absorbed.
In 2023, researchers from Woods Hole Oceanographic Institution conducted their first OAE-related field experiment from the 90-foot research vessel R/V Connecticut south of Massachusetts. As part of this first experiment, nontoxic water tracer dye was released into the ocean. Researchers tracked its movement through the water for 72 hours to model the dispersion of a plume of alkalinity over time.
Credit: Woods Hole Oceanographic Institution
One technique that has not yet been tried, but may be piloted in the future, according to the science-based conservation nonprofit Ocean Visions, would employ new technology to accelerate the ocean’s natural process of transferring surface water and carbon to the deep ocean. That’s called artificial downwelling. In a reverse process—artificial upwelling—cooler, nutrient-rich waters from the deep ocean would be pumped to the surface to spur phytoplankton growth.
So far, UC San Diego’s Levin said she is not convinced that these trials will lead to impactful carbon removal.
“I do not think the ocean is ever going to be a really large part of that solution,” she said. However, she added, “It might be part of the storage solution. Right now, people are looking at injecting carbon dioxide that’s removed from industry activities on land and transporting it to the ocean and injecting it into basalt.”
Levin said she’s also worried that we don’t know enough yet about the consequences of altering natural ocean processes.
“I am concerned about how many field trials would be required to actually understand what would happen, and whether we could truly understand the environmental risk of a fully scaled-up operation,” she said.
The experiment
Most marine carbon dioxide removal projects already underway are significantly larger in scale than the LOC-NESS experiment, which Subhas estimates will remove around 50 tons of CO2.
But, he emphasized, the goal of this project is not to compete in size or scale. He said the aim is to provide independent academic research that can help guide and inform the future of this industry and ensure it does not have negative repercussions on the marine environment.
There is some concern, he said, that commercial entities may pursue large-scale OAE initiatives to capitalize on the growing voluntary carbon market without first conducting adequate testing for safety and efficacy. Unlike those initiatives, there is no profit to be made from LOC-NESS. No carbon credits will be sold, Subhas said.
The project is funded by a collection of government and philanthropic sources, including the National Oceanic and Atmospheric Administration and the Carbon to Sea Initiative, a nonprofit that brings funders and scientists together to support marine carbon dioxide removal research and technology.
“We really feel like it’s necessary for the scientific community to be delivering transparent, trusted, and rigorous science to evaluate these things as these activities are currently happening and scaling in the ocean by the private sector,” Subhas said.
The LOC-NESS field trial in Wilkinson Basin will be the first “academic only” OAE experiment conducted from a ship in US waters. It is also the first of its kind to receive a permit from the Environmental Protection Agency under the Marine Protection, Research, and Sanctuaries Act.
“There’s no research in the past or planned that gets even close to providing a learning opportunity that this research is providing for OAE in the pelagic environment,” said Carbon to Sea Initiative’s Antonius Gagern, referring to the open sea experiment.
The permit was granted in April after a year of consultations between the EPA and other federal agencies.
During the process’ public comment periods, commenters expressed concerns about the potential impact on marine life, including the critically endangered North Atlantic right whales, small crustaceans that they eat called copepods, and larvae for the commercially important squid and mackerel fisheries. In a written response to some of these comments, the EPA stated that the small-scale project “demonstrates scientific rigor” and is “not expected to significantly affect human health, the marine environment, or other uses of the ocean.”
Subhas and his interdisciplinary team of chemists, biologists, engineers, and physicists from Woods Hole have spent the last few years planning this experiment and conducting a series of trials at their lab on Cape Cod to ensure they can safely execute and effectively monitor the results of the open-water test they will conduct this summer in the Gulf of Maine.
They specifically tested the effects of sodium hydroxide—an alkaline substance also known as lye or caustic soda—on marine microbes, phytoplankton, and copepods, a crucial food source for many marine species in the region in addition to the right whales. “We chose sodium hydroxide because it’s incredibly pure,” Subhas said. It’s widely used in the US to reduce acidity in drinking water.
It also helps counter ocean acidification, according to Subhas. “It’s like Tums for the ocean,” he said.
Ocean acidification occurs when the ocean absorbs excess carbon dioxide, causing its pH to drop. This makes it harder for corals, krill, and shellfish like oysters and clams to develop their hard calcium carbonate shells or skeletons.
This month, the team plans to release 50 tons of sodium hydroxide into a designated area of the Wilkinson Basin from the back of one of two research vessels participating in the LOC-NESS operation.
The basin is an ideal test site, according to Subhas, because there is little presence of phytoplankton, zooplankton, commercial fish larvae, and endangered species, including some whales, during this season. Still, as a precautionary measure, Woods Hole has contracted a protected species observer to keep a lookout for marine species and mitigate potential harm if they are spotted. That person will be on board as the vessel travels to and from the field trial site, including while the team releases the sodium hydroxide into the ocean.
The alkaline substance will be dispersed over four to 12 hours off the back of one of the research vessels, along with the nontoxic fluorescent red water tracer dye called rhodamine. The dye will help track the location and spread of the sodium hydroxide once released into the ocean, and the vessel’s wake will help mix the solution in with the ocean water.
After about an hour, Subhas said, it will form into a “pinkish” patch of water that can be picked up on satellites. “We’re going to be taking pictures from space and looking at how this patch sort of evolves, dilutes, and stretches and disperses over time.”
For a week after that, scientists aboard the vessels will take rotating shifts to collect data around the clock. They will deploy drones and analyze over 20 types of samples from the research vessel to monitor how the surrounding waters and marine life respond to the experiment. They’ll track changes in ocean chemistry, nutrient levels, plankton populations, and water clarity, while also measuring acidity and dissolved CO2.
In March, the team did a large-scale dry run of the dispersal at an open air testing facility on a naval base in New Jersey. According to Subhas, the trial demonstrated their ability to safely and effectively deliver alkalinity to surface seawater.
“The next step is being able to measure the carbon uptake from seawater—from the atmosphere into seawater,” he said. That is a slower process. He said he expects to have some preliminary results on carbon uptake, as well as environmental impacts, early next year.
A recent conference sees doubts raised about the age of the oldest signs of life.
Where the microbe bodies are buried: metamorphosed sediments in Labrador, Canada containing microscopic traces of carbon. Credit: Martin Whitehouse
The question of when life began on Earth is as old as human culture.
“It’s one of these fundamental human questions: When did life appear on Earth?” said Professor Martin Whitehouse of the Swedish Museum of Natural History.
Researchers had claimed that carbon traces in rocks from Labrador, Canada, were the remains of life from 3.95 billion years ago, which would make them the oldest known on Earth; not everyone was convinced. Whitehouse was among those skeptics. This July, he presented new evidence at the Goldschmidt Conference in Prague that the carbon in question is only between 2.7 and 2.8 billion years old, making it younger than other traces of life found elsewhere.
Organic carbon?
The carbon in question is in rock in Labrador, Canada. The rock was originally silt on the seafloor that, it’s argued, hosted early microbial life that was buried by more silt, leaving the carbon as their remains. The pressure and heat of deep burial and tectonic events over eons have transformed the silt into a hard metamorphic rock, and the microbial carbon in it has metamorphosed into graphite.
“They are very tiny, little graphite bits,” said Whitehouse.
The key to showing that this graphite was originally biological versus geological is its carbon isotope ratio. From life’s earliest days, its enzymes have preferred the slightly lighter isotope carbon-12 over the marginally heavier carbon-13. Organic carbon is therefore much richer in carbon-12 than geological carbon, and the Labrador graphite does indeed have this “light” biological isotope signature.
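Geochemists conventionally report this signature in delta notation relative to the VPDB reference standard. The sketch below shows that standard formula with illustrative ratios, not measurements from this study:

```python
R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta13C(r_sample: float) -> float:
    """delta-13C in per mil relative to VPDB; negative means enriched in 12C."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Enzymes prefer carbon-12, so organic carbon is isotopically "light",
# typically tens of per mil below the standard:
light = delta13C(0.01095)  # strongly negative (biological-looking)
heavy = delta13C(0.0112)   # much closer to zero (more like inorganic carbon)
```

A graphite sample whose delta-13C sits tens of per mil below zero is the kind of "light" signature the Labrador rocks show.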
The key question, however, is its true age.
Mixed-up, muddled-up, shook-up rocks
Sorting out the age of the carbon-containing Labrador rock is a geological can of worms.
These are some of the oldest rocks on the planet—they’ve been heated, squished, melted, and faulted multiple times as Earth went through the growth, collision, and breakup of continents before being worn down by ice and exposed today.
“That rock itself is unbelievably complicated,” said Whitehouse. “It’s been through multiple phases of deformation.”
In general, sediments can be dated directly only if they contain a layer of volcanic ash or distinctive fossils. Neither is available in these Labrador rocks.
“The rock itself is not directly dateable,” said Whitehouse, “so then you fall onto the next best thing, which is you want to look for a classic field geology cross-cutting relationship of something that is younger and something that you can date.”
The idea, which is as old as the science of geology itself, is to bracket the age of the sediment by finding a rock formation that cuts across it. Logically, the cross-cutting rock is younger than the sediment it cuts across.
In this case, the carbon-containing metamorphosed siltstone is surrounded by swirly, gray banded gneiss, but the boundary between the siltstone and the gneiss is parallel rather than cross-cutting, so there’s no such relationship to use.
Professor Tsuyoshi Komiya of The University of Tokyo was a coauthor on the paper reporting the 3.95-billion-year age. His team used a cross-cutting rock they found at a different location and extrapolated that to the carbon-bearing siltstone to constrain its age. “It was discovered that the gneiss was intruded into supracrustal rocks (mafic and sedimentary rocks),” said Komiya in an email to Ars Technica.
But Whitehouse disputes that inference between the different outcrops.
“You’re reliant upon making these very long-distance assumptions and correlations to try to date something that might actually not have anything to do with what you think you’re dating,” he said.
Professor Jonathan O’Neil of the University of Ottawa, who was not involved in either Whitehouse’s or Komiya’s studies but who has visited the outcrops in question, agrees with Whitehouse. “I remember I was not convinced either by these cross-cutting relationships,” he told Ars. “It’s not clear to me that one is necessarily older than the other.”
With the field geology evidence disputed, the other pillar holding up the 3.95-billion-year-old date is its radiometric date, measured in zircon crystals extracted from the rocks surrounding the metamorphosed siltstone.
The zircon keeps the score
Geologists use the mineral zircon to date rocks because when it crystallizes, it incorporates uranium but not lead. So as radioactive uranium slowly decays into lead, the ratio of uranium to lead provides the age of the crystal.
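That ratio-to-age relationship can be written down directly. Below is a sketch of the textbook uranium-238 age equation; the ratio used is illustrative, not a measurement from these rocks:

```python
import math

LAMBDA_238U = 1.55125e-10  # decay constant of uranium-238, per year

def u_pb_age_years(pb206_over_u238: float) -> float:
    """Age from the radiogenic 206Pb/238U ratio: N_Pb / N_U = exp(lambda*t) - 1."""
    return math.log(1.0 + pb206_over_u238) / LAMBDA_238U

# A zircon domain with a 206Pb/238U ratio of about 0.823 works out to
# roughly 3.87 billion years, the age Whitehouse and O'Neil measured
# in the gneiss surrounding the siltstone:
age_ga = u_pb_age_years(0.823) / 1e9
```

The exponential form is why small measurement errors in the ratio translate into modest, quantifiable errors in the age.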
But the trouble with any date obtained from rocks as complicated as these is knowing exactly what geological event it dates—the number alone means little without the context of all the other geological evidence for the events that affected the area.
Both Whitehouse and O’Neil have independently sampled and dated the same rocks as Komiya’s team, and where Komiya’s team got a date of 3.95 billion years, Whitehouse’s and O’Neil’s new dates are both around 3.87 billion years. Importantly, O’Neil’s and Whitehouse’s dates are far more precise, with errors of around plus-or-minus 5 or 6 million years, which is remarkable for rocks this old. The 3.95-billion-year date had an error around 10 times bigger. “It’s a large error,” said O’Neil.
But there’s a more important question: How is that date related to the age of the organic carbon? The rocks have been through many events that could each have “set” the dates in the zircons. That’s because zircons can survive multiple re-heatings and even partial remelting, with each new event adding a new layer, or “zone,” on the outer surface of the crystal, recording the age of that event.
“This rock has seen all the events, and the zircon in it has responded to all of these events in a way that, when you go in with a very small-scale ion beam to do the sampling on these different zones, you can pick apart the geological history,” Whitehouse said.
Whitehouse’s team zapped tiny spots on the zircons with a beam of negatively charged oxygen ions to dislodge ions from the crystals, then sucked away these ions into a mass spectrometer to measure the uranium-lead ratio, and thus the dates. The tiny beam and relatively small error have allowed Whitehouse to document the events that these rocks have been through.
“Having our own zircon means we’ve been able to go in and look in more detail at the internal structure in the zircon,” said Whitehouse. “Where we might have a core that’s 3.87, we’ll have a rim that is 2.7 billion years, and that rim, morphologically, looks like an igneous zircon.”
That igneous outer rim of Whitehouse’s zircons shows that it formed in partially molten rock that would have flowed at that time. That flow was probably what brought it next to the carbon-containing sediments. Its date of 2.7 billion years ago means the carbon in the sediments could be any age older than that.
That’s a key difference from Komiya’s work. He argues that the older dates in the cores of the zircons are the true age of the cross-cutting rock. “Even the igneous zircons must have been affected by the tectonothermal event; therefore, the obtained age is the minimum age, and the true age is older,” said Komiya. “The fact that young zircons were found does not negate our research.”
But Whitehouse contends that the old cores of the zircons instead record a time when the original rock formed, long before it became a gneiss and flowed next to the carbon-bearing sediments.
Zombie crystals
Zircon’s resilience means it can survive being eroded from the rock where it formed and then deposited in a new, sedimentary rock as the undead remnants of an older, now-vanished landscape.
The carbon-containing siltstone contains zombie zircons, and Whitehouse presented new data on them to the Goldschmidt Conference, dating them to 2.8 billion years ago. Whitehouse argues that these crystals formed in an igneous rock 2.8 billion years ago and then were eroded, washed into the sea, and settled in the silt. So the siltstone must be no older than 2.8 billion years old, he said.
“You cannot deposit a zircon that is not formed yet,” O’Neil explained.
Tiny recorders of history – ancient zircon crystals from Labrador. Left shows layers built up as the zircon went through many heating events. Right shows a zircon with a prism-like outer shape showing that it formed in igneous conditions around an earlier zircon. Circles indicate where an ion beam was used to measure dates. Credit: Martin Whitehouse
This 2.8-billion-year age, along with the igneous zircon age of 2.7 billion years, brackets the age of the organic carbon to anywhere between 2.8 and 2.7 billion years old. That’s much younger than Komiya’s date of 3.95 billion years old.
Komiya disagrees: “I think that the estimated age is minimum age because zircons suffered from many thermal events, so that they were rejuvenated,” he said. In other words, the 2.8-billion-year age again reflects later heating, and the true date is given by the oldest-dated zircons in the siltstone.
But Whitehouse presented a third line of evidence to dispute the 3.95-billion-year date: isotopes of hafnium in the same zombie zircon crystals.
The technique relies on the radioactive decay of lutetium-176 to hafnium-176. If the 2.8-billion-year age resulted from rejuvenation by later heating, the zircons would have had to form from material with a hafnium isotope ratio incompatible with the isotope composition of the early Earth.
“They go to impossible numbers,” said Whitehouse.
The only way that the uranium-lead ratio can be compatible with the hafnium in the zircons, Whitehouse argued, is if the zircons that settled in the silt had crystallized around 2.8 billion years ago, constraining the organic carbon to being no older than that.
The new oldest remains of life on Earth, for now
If the Labrador carbon is no longer the oldest trace of life on Earth, then where are the oldest remains of life now?
For Whitehouse, it’s in the 3.77-billion-year-old Isua Greenstone Belt in Greenland: “I’m willing to believe that’s a well-documented age… that’s what I think is the best evidence for the oldest biogenicity that we have,” said Whitehouse.
O’Neil recently co-authored a paper on Earth’s oldest surviving crustal rocks, located next to Hudson Bay in Canada. He points there. “I would say it’s in the Nuvvuagittuq Greenstone Belt,” said O’Neil, “because I would argue that these rocks are 4.3 billion years old. Again, not everybody agrees!” Intriguingly, the rocks he is referring to contain carbon with a possibly biological origin and are thought to be the remains of the kind of undersea vent where life could well have first emerged.
But the bigger picture is the fact that we have credible traces of life of this vintage—be it 3.8 or 3.9 or 4.3 billion years.
O’Neil thinks that once conditions on Earth were habitable, life would have emerged relatively fast: “To me, it’s not shocking, because the conditions were the same,” he said. “The Earth has the luxury of time… but biology is very quick. So if all the conditions were there by 4.3 billion years old, why would biology wait 500 million years to start?”
Howard Lee is a freelance science writer focusing on the evolution of planet Earth through deep time. He earned a B.Sc. in geology and M.Sc. in remote sensing, both from the University of London, UK.
It was tested for its ability to adhere to the inside of the digestive tract.
Most adhesives can’t stick to wet surfaces because water and other fluids disrupt the adhesive’s bonding mechanisms. This problem, though, has been beautifully solved by evolution in remora suckerfish, which use an adhesive disk on top of their heads to attach to animals like dolphins, sharks, and even manta rays.
A team of MIT scientists has now taken a close look at these remora disks and reverse-engineered them. “Basically, we looked at nature for inspiration,” says Giovanni Traverso, a professor in MIT’s Department of Mechanical Engineering and senior author of the study.
Sticking variety
Remora adhesive disks are an evolutionary adaptation of the fish’s first dorsal fin, the one that in other species sits on top of the body, just behind the head and gill covers. The disk rests on an intercalary backbone—a bone structure that most likely evolved from parts of the spine. This bony structure supports lamellae, specialized bony plates with tiny backward-facing spikes called spinules. The entire disk is covered with soft tissue compartments that are open at the top. “This makes the remora fish adhere very securely to soft-bodied, fast-moving marine hosts,” Traverso says.
A remora attaches to the host by pressing itself against the skin, which pushes the water out of these compartments, creating a low-pressure zone. Then, the spinules mechanically interlock with the host’s surface, making the whole thing work a bit like a combination of a suction cup and Velcro. When the fish wants to detach from a host, it lifts the disk, letting water back into the compartments to remove the suction. Once released, it can simply swim away.
What impressed the scientists the most, though, was the versatility of those disks. Reef-associated species of remora like Phtheirichthys lineatus are generalists and stick to various hosts, including other fish, sharks, or turtles. Other species living in the open sea are more specialized and attach to cetaceans, swordfish, or marlins. While most remoras attach to the external tissue of their hosts, R. albescens sticks within the oral cavities and gill chamber of manta rays.
A close-up of the adhesive pad of a remora. Credit: Stephen Frink
To learn what makes all these different disks so good at sticking underwater, the team first examined their anatomy in detail. It turned out that the difference between the disks was mostly in the positioning of lamellae. Generalist species have a mix of parallel and angled lamellae, while remoras sticking to fast-swimming hosts have them mostly parallel. R. albescens, on the other hand, doesn’t have a dominant lamellae orientation pattern but has them positioned at a very wide variety of angles.
The researchers wanted to make an adhesive device that would work for a wide range of applications, including maritime exploration and underwater manufacturing. Their initial goal, though, was designing a drug delivery platform that could reliably stick to the inside walls of the gastrointestinal tract. So they chose R. albescens disks as their starting point, since that species already attaches internally to its host. They termed their device a Mechanical Underwater Soft Adhesion System (MUSAS).
However, they didn’t just opt for a biomimetic, copy-and-paste design. “There were things we did differently,” Traverso says.
Upgrading nature
The first key difference was deployment. MUSAS was supposed to travel down the GI tract to reach its destination, so the first challenge was making it fit into a pill. The team chose the size 000 capsule, which, at 26 millimeters in length and 9.5 millimeters in diameter, is the largest Food and Drug Administration-approved ingestible form. MUSAS had a supporting structure—just like remora disks, but made with stainless steel. The angled lamellae with spinules fashioned after those on R. albescens were made of a shape memory nickel-titanium alloy. The role of the remora’s soft tissues, which provide the suction by dividing the disk into compartments, was played by an elastomer.
MUSAS would be swallowed in a folded form within its huge pill. “The capsule is tuned to dissolve in specific pH environment, which is how we determine the target location—for example the small intestine has a slightly different pH than the stomach,” says Ziliang Kang, an MIT researcher in Traverso’s group and lead author of the study. Once released, the shape memory alloy in MUSAS’ lamellae-like structures would unfold in response to body temperature, and the whole thing would stick to the wall of the target organ, be it the esophagus, the stomach, or the intestines.
The mechanism of sticking was also a bit different from that of remoras. “The fish can swim and actively press itself against the surface it wants to stick to. MUSAS can’t do that, so instead we relied on the peristaltic movements within the GI tract to exert the necessary force,” Traverso explains. When the muscles contract, MUSAS would be pressed against the wall and attach to it. And it was expected to stay there for quite some time.
The team ran a series of experiments to evaluate MUSAS performance in a few different scenarios. The drug-delivery platform application was tested on pig organ samples. MUSAS stayed in the sample GI tract for an average of nine days, with the longest sticking time reaching three and a half weeks. MUSAS managed to stay in place despite food and fluids going through the samples.
Even when the team poked the devices with a pipette to test what they called “resisting dynamic interference,” MUSAS just slid a little but remained firmly attached. Other experiments included using MUSAS to attach temperature sensors to external tissues of live fish and putting sensors that could detect reflux events in the GI tract of live pigs.
Branching out
The team is working on making MUSAS compatible with a wider range of drugs and mRNA vaccines. “We also think about using this for stimulating tissues,” Traverso says. The solution he has in mind would use MUSAS to deliver electrical pulses to the walls of the GI tract, which Traverso’s lab has shown can activate appetite-regulating hormones. But the team also wants to go beyond strictly medical applications.
The team demonstrated that MUSAS is really strong as an adhesive. When it sticks to a surface, it can hold a weight over a thousand times greater than its own. This puts MUSAS more or less on par with some of the best adhesives we have, such as polyurethane glues or epoxy resins. What’s more, this sticking strength was measured when MUSAS was attached to soft, uneven, wet surfaces. “On a rigid, even surface, the force-to-weight ratio should be even higher,” Kang claims. And this, Kang thinks, makes scaled-up variants of MUSAS a good match for underwater manufacturing.
“The first scenario I see is using MUSAS as grippers attached to robotic arms moving around soft objects,” Kang explains. Currently, this is done using vacuum systems that simply suck onto a fabric or other surface. The problem is that these solutions are rather complex and heavy. Scaled-up MUSAS should be able to achieve the same thing passively, cutting cost and weight. The second idea Kang has is using MUSAS in robots designed to perform maintenance jobs beneath the waterline on boats or ships. “We are really trying to see what is possible,” Traverso says.
Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
“And then you have the Spinosaurus which was kind of weird in general,” Rowe says. “There was a study by Dave Hone and Tom Holtz about how it was waiting on the shorelines, waiting for food to go by that it could fish out.” But Spinosaurus’ foraging wasn’t limited to fishing. A pterosaur was found preserved in its stomach, and Iguanodon remains were found in the maw of a Baryonyx, another large carnivore belonging to the same lineage as Spinosaurus. “They had great diversity in their diet. They were generalists, but our results show they weren’t these massive bone-crunching predators like the T. rex,” Rowe says. Because the T. rex was just built different.
King of the Cretaceous jungle
The Tyrannosauroidea lineage had stiff, akinetic skulls, meaning they had very little mobility in the joints. The T. rex skull could and most likely did withstand very high stress as the animal pursued a “high stress, high power” strategy, entirely different from other large carnivores. “They were very much like big crocodiles with extremely strong, reinforced jaws and powerful muscles that could pulverize bones,” Rowe claims.
The T. rex, he argued, was a specialist—an ambush predator that attacked large, highly mobile prey, aiming to subdue it with a single bite. “And we have fossil evidence of that,” Rowe says. “In the Museum of Natural History in New York, there is a hadrosaur, a large herbivorous dinosaur with a duck-like beak, and there’s a T. rex tooth embedded in its back.” This, he thinks, means the T. rex was actively preying on this animal, especially since there are healing marks around the stuck tooth. “Even with this super strong bite, the T. rex wasn’t always successful,” Rowe adds.
Still, the fight with the Spinosaurus most likely wouldn’t go the way it did in Jurassic Park III. “The T. rex was built to fight like that; the Spinosaurus really wasn’t,” Rowe says.
As the flies’ host and geographic ranges expand, pressure is intensifying to control the flies—something many countries have managed to do in the past.
Decades ago, screwworms were endemic throughout Central America and the southern US. However, governments across the region used intensive, coordinated control efforts to push the flies southward. Screwworms were eliminated from the US around 1966 and were pushed down through Mexico in the 1970s and 1980s. They were eventually declared eliminated from Panama in 2006, with the population held at bay by a biological barrier at the Darién Gap, on the border of Panama and Colombia. In 2022, however, the barrier was breached, and the flies began advancing northward, primarily through unmonitored livestock movements. The latest surveillance suggests the flies are now about 370 miles south of Texas.
The main method to wipe out screwworms is the sterile insect technique (SIT), which exploits a weakness in the fly’s life cycle: females tend to mate only once. In the 1950s, researchers at the US Department of Agriculture figured out they could use gamma radiation to sterilize male flies without affecting their ability to find mates. They then bred massive numbers of male flies, sterilized them, and carpet-bombed infested areas with aerial releases, which tanked the population.
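Why the releases "tank the population" follows from a simple model of the kind Knipling used when SIT was first proposed. The numbers below are illustrative, not USDA figures, and the model makes his classic simplifying assumption that, absent the releases, births would just balance deaths:

```python
# Minimal sketch of the classic sterile-insect-technique model.
# A female mates once; if her mate is sterile, she leaves no offspring.
# With S sterile males released among N wild males (random mate choice),
# only N / (N + S) of matings are fertile.

def next_generation(wild, sterile_released, growth_rate):
    """Wild population after one generation of constant sterile releases."""
    fertile_fraction = wild / (wild + sterile_released)
    return wild * growth_rate * fertile_fraction

wild = 1_000_000       # assumed wild population
releases = 2_000_000   # sterile males released every generation
growth = 1.0           # births balance deaths absent control (Knipling's assumption)

# Because the releases stay constant while the wild population shrinks,
# the sterile-to-wild ratio climbs each generation and the crash accelerates.
for gen in range(1, 6):
    wild = next_generation(wild, releases, growth)
    print(f"generation {gen}: wild population ~ {wild:,.0f}")
```

Under these assumptions the population falls from a million to effectively zero within about five generations, which is why constant aerial releases along a fixed front can hold a barrier like the one at the Darién Gap.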
Panama, in partnership with the US, maintained the biological barrier at the Colombian border with continual sterile-fly bombings for years. But as the flies approached this year, the USDA shifted its aerial deliveries to Mexico. In June, the USDA announced plans to set up a new sterile fly facility in Texas for aerial deliveries to northern Mexico. And last month, the USDA halted livestock trade from southern entry points.
Miller said in the announcement today that SIT is no longer enough, and Texas is taking its own steps. Those include the new bait, insecticides, and new feed for livestock and deer laced with the anti-parasitic drug ivermectin. Miller also said that the state aims to develop a vaccine for cattle that could kill larvae, but such a shot is still in development.
The nation’s premier group of scientific advisers announced Thursday that it will conduct an independent, fast-track review of the latest climate science. It will do so with an eye to weighing in on the Trump administration’s planned repeal of the government’s 2009 determination that greenhouse gas emissions harm human health and the environment.
The move by the National Academies of Sciences, Engineering, and Medicine to self-fund the study is a departure from their typical practice of responding to requests by government agencies or Congress for advice. The Academies intend to publicly release it in September, in time to inform the Environmental Protection Agency’s decision on the so-called “endangerment finding,” they said in a prepared statement.
“It is critical that federal policymaking is informed by the best available scientific evidence,” said Marcia McNutt, president of the National Academy of Sciences. “Decades of climate research and data have yielded expanded understanding of how greenhouse gases affect the climate. We are undertaking this fresh examination of the latest climate science in order to provide the most up-to-date assessment to policymakers and the public.”
The Academies are private, nonprofit institutions that operate under an 1863 congressional charter, signed by President Abraham Lincoln, directing them to provide independent, objective analysis and advice to inform public policy decisions.
The Trump administration’s move to rescind the endangerment finding, announced last month, would eliminate the legal underpinning of the most important actions the federal government has taken on climate change—regulation of carbon pollution from motor vehicles and power plants under the Clean Air Act. Since assuming his role, EPA Administrator Lee Zeldin has made clear he intends to repeal the climate rules that were put in place under the Biden administration, but his job will be far easier with the elimination of the endangerment finding.
The EPA based its proposal mainly on a narrow interpretation of the agency’s legal authority, but the agency also cited uncertainties in the science, pointing to a report published the same day by the Department of Energy that was authored by a hand-picked quintet of well-known skeptics of the mainstream consensus on climate change. The administration has given a short window of opportunity—30 days—for the public to respond to its endangerment finding proposal and to the DOE report on climate science.
The EPA did not immediately respond to a request for comment on the announcement by the National Academies. Critics of the Trump administration’s approach applauded the decision by the scientific panel.
“I think the National Academies have identified a very fundamental need that is not being met, which is the need for independent, disinterested expert advice on what the science is telling us,” said Bob Sussman, who served as deputy administrator of the EPA in the Clinton administration and was a senior adviser in the agency during the Obama administration.
Earlier Thursday, before the National Academies announcement, Sussman posted a blog post on the Environmental Law Institute’s website calling for a “blue-ribbon review” of the science around the endangerment finding. Sussman noted the review of the state of climate science that the National Academies conducted in 2001 at the request of President George W. Bush’s administration. Since then, the Academies have conducted numerous studies on aspects of climate change, including the development of a “climate-ready workforce,” how to power AI sustainably, and emerging technologies for removing carbon from the atmosphere.
The National Academies announced in 2023 that they were developing a rapid response capacity to address the many emerging scientific policy issues the nation was facing. The first project they worked on was an assessment of the state of science around diagnostics for avian influenza.
Andrew Dessler, director of the Texas Center for Extreme Weather at Texas A&M University, said the new controversy that the Trump administration had stirred around climate science was a fitting subject for a fast-track effort by the National Academies.
“The National Academies [were] established exactly to do things like this—to answer questions of scientific importance for the government,” he said. “This is what the DOE should have done all along, rather than hire five people who represent a tiny minority of the scientific community and have views that virtually nobody else agrees with.”
Dessler is leading an effort to coordinate a response from the scientific community to the DOE report, which would also be submitted to the EPA. He said that he had heard from about 70 academics eager to participate after putting out a call on the social media network Bluesky. That work will continue, he said, because it has a slightly different focus than the National Academies’ announced review: the review does not mention the DOE report, instead focusing on the scientific evidence on the harms of greenhouse gas emissions that has emerged since 2009, the year the EPA adopted the endangerment finding.
On Thursday, the Trump administration issued an executive order asserting political control over grant funding, including all federally supported research. The order requires that any announcement of funding opportunities be reviewed by the head of the agency or someone they designate, which means a political appointee will have the ultimate say over what areas of science the US funds. Individual grants will also require clearance from a political appointee and “must, where applicable, demonstrably advance the President’s policy priorities.”
The order also instructs agencies to formalize the ability to cancel previously awarded grants at any time if they’re considered to “no longer advance agency priorities.” Until a system is in place to enforce the new rules, agencies are forbidden from starting new funding programs.
In short, the new rules would mean that all federal science research would need to be approved by a political appointee who may have no expertise in the relevant areas, and the research can be canceled at any time if the political winds change. It would mark the end of a system that has enabled US scientific leadership for roughly 70 years.
We’re in control
The text of the executive order recycles prior accusations the administration has used to justify attacks on the US scientific endeavor: Too much money goes to pay for the facilities and administrative staff that universities provide researchers; grants have gone to efforts to diversify the scientific community; some studies can’t be replicated; and there have been instances of scientific fraud. Its “solution” to these problems (some of which are real), however, is greater control of the grant-making process by non-expert staff appointed by the president.
In general, the executive order inserts a layer of political control over both the announcement of new funding opportunities and the approval of individual grants. It orders the head of every agency that issues grants—meaning someone appointed by the president—to either make funding decisions themselves, or to designate another senior appointee to do it on their behalf. That individual will then exert control over whether any funding announcements or grants can move forward. Decisions will also require “continuation of existing coordination with OMB [Office of Management and Budget].” The head of OMB, Russell Vought, has been heavily involved in trying to cut science funding, including a recent attempt to block all grants made by the National Institutes of Health.
Some stone tools found near a river on the Indonesian island of Sulawesi suggest that the first hominins had reached the islands by at least 1.04 million years ago. That’s around the same time that the ancestors of the infamously diminutive “Hobbits” may have reached the island of Flores.
Archaeologist Budianto Hakim of Indonesia’s National Research and Innovation Agency and his colleagues were the ones who recently unearthed the tools from a site on Sulawesi. Although a handful of stone flakes from that island don’t tell us who the ancestors of the small species were or how they reached remote islands like Flores and Luzon, the tools are one more piece in the puzzle. And this handful of stone flakes may eventually play a role in helping us understand how other hominin species conquered most of the world long before we came along.
Crossing the ocean a million years ago
Sometimes the deep past leaves the smallest traces. At the Calio site, a sandstone outcrop in what’s now a cornfield outside the village of Ujung in southern Sulawesi, people left behind just a handful of sharp stone flakes roughly a million years ago. There are seven of them, ranging from 22 to 60 millimeters long, and they’re scratched, worn, and chipped from tumbling around at the bottom of a river. But it’s still clear that they were once shaped by skilled human—or at least human-like—hands that used hard stones as hammers to make sharp-edged chert flakes for cutting and scraping.
The oldest of these tools is likely to be between 1.04 and 1.48 million years old. Hakim and his colleagues dated teeth from a wild pig to around 1.26 million years ago; the teeth were part of a jawbone archaeologists unearthed from a layer just above the oldest flake. Throw in some statistical modeling, and you get the range of likely dates for the stone flake buried in the deepest layer of soil.
Even the younger end of that estimate would make these tools the oldest evidence yet of hominins (of any species) in the islands of Indonesia and the Philippines. This area, sometimes called Wallacea, lies between the continents of Asia and Australia, separated from both by wide channels of deep ocean.
“But the Calio site has yet to yield any hominin fossils,” said study co-author Adam Brumm, “so while we now know there were tool-makers on Sulawesi a million years ago, their identity remains a mystery.” But they may be related to the Hobbits, a short-statured group of hominins who lived hundreds of kilometers away on the island of Flores until around 50,000 years ago.
“The discovery of Early Pleistocene artifacts at Calio suggests that Sulawesi was populated by hominins at around the same time as Flores, if not earlier,” wrote Hakim and his colleagues in their recent paper.
The Flores connection
The islands that now make up Indonesia and the Philippines have been a hominin hotspot for at least a million years. Our species wandered onto the scene sometime between 63,000 and 73,000 years ago, but at least one other hominin species had already been there for at least a million years. We’re just not sure exactly who they were, when they arrived, or how.
“Precisely when hominins first crossed to Sulawesi remains an open question, as does the taxonomic affinity of the colonizing population,” the authors note.
This map shows the islands of Wallacea. The large one just east of Java is Sulawesi. Credit: Darren O’Connell
That’s why the handful of stone tools the team recently unearthed at Calio matter: They’re another piece of that puzzle, albeit a small one. Every slightly older date is one step closer to the first hominin tools, bones, or footprints in these islands, and another pin on the map of who was where and when.
And that map is accumulating quite a lot of pins, representing an ever-increasing number of species. Once the first hominins made it across the Makassar Strait, they found themselves in isolated groups on islands cut off from the mainland—and each other—so the hominin family tree started branching very quickly. On at least two islands, Flores and Luzon, those original hominin settlers eventually gave rise to local species, Homo floresiensis and Homo luzonensis. And University of Wollongong paleoanthropologist Richard Roberts, a co-discoverer of Homo floresiensis, thinks there are probably more isolated island hominin species.
In 2019, when Homo luzonensis was first described, Roberts told Ars, “These new fossils, and the assignation of them to a new species (Homo luzonensis), fulfills one of the predictions Mike Morwood and others (myself included) made when we first reported (15 years ago!) the discovery of Homo floresiensis: that other unknown species of hominins would be found in the islands of Southeast Asia.”
Both Homo floresiensis (the original “Hobbits”) and Homo luzonensis were short, clocking in at just over a meter tall. Their bones and teeth are different enough from each other to set them apart as a unique species, but they have enough in common that they probably share a common ancestor—one they don’t share with us. They’re more like our distant cousins, and the islands of Wallacea may have been home to many other such cousins, if Roberts and his colleagues are correct.
Complicated family history
But who was the common ancestor of all these hominin cousins? That’s where things get complicated (as if they weren’t already). Most paleoanthropologists lean toward Homo erectus, but there’s a chance—along with some tantalizing hints, and no direct evidence—that much more ancient human relatives called Australopithecines may have made the journey a million (or two) years before Homo erectus.
Finger and toe bones from Homo luzonensis are curved, as if they spent as much of their lives climbing trees as walking. That’s more like Australopithecines than any member of our genus Homo. But their teeth are smaller and shaped more like ours. Anthropologists call this mix of features a mosaic, and it can make it tough to figure out how hominin species are related. That’s part of why the question of when the ancestors of the Hobbits arrived on their respective islands is so important.
Compare the teeth and phalanx of Homo luzonensis to those of Homo sapiens (right) and Australopithecus afarensis (left). Credit: Tocheri 2019
We don’t know the answer yet, but we do know that someone was making stone tools on Flores by 1.02 million years ago. Those toolmakers may have been Homo erectus, Australopithecines, or something already recognizable as tiny Homo floresiensis. The Hobbits (or their ancestors) were distinctly “Hobbity” by around 700,000 years ago; fossil teeth and bones from a handful of hominins at a site called Mata Menge make that clear. The Hobbits discovered at Liang Bua Cave on Flores date to somewhere between 50,000 and 100,000 years ago.
Meanwhile, 2,800 kilometers away on the island of Luzon, the oldest stone tools, along with their obvious cut marks left behind on animal bones, date back to 700,000 years ago. That’s as old as the Mata Menge Hobbits on Flores. The oldest Homo luzonensis fossils are between 50,000 and 67,000 years old. It’s entirely possible that older evidence, of the island’s original settlers and of Homo luzonensis, may eventually be found, but until then, we’re left with a lot of blank space and a lot of questions.
And now we know that the oldest trace of hominin presence on Sulawesi is at least 1.04 million years old. But might Sulawesi have its own diminutive hominins?
So are there more Hobbits out there?
“Sulawesi is a wild card—it’s like a mini-continent in itself,” said Brumm. “If hominins were cut off on this huge and ecologically rich island for a million years, would they have undergone the same evolutionary changes as the Flores hobbits? Or would something totally different have happened?”
Reconstruction of Homo floresiensis by Atelier Elisabeth Daynes. Credit: Kinez Riza
A phenomenon called island dwarfism played a role in Homo floresiensis’ evolution; species that live in relative isolation on small islands tend to evolve into either much larger or much smaller versions of their ancestors (which is why the Hobbits shared their island home with dwarf, elephant-like Stegodon and giant storks). But how small does an island need to be before island dwarfism kicks in? Sulawesi is about 12 times as large as Flores, for example. So what might the descendants of the Calio toolmakers have looked like by 100,000 years ago?
That’s something that we’ll only know if archaeologists on Sulawesi, like Hakim and his team, find fossil remains of those hominins.
Seafarers or tsunami survivors?
Understanding exactly when hominins first set foot on the island of Sulawesi might eventually help us figure out how they got there. These islands are thousands of kilometers from the Southeast Asian mainland and from each other, so getting there would have meant crossing vast stretches of deep, open ocean.
Archaeologists haven’t found any evidence that anyone who came before our species built boats or rafts, although those watercraft would have been made of materials that tend to decay pretty quickly, so even scraps of ancient wood and rope are extremely rare and lucky finds. But some ancient hominins did have a decent grasp of all the basic skills they’d need for at least a simple raft: woodworking and rope-making.
Another possibility is that hominins living on the coast of mainland Southeast Asia could have been swept out to sea by a tsunami, and some of them could have been lucky enough to survive the misadventure and wash ashore someplace like Sulawesi, Flores, or Luzon (RIP to any others). But for that scenario to work, enough hominins would have had to reach each island to create a lasting population, and it probably had to happen more than once to end up with hominin groups on at least three distant islands.
Either way, it’s no small feat, even for a Hobbit with small feet.
A collection of new studies shows that AI tools aren’t very good at predicting gene activity.
Gene activity appears to remain beyond the abilities of AI at the moment. Credit: BSIP
Biology is an area of science where AI and machine-learning approaches have seen some spectacular successes, such as designing enzymes to digest plastics and proteins to block snake venom. But in an era of seemingly endless AI hype, it might be easy to think that we could just set AI loose on the mounds of data we’ve already generated and end up with a good understanding of most areas of biology, allowing us to skip a lot of messy experiments and the unpleasantness of research on animals.
But biology involves a whole lot more than just protein structures. And it’s extremely premature to suggest that AI can be equally effective at handling all aspects of biology. So we were intrigued to see a study comparing a set of AI software packages designed to predict how active genes will be in cells exposed to different conditions. As it turns out, the AI systems couldn’t do any better than a deliberately simplified prediction method.
The results serve as a useful caution that biology is incredibly complex, and developing AI systems that work for one aspect of it is not an indication that they can work for biology generally.
AI and gene activity
The study was conducted by a trio of researchers based in Heidelberg: Constantin Ahlmann-Eltze, Wolfgang Huber, and Simon Anders. They note that a handful of additional studies have been released while their work was on a pre-print server, all of them coming to roughly the same conclusions. But these authors’ approach is pretty easy to understand, so we’ll use it as an example.
The AI software they examined attempts to predict changes in gene activity. While every cell carries copies of the roughly 20,000 genes in the human genome, not all of them are active in a given cell—“active” in this case meaning they are producing messenger RNAs. Some provide an essential function and are active at high levels at all times. Others are only active in specific cell types, like nerves or skin. Still others are activated under specific conditions, like low oxygen or high temperatures.
Over the years, we’ve done many studies examining the activity of every gene in a given cell type under different conditions. These studies can range from using gene chips to determine which messenger RNAs are present in a population of cells to sequencing the RNAs isolated from single cells and using that data to identify which genes are active. But collectively, they can provide a broad, if incomplete, picture that links the activity of genes with different biological circumstances. It’s a picture you could potentially use to train an AI that would make predictions about gene activity under conditions that haven’t been tested.
Ahlmann-Eltze, Huber, and Anders tested a set of what are called single-cell foundation models that have been trained on this sort of gene activity data. The “single cell” portion indicates that these models have been trained on gene activity obtained from individual cells rather than a population average of a cell type. The “foundation model” portion means they have been trained on a broad range of data but require additional training before being deployed for a specific task.
Underwhelming performance
The task in this case is predicting how gene activity might change when genes are altered. When an individual gene is lost or activated, it’s possible that the only messenger RNA that is altered is the one made by that gene. But some genes encode proteins that regulate a collection of other genes, in which case you might see changes in the activity of dozens of genes. In other cases, the loss or activation of a gene could affect a cell’s metabolism, resulting in widespread alterations of gene activity.
Things get even more complicated when two genes are involved. In many cases, the genes do unrelated things, and you get a simple additive effect: the changes caused by the loss of one gene, plus the changes caused by the loss of the other. But if there’s some overlap between their functions, you can get enhancement of some changes, suppression of others, and other unexpected outcomes.
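To make the additive-versus-synergistic distinction concrete, here’s a minimal sketch with made-up numbers (the gene names and fold-change values are hypothetical, not from the study): an interaction shows up wherever the observed double-activation result deviates from the sum of the two single-gene effects.

```python
# Toy illustration (hypothetical numbers): log-fold changes in three
# downstream genes when gene A or gene B is activated on its own.
effect_a = {"gene1": 2.0, "gene2": 0.0, "gene3": -1.0}
effect_b = {"gene1": 0.0, "gene2": 1.5, "gene3": -1.0}

# The simple additive expectation for activating A and B together:
# A's changes plus B's changes, gene by gene.
additive = {g: effect_a[g] + effect_b[g] for g in effect_a}

# A hypothetical observed double-activation profile: gene3 responds far
# more strongly than the two single effects would add up to.
observed = {"gene1": 2.0, "gene2": 1.5, "gene3": -4.0}

# Genes whose observed change deviates from additivity flag an interaction
# (threshold of 0.5 log-fold change chosen arbitrarily for this toy case).
synergistic = [g for g in additive if abs(observed[g] - additive[g]) > 0.5]
print(synergistic)  # ['gene3']
```

In this toy case, gene1 and gene2 behave additively, while gene3 is the kind of synergistic response the study found the foundation models rarely predicted.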
To start exploring these effects, researchers have intentionally altered the activity of one or more genes using the CRISPR DNA editing technology, then sequenced every RNA in the cell afterward to see what sorts of changes took place. This approach (termed Perturb-seq) is useful because it can give us a sense of what the altered gene does in a cell. But for Ahlmann-Eltze, Huber, and Anders, it provides the data they need to determine if these foundation models can be trained to predict the ensuing changes in the activity of other genes.
Starting with the foundation models, the researchers conducted additional training using data from an experiment where either one or two genes were activated using CRISPR. This training used the data from 100 individual gene activations and another 62 where two genes were activated. Then, the AI packages were asked to predict the results for another 62 pairs of genes that were activated. For comparison, the researchers also made predictions using two extremely simple models: one that always predicted that nothing would change and a second that always predicted an additive effect (meaning that activating genes A and B would produce the changes caused by activating A plus the changes caused by activating B).
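The two simple comparison models are easy to state in code. This is a hedged sketch on simulated data, not the authors’ actual pipeline: gene names, values, and the use of mean squared error as the accuracy score are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (hypothetical): observed log-fold expression changes for
# two single-gene activations, and for activating both genes together.
n_genes = 1000
single_a = rng.normal(0.0, 1.0, n_genes)
single_b = rng.normal(0.0, 1.0, n_genes)
# Here the double activation is built to be nearly additive, plus noise.
observed_double = single_a + single_b + rng.normal(0.0, 0.2, n_genes)

# Baseline 1: always predict that nothing changes.
pred_no_change = np.zeros(n_genes)

# Baseline 2: always predict a purely additive effect.
pred_additive = single_a + single_b

def prediction_error(pred, obs):
    """Mean squared error across all genes -- one way to score a prediction."""
    return float(np.mean((pred - obs) ** 2))

err_none = prediction_error(pred_no_change, observed_double)
err_add = prediction_error(pred_additive, observed_double)
print(f"no-change baseline error: {err_none:.2f}")
print(f"additive baseline error:  {err_add:.2f}")
```

On this simulated near-additive data, the additive baseline wins easily; the study’s striking result is that the trained foundation models had prediction errors substantially higher than even this baseline.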
They didn’t work. “All models had a prediction error substantially higher than the additive baseline,” the researchers concluded. The result held when the researchers used alternative measurements of the accuracy of the AI’s predictions.
The gist of the problem seemed to be that the trained foundation models weren’t very good at predicting when the alterations of pairs of genes would produce complex patterns of changes—when the alteration of one gene synergized with the alteration of a second. “The deep learning models rarely predicted synergistic interactions, and it was even rarer that those predictions were correct,” the researchers concluded. In a separate test that looked specifically at these synergies between genes, it turned out that none of the models were better than the simplified system that always predicted no changes.
Not there yet
The overall conclusions from the work are pretty clear. “As our deliberately simple baselines are incapable of representing realistic biological complexity yet were not outperformed by the foundation models,” the researchers write, “we conclude that the latter’s goal of providing a generalizable representation of cellular states and predicting the outcome of not-yet-performed experiments is still elusive.”
It’s important to emphasize that “still elusive” doesn’t mean we’re incapable of ever developing an AI that can help with this problem. It also doesn’t mean that this applies to all cellular states (the results are specific to gene activity), much less all of biology. At the same time, the work provides a valuable caution at a time when there’s a lot of enthusiasm for the idea that AI’s success in a couple of areas means we’re on the cusp of a world where it can be applied to anything.
John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.