Science

10,000 generations of hominins used the same stone tools to weather a changing world

“This site reveals an extraordinary story of cultural continuity,” said Braun in a recent press release.

When the going gets tough, the tough make tools

Nomorotukunan’s layers of stone tools span the transition from the Pliocene to the Pleistocene, during which Earth’s climate turned gradually cooler and drier after a 2 to 3 million-year warm spell. Pollen and other microscopic traces of plants in the sediment at Nomorotukunan tell the tale: the lakeshore marsh gradually dried up, giving way to arid grassland dotted with shrubs. On a shorter timescale, hominins at Nomorotukunan faced wildfires (based on microcharcoal in the sediments), droughts, and rivers drying up or changing course.

“As vegetation shifted, the toolmaking remained steady,” said National Museums of Kenya archaeologist Rahab N. Kinyanjui in a recent press release. “This is resilience.”

Making sharp stone tools may have helped generations of hominins survive their changing, drying world. In the warm, humid Pliocene, finding food would have been relatively easy, but as conditions got tougher, hominins probably had to scavenge or dig for their meals. At least one animal bone at Nomorotukunan bears cut marks where long-ago hominins carved up the carcass for meat—something our lineage isn’t really equipped to do with its bare hands and teeth. Tools also would have enabled early hominins to dig up and cut tubers or roots.

It’s fair to assume that sharpened wood sticks probably also played a role in that particular work, but wood doesn’t tend to last as long as stone in the archaeological record, so we can’t say for sure. What is certain are the stone tools and cut bones, which hint at what Utrecht University archaeologist Dan Rolier, a coauthor of the paper, calls “one of our oldest habits: using technology to steady ourselves against change.”

A tale as old as time

Nomorotukunan may hint that Oldowan technology is even older than the earliest tools archaeologists have unearthed so far. The oldest tools unearthed from the deepest layer at Nomorotukunan are the work of skilled flint-knappers who understood where to strike a stone, and at exactly which angle, to flake off the right shape. They also clearly knew how to select the right stones for the job (fine-grained chalcedony for the win, in this case). In other words, these tools weren’t the work of a bunch of hominins who were just figuring out, for the first time, how to bang the rocks together.

Rocket Report: Canada invests in sovereign launch; India flexes rocket muscles


Europe’s Ariane 6 rocket gave an environmental monitoring satellite a perfect ride to space.

Rahul Goel, the CEO of Canadian launch startup NordSpace, poses with a suborbital demo rocket and members of his team in Toronto earlier this year. Credit: Andrew Francis Wallace/Toronto Star via Getty Images

Welcome to Edition 8.18 of the Rocket Report! NASA is getting a heck of a deal from Blue Origin for launching the agency’s ESCAPADE mission to Mars. Blue Origin is charging NASA about $20 million for the launch on the company’s heavy-lift New Glenn rocket. A dedicated ride on any other rocket capable of the job would undoubtedly cost more.

But there are trade-offs. First, there’s the question of risk. The New Glenn rocket is only making its second flight, and it hasn’t been certified by NASA or the US Space Force. Second, the schedule for ESCAPADE’s launch has been at the whim of Blue Origin, which has delayed the mission several times due to issues developing New Glenn. NASA’s interplanetary missions typically have a fixed launch period, and the agency pays providers like SpaceX and United Launch Alliance a premium to ensure the launch happens when it needs to happen.

New Glenn is ready, the satellites are ready, and Blue Origin has set a launch date for Sunday, November 9. The mission will depart Earth outside of the usual interplanetary launch window, so orbital dynamics wizards came up with a unique trajectory that will get the satellites to Mars in 2027.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Canadian government backs launcher development. The federal budget released by the Liberal Party-led government of Canada this week includes a raft of new defense initiatives, among them 182.6 million Canadian dollars ($129.4 million) to “establish a sovereign space launch capability,” SpaceQ reports. The funds become available this fiscal year and will be spent over three years, although the government has not yet said how, or on what, the money will be spent. As anticipated, Canada will have a new Defence Investment Agency (DIA) to oversee defense procurement. Overall, the government outlined 81.8 billion Canadian dollars ($58 billion) over five years for the Canadian Armed Forces. The Department of National Defence will manage the cash infusion for sovereign launch capability.

Kick-starting an industry … Canada joins a growing list of nations pursuing homegrown launchers as many governments see access to space as key to national security and an opportunity for economic growth. International governments don’t want to be beholden to a small number of foreign launch providers from established space powers. That’s why startups in Germany, the United Kingdom, South Korea, and Australia are making a play in the launch arena, often with government support. A handful of Canadian startups, such as Maritime Launch Services, Reaction Dynamics, and NordSpace, are working on commercial satellite launchers. The Canadian government’s announcement came days after MDA Space, the largest established space company in Canada, announced its own multimillion-dollar investment in Maritime Launch Services.

Money alone won’t solve Europe’s space access woes. Increasing tensions with Russia have prompted defense spending boosts throughout Europe that will benefit fledgling smallsat launcher companies across the continent. But Europe is still years away from meeting its own space access needs, Space News reports. Space News spoke with industry analysts from two European consulting firms. They concluded that a lack of experience, not a deficit of money, is holding European launch startups back. None of the new crop of European rocket companies have completed a successful orbital flight.

Swimming in cash … The German company Isar Aerospace has raised approximately $600 million, the most funding of any of the European launch startups. Isar is also the only one of the bunch to make an orbital launch attempt. Its Spectrum rocket failed less than 30 seconds after liftoff last March, and a second launch is expected next year. Isar has attracted more investment than Rocket Lab, Firefly Aerospace, and Astra collectively raised on the private market before each of them successfully launched a rocket into orbit. In addition to Isar, several other European companies have raised more than $100 million on the road to developing a small satellite launcher. (submitted by EllPeaTea)

Successful ICBM test from Vandenberg. Air Force Global Strike Command tested an unarmed Minuteman III intercontinental ballistic missile in the predawn hours of Wednesday, Air and Space Forces Magazine reports. The test, the latest in a series of launches that have been carried out at regular intervals for decades, came as Russian President Vladimir Putin has touted the development of two new nuclear weapons and President Donald Trump has suggested in recent days that the US might resume nuclear testing. The ICBM launched from an underground silo at Vandenberg Space Force Base, California, and traveled some 4,200 miles to a test range in the Pacific Ocean after receiving launch orders from an airborne nuclear command-and-control plane.

Rehearsing for the unthinkable … The test, known as Glory Trip 254 (GT 254), provided a “comprehensive assessment” of the Minuteman III’s readiness to launch at a moment’s notice, according to the Air Force. “The data collected during the test is invaluable in ensuring the continued reliability and accuracy of the ICBM weapon system,” said Lt. Col. Karrie Wray, commander of the 576th Flight Test Squadron. For Minuteman III tests, the Air Force pulls its missiles from the fleet of some 400 operational ICBMs. This week’s test used one from F.E. Warren Air Force Base, Wyoming, and the missile was equipped with a single unarmed reentry vehicle that carried telemetry instrumentation instead of a warhead, service officials said. (submitted by EllPeaTea)

One crew launches, another may be stranded. Three astronauts launched to China’s Tiangong space station on October 31 and arrived at the outpost a few hours later, extending the station’s four-year streak of continuous crew operations. The Shenzhou 21 crew spacecraft lifted off on a Chinese Long March 2F rocket from the Jiuquan space center in the Gobi Desert. Shenzhou 21 is supposed to replace a three-man crew that has been on the Tiangong station since April, but China’s Manned Space Agency announced Tuesday the outgoing crew’s return craft may have been damaged by space junk, Ars reports.

Few details … Chinese officials said the Shenzhou 20 spacecraft will remain at the station while engineers investigate the potential damage. As of Thursday, China has not set a new landing date or declared whether the spacecraft is safe to return to Earth at all. “The Shenzhou 20 manned spacecraft is suspected of being impacted by small space debris,” Chinese officials wrote on social media. “Impact analysis and risk assessment are underway. To ensure the safety and health of the astronauts and the complete success of the mission, it has been decided that the Shenzhou 20 return mission, originally scheduled for November 5, will be postponed.” In the event Shenzhou 20 is unsafe to return, China could launch a rescue craft—Shenzhou 22—already on standby at the Jiuquan space center.

Falcon 9 rideshare boosts Vast ambitions. A pathfinder mission for Vast’s privately owned space station launched into orbit Sunday and promptly extended its solar panel, kicking off a shakedown cruise to prove the company’s designs can meet the demands of spaceflight, Ars reports. Vast’s Haven Demo mission lifted off just after midnight Sunday from Cape Canaveral Space Force Station, Florida, and rode a SpaceX Falcon 9 rocket into orbit. Haven Demo was one of 18 satellites sharing a ride on SpaceX’s Bandwagon 4 mission, launching alongside a South Korean spy satellite and a small testbed for Starcloud, a startup working with Nvidia to build an orbital data center.

Subscale testing … After release from the Falcon 9, the half-ton Haven Demo spacecraft stabilized itself and extended its power-generating solar array. The satellite captured 4K video of the solar array deployment, and Vast shared the beauty shot on social media. “Haven Demo’s mission success has turned us into a proven spacecraft company,” Vast’s CEO, Max Haot, posted on X. “The next step will be to become an actual commercial space station company next year. Something no one has achieved yet.” Vast plans to launch its first human-rated habitat, named Haven-1, into low-Earth orbit in 2026. Haven Demo lacks crew accommodations but carries several systems that are “architecturally similar” to Haven-1, according to Vast. For example, Haven-1 will have 12 solar arrays, each identical to the single array on Haven Demo. The pathfinder mission uses a subset of Haven-1’s propulsion system, but with identical thrusters, valves, and tanks.

Lights out at Vostochny. One of Russia’s most important projects over the last 15 years has been the construction of the Vostochny spaceport as the country seeks to fly its rockets from native soil and modernize its launch operations. Progress has been slow as corruption clouded Vostochny’s development. Now, the primary contractor building the spaceport, the Kazan Open Stock Company (PSO Kazan), has failed to pay its bills, Ars reports. The story, first reported by the Moscow Times, says that the energy company supplying Vostochny cut off electricity to areas of the spaceport still under construction after PSO Kazan racked up $627,000 in unpaid energy charges. The electricity company did so, it said, “to protect the interests of the region’s energy system.”

A dark reputation … Officials at the government-owned spaceport said PSO Kazan would repay its debt by the end of November, but the local energy company said it intends to file a lawsuit against PSO Kazan to declare the entity bankrupt. The two operational launch pads at Vostochny are apparently not affected by the power cuts. Vostochny has been a fiasco from the start. After construction began in 2011, the project was beset by hunger strikes, claims of unpaid workers, and the theft of $126 million. Additionally, a man driving a diamond-encrusted Mercedes was arrested after embezzling $75,000. Five years ago, there was another purge of top officials after another round of corruption.

Ariane 6 delivers for Europe again. European launch services provider Arianespace has successfully launched the Sentinel 1D Earth observation satellite aboard an Ariane 62 rocket for the European Commission, European Spaceflight reports. Launched in its two-booster configuration, the Ariane 6 rocket lifted off from the Guiana Space Center in South America on Tuesday. Approximately 34 minutes after liftoff, the satellite was deployed from the rocket’s upper stage into a Sun-synchronous orbit at an altitude of 693 kilometers (430 miles). Sentinel 1D is the newest spacecraft to join Europe’s Copernicus program, the world’s most expansive network of environmental monitoring satellites. The new satellite will extend Europe’s record of global around-the-clock radar imaging, revealing information about environmental disasters, polar ice cover, and the use of water resources.

Doubling cadence … This was the fourth flight of Europe’s new Ariane 6 rocket, and its third operational launch. Arianespace plans one more Ariane 62 launch to close out the year with a pair of Galileo navigation satellites. The company aims to double its Ariane 6 launch cadence in 2026, with between six and eight missions planned, according to David Cavaillès, Arianespace’s CEO. The European launch provider will open its 2026 manifest with the first flight of the more powerful four-booster variant of the rocket. If the company does manage eight Ariane 6 flights in 2026, it will already be close to reaching the stated maximum launch cadence of between nine and 10 flights per year.

India sets its own record for payload mass. The Indian Space Research Organization on Sunday successfully launched the Indian Navy’s advanced communication satellite GSAT-7R, or CMS-03, on an LVM3 rocket from the Satish Dhawan Space Center, The Hindu reports. The indigenously designed and developed satellite, weighing approximately 4.4 metric tons (9,700 pounds), is the heaviest satellite ever launched by an Indian rocket and marks a major milestone in strengthening the Navy’s space-based communications and maritime domain awareness.

Going heavy … The launch Sunday was India’s fourth of 2025, a decline from the country’s high-water mark of eight orbital launches in a year in 2023. The failure in May of India’s most-flown rocket, the PSLV, has contributed to this year’s slower launch cadence. India’s larger rockets, the GSLV and LVM3, have been more active while officials grounded the PSLV for an investigation into the launch failure. (submitted by EllPeaTea)

Blue Origin preps for second flight of New Glenn. The road to the second flight of Blue Origin’s heavy-lifting New Glenn rocket got a lot clearer this week. The company confirmed it is targeting Sunday, November 9, for the launch of New Glenn from Cape Canaveral Space Force Station, Florida. This follows a successful test-firing of the rocket’s seven BE-4 main engines last week, Ars reports. Blue Origin, the space company owned by billionaire Jeff Bezos, said the engines operated at full power for 22 seconds, generating nearly 3.9 million pounds of thrust on the launch pad.

Fully integrated … With the launch date approaching, engineers worked this week to attach the rocket’s payload shroud containing two NASA satellites set to embark on a journey to Mars. Now that the rocket is fully integrated, ground crews will roll it back to Blue Origin’s Launch Complex-36 (LC-36) for final countdown preps. The launch window on Sunday opens at 2:45 pm EST (19:45 UTC). Blue Origin is counting on recovering the New Glenn first stage on the next flight after missing the landing on the rocket’s inaugural mission in January. Officials plan to reuse this booster on the third New Glenn launch early next year, slated to propel Blue Origin’s first unpiloted Blue Moon lander toward the Moon.

Next three launches

Nov. 8: Falcon 9 | Starlink 10-51 | Kennedy Space Center, Florida | 08:30 UTC

Nov. 8: Long March 11H | Unknown Payload | Haiyang Spaceport, China Coastal Waters | 21:00 UTC

Nov. 9: New Glenn | ESCAPADE | Cape Canaveral Space Force Station, Florida | 19:45 UTC

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

New quantum hardware puts the mechanics in quantum mechanics


As a test case, the machine was used to explore a model of superconductivity.

Quantum computers based on ions or atoms have one major advantage: The qubits themselves aren’t manufactured, and there’s no device-to-device variability among atoms. Every atom is the same and should perform similarly every time. And since the qubits themselves can be moved around, it’s theoretically possible to entangle any atom or ion with any other in the system, allowing for a lot of flexibility in how algorithms and error correction are performed.

This combination of consistent, high-fidelity performance with all-to-all connectivity has led many key demonstrations of quantum computing to be done on trapped-ion hardware. Unfortunately, the hardware has been held back a bit by relatively low qubit counts—a few dozen compared to the hundred or more seen in other technologies. But on Wednesday, a company called Quantinuum announced a new version of its trapped-ion hardware that significantly boosts the qubit count and uses some interesting technology to manage their operation.

Trapped-ion computing

Both neutral atom and trapped-ion computers store their qubits in the spin of the nucleus. That spin is somewhat shielded from the environment by the cloud of electrons around the nucleus, giving these qubits a relatively long coherence time. While neutral atoms are held in place by a network of lasers, trapped ions are manipulated via electromagnetic control based on the ion’s charge. This means that key components of the hardware can be built using standard electronic manufacturing, although lasers are still needed for manipulations and readout.

While the electronics are static—they stay wherever they were manufactured—they can be used to move the ions around. That means that, as long as the trackways the ions move along allow it, any two ions can be brought into close proximity and entangled. This all-to-all connectivity can enable more efficient implementation of algorithms performed directly on the hardware qubits or the use of error-correction codes that require a complicated geometry of connections. That’s one reason why Microsoft used a Quantinuum machine to demonstrate an error-correction code based on a tesseract.

But arranging the trackways so that any two qubits can be next to each other can become increasingly complicated. Moving ions around is a relatively slow process, so retrieving two ions from the far ends of a chip too often can cause a system to start pushing up against the coherence time of the qubits. In the long term, Quantinuum plans to build chips with a square grid reminiscent of the street layout of many cities. But doing so will require a mastery of controlling the flow of ions through four-way intersections.

And that’s what Quantinuum is doing in part with its new chip, named Helios. It has a single intersection that couples two ion-storage areas, enabling operations as ions slosh from one end of the chip to the other. And it comes with significantly more qubits than its earlier hardware, moving from 56 to 96 qubits without sacrificing performance. “We’ve kept and actually even improved the two qubit gate fidelity,” Quantinuum VP Jenni Strabley told Ars. “So we’re not seeing any degradation in the two-qubit gate fidelity as we go to larger and larger sizes.”

Doing the loop

The image below is taken using the fluorescence of the atoms in the hardware itself. As you can see, the layout is dominated by two features: A loop at the left and two legs extending to the right. They’re connected by a four-way intersection. The Quantinuum staff described this intersection as being central to the computer’s operation.

The actual ions trace out the physical layout of the Helios system, featuring a storage ring and two legs that contain dedicated operation sites. Credit: Quantinuum

The system works by rotating the ions around the loop. As an ion reaches the intersection, the system chooses whether to kick it into one of the legs and, if so, which leg. “We spin that ring almost like a hard drive, really, and whenever the ion that we want to gate gets close to the junction, there’s a decision that happens: Either that ion goes [into the legs], or it kind of makes a little turn and goes back into the ring,” said David Hayes, Quantinuum’s director of Computational Design and Theory. “And you can make that decision with just a few electrodes that are right at that X there.”

Each leg has a region where operations can take place, so this system can ensure that the right qubits are present together in the operation zones for things like two-qubit gates. Once the operations are complete, the qubits can be moved into the leg storage regions, and new qubits can be shuffled in. When the legs fill up, the qubits can be sent back to the loop, and the process is restarted.

“You get less traffic jams if all the traffic is running one way going through the gate zones,” Hayes told Ars. “If you had to move them past each other, you would have to do kind of physical swaps, and you want to avoid that.”
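
To make that scheduling idea concrete, here is a minimal toy simulation, our own illustration rather than anything resembling Quantinuum’s control software: ions circulate on a ring, and at the junction each arriving ion is either diverted into a leg because an upcoming two-qubit gate needs it or sent back around for another lap.

```python
from collections import deque

# Toy model of the Helios-style loop-and-legs routing concept (illustrative
# only): ions circulate on a ring; at the junction, an ion is either diverted
# into a leg's operation zone or takes the turn back onto the ring.

def route_for_gates(ring_ions, gates, leg_capacity=2):
    """ring_ions: ion IDs stored on the ring.
    gates: list of (ion_a, ion_b) two-qubit gates to perform, in order.
    Returns the gates in the order their ions met in an operation zone."""
    ring = deque(ring_ions)
    executed = []
    for ion_a, ion_b in gates:
        leg = []                      # operation zone being loaded
        while len(leg) < leg_capacity:
            ion = ring.popleft()      # next ion arrives at the junction
            if ion in (ion_a, ion_b) and ion not in leg:
                leg.append(ion)       # divert it into the leg
            else:
                ring.append(ion)      # send it back around the ring
        executed.append(tuple(leg))   # the gate happens in the operation zone
        ring.extend(leg)              # return the ions to storage afterward
    return executed

print(route_for_gates(range(8), [(0, 5), (3, 7), (1, 2)]))
# [(0, 5), (7, 3), (1, 2)] -- each pair was brought together at the junction
```

The real machine keeps traffic flowing in one direction for exactly the reason Hayes describes: a one-way ring avoids physical swaps.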

Obviously, issuing all the commands to control the hardware will be quite challenging for anything but the simplest operations. That puts an increasing emphasis on the compilers that add a significant layer of abstraction between what you want a quantum computer to do and the actual hardware commands needed to implement it. Quantinuum has developed its own compiler to take user-generated code and produce something that the control system can convert into the sequence of commands needed.

The control system now incorporates a real-time engine that can read data from Helios and update the commands it issues based on the state of the qubits. Quantinuum has this portion of the system running on GPUs rather than requiring customized hardware.

Quantinuum’s SDK for users is called Guppy and is based on Python, which has been modified to allow users to describe what they’d like the system to do. Helios is being accompanied by a new version of Guppy that includes some traditional programming tools like FOR loops and IF-based conditionals. These will be critical for the sorts of things we want to do as we move toward error-corrected qubits. This includes testing for errors, fixing them if they’re present, or repeatedly attempting initialization until it succeeds without error.
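
Quantinuum hasn’t published Guppy code in this announcement, so the sketch below is plain Python pseudocode of the repeat-until-success pattern that loops and measurement-dependent conditionals make possible; `prepare_qubit` and its success probability are hypothetical stand-ins, not real Guppy or Quantinuum API calls.

```python
import random

# Hypothetical stand-in for a hardware operation; NOT a real Guppy/Quantinuum API.
def prepare_qubit():
    """Pretend to initialize a qubit; a flag measurement reports success 90% of the time."""
    return random.random() < 0.9

def repeat_until_success(max_attempts=10):
    """Retry initialization until a mid-circuit flag measurement reports success,
    the kind of control flow (FOR loops plus IF conditionals) described above."""
    for attempt in range(1, max_attempts + 1):
        if prepare_qubit():        # IF: branch on a measured flag
            return attempt         # success, so stop retrying
    raise RuntimeError("initialization kept failing; escalate to error handling")

print("succeeded on attempt", repeat_until_success())
```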

Hayes said the new version is also moving toward error correction. Thanks to Guppy’s ability to dynamically reassign qubits, Helios will be able to operate as a machine with 94 qubits while detecting errors on any of them. Alternatively, the 96 hardware qubits can be configured as a single unit that hosts 48 error-corrected qubits. “It’s actually a concatenated code,” Hayes told Ars. “You take two error detection codes and weave them together… it’s a single code block, but it has 48 logical qubits housed inside of it.” (Hayes said it’s a distance-four code, meaning it can fix up to two errors that occur simultaneously.)

Tackling superconductivity

While Quantinuum hardware has always had low error rates relative to most of its competitors, there was only so much you could do with 56 qubits. With 96 now at their disposal, researchers at the company decided to build a quantum implementation of a model (called the Fermi-Hubbard model) that’s meant to help study the electron pairing that takes place during the transition to superconductivity.

“There are definitely terms that the model doesn’t capture,” Quantinuum’s Henrik Dreyer acknowledged. “They neglect their electrorepulsion that [the electrons] still have—I mean, they’re still negatively charged; they are still repelling. There are definitely terms that the model doesn’t capture. On the other hand, I should say that this Fermi-Hubbard model—it has many of the features that a superconductor has.”
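
For reference, the textbook single-band Fermi-Hubbard Hamiltonian keeps only two competing ingredients, electrons hopping between neighboring lattice sites and an on-site interaction when two of them share a site (this is the standard form; the Quantinuum work may add terms or parameters not shown here):

$$H = -t \sum_{\langle i,j\rangle,\sigma}\left(c^{\dagger}_{i\sigma}c_{j\sigma} + \mathrm{h.c.}\right) + U\sum_{i} n_{i\uparrow}n_{i\downarrow}$$

Here, $t$ sets the tunneling amplitude between sites, and $U$ is the energy cost (or gain) when spin-up and spin-down electrons occupy the same site.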

Superconductivity occurs when electrons join to form what are called Cooper pairs, overcoming their normal repulsion. And the model can tell that apart from normal conductivity in the same material.

“You ask the question ‘What’s the chance that one of the charged particles spontaneously disappears because of quantum fluctuations and goes over here?’” Dreyer said, describing what happens when simulating a conductor. “What people do in superconductivity is they take this concept, but instead of asking what’s the chance of a single-charge particle to tunnel over there spontaneously, they’re asking what is the chance of a pair to tunnel spontaneously?”

Even in its simplified form, however, it’s still a model of a quantum system, with all the computational complexity that comes with that. So the Quantinuum team modeled a few systems that classical computers struggle with. One was simply looking at a larger grid of atoms than most classical simulations have done; another expanded the grid in an additional dimension, modeling layers of a material. Perhaps the most complicated simulation involved what happens when a laser pulse of the right wavelength hits a superconducting material held at room temperature, far above the temperature at which it normally superconducts, an event that briefly induces a superconducting state.

And the system produced results, even without error correction. “It’s maybe a technical point, but I think it’s very important technical point, which is [that] the circuits that we ran, they all had errors,” Dreyer told Ars. “Maybe on the average of three or so errors, and for some reason, that is not very fully understood for this application, it doesn’t matter. You still get almost the perfect result in some of these cases.”

That said, he also indicated that having higher-fidelity hardware would help the team do a better job of putting the system in a ground state or running the simulation for longer. But those will have to wait for future hardware.

What’s next

If you look at Quantinuum’s roadmap for that future hardware, Helios would appear to be the last of its kind. It and earlier versions of the processors have loops and large straight stretches; everything in the future features a grid of squares. But both Strabley and Hayes said that Helios has several key transitional features. “Those ions are moving through that junction many, many times over the course of a circuit,” Strabley told Ars. “And so it’s really enabled us to work on the reliability of the junction, and that will translate into the large-scale systems.”

Image of a product roadmap, with years from 2020 to 2029 noted across the top. There are five processors arrayed from left to right, each with increasingly complex geometry.

Helios sits at the pivot between the simple geometries of earlier Quantinuum processors and the grids of future designs. Credit: Quantinuum

The grid of squares seen in future processors will also support the same sorts of operations that Helios performs with its loop and legs. Some squares can serve as the equivalent of a loop in terms of storage and sorting, while some of the straight lines nearby can be used for operations.

“What will be common to both of them is kind of the general concept that you can have a storage and sorting region and then gating regions on the side and they’re separated from one another,” Hayes said. “It’s not public yet, but that’s the direction we’re heading: a storage region where you can do really fast sorting in these 2D grids, and then gating regions that have parallelizable logical operations.”

In the meantime, we’re likely to see improvements made to Helios—ideas that didn’t quite make today’s release. “There’s always one more improvement that people want to make, and I’m the person that says, ‘No, we’re going to go now. Put this on the market, and people are going to go use it,’” Strabley said. “So there is a long list of things that we’re going to add to improve the performance. So expect that over the course of Helios, the performance is going to get better and better and better.”

That performance is likely to be used for the sort of initial work done on superconductivity or the algorithm recently described by Google, which is at or a bit beyond what classical computers can manage and may start providing some useful insights. But it will still be a generation or two before we start seeing quantum computing fulfill some of its promise.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

If you want to satiate AI’s hunger for power, Google suggests going to space


Google engineers think they already have all the pieces needed to build a data center in orbit.

With Project Suncatcher, Google will test its Tensor Processing Units on satellites. Credit: Google

It was probably always when, not if, Google would add its name to the list of companies intrigued by the potential of orbiting data centers.

Google announced Tuesday a new initiative, named Project Suncatcher, to examine the feasibility of bringing artificial intelligence to space. The idea is to deploy swarms of satellites in low-Earth orbit, each carrying Google’s AI accelerator chips designed for training, content generation, synthetic speech and vision, and predictive modeling. Google calls these chips Tensor Processing Units, or TPUs.

“Project Suncatcher is a moonshot exploring a new frontier: equipping solar-powered satellite constellations with TPUs and free-space optical links to one day scale machine learning compute in space,” Google wrote in a blog post.

“Like any moonshot, it’s going to require us to solve a lot of complex engineering challenges,” Google’s CEO, Sundar Pichai, wrote on X. Pichai noted that Google’s early tests show the company’s TPUs can withstand the intense radiation they will encounter in space. “However, significant challenges still remain like thermal management and on-orbit system reliability.”

The why and how

Ars reported on Google’s announcement on Tuesday, and Google published a research paper outlining the motivation for such a moonshot project. One of the authors, Travis Beals, spoke with Ars about Project Suncatcher and offered his thoughts on why it just might work.

“We’re just seeing so much demand from people for AI,” said Beals, senior director of Paradigms of Intelligence, a research team within Google. “So, we wanted to figure out a solution for compute that could work no matter how large demand might grow.”

Higher demand will lead to bigger data centers consuming colossal amounts of electricity. According to the MIT Technology Review, AI alone could consume as much electricity annually as 22 percent of all US households by 2028. Cooling is also a problem, often requiring access to vast water resources, raising important questions about environmental sustainability.

Google is looking to the sky to avoid potential bottlenecks. A satellite in space can access an infinite supply of renewable energy and an entire Universe to absorb heat.

“If you think about a data center on Earth, it’s taking power in and it’s emitting heat out,” Beals said. “For us, it’s the satellite that’s doing the same. The satellite is going to have solar panels … They’re going to feed that power to the TPUs to do whatever compute we need them to do, and then the waste heat from the TPUs will be distributed out over a radiator that will then radiate that heat out into space.”

Google envisions putting a legion of satellites into a special kind of orbit that rides along the day-night terminator, where sunlight meets darkness. This north-south, or polar, orbit would be synchronized with the Sun, allowing a satellite’s power-generating solar panels to remain continuously bathed in sunshine.

“It’s much brighter even than the midday Sun on Earth because it’s not filtered by Earth’s atmosphere,” Beals said.

This means a solar panel in space can produce up to eight times more power than the same collecting area on the ground, and you don’t need a lot of batteries to reserve electricity for nighttime. This may sound like the argument for space-based solar power, an idea first described by Isaac Asimov in his short story Reason published in 1941. But instead of transmitting the electricity down to Earth for terrestrial use, orbiting data centers would tap into the power source in space.
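
The “eight times” figure is roughly what you get from back-of-the-envelope numbers; the estimate below is ours, with assumed round values rather than anything from Google’s paper.

```python
# Back-of-the-envelope check of the "up to eight times" claim.
# All inputs are assumed round numbers, not figures from Google's paper.
solar_constant = 1361          # W/m^2 above the atmosphere
ground_peak = 1000             # W/m^2 typical clear-sky peak at the surface
ground_capacity_factor = 0.20  # nights, weather, and Sun angle, averaged over a year
space_duty_cycle = 0.99        # a dawn-dusk Sun-synchronous orbit is almost never in shadow

space_avg = solar_constant * space_duty_cycle      # ~1,350 W/m^2, around the clock
ground_avg = ground_peak * ground_capacity_factor  # ~200 W/m^2, yearly average

print(f"space/ground power ratio: {space_avg / ground_avg:.1f}x")  # ~6.7x, the same order as "up to 8x"
```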

“As with many things, the ideas originate in science fiction, but it’s had a number of challenges, and one big one is, how do you get the power down to Earth?” Beals said. “So, instead of trying to figure out that, we’re embarking on this moonshot to bring [machine learning] compute chips into space, put them on satellites that have the solar panels and the radiators for cooling, and then integrate it all together so you don’t actually have to be powered on Earth.”

SpaceX is driving down launch costs, thanks to reusable rockets and an abundant volume of Starlink satellite launches. Credit: SpaceX

Google has a mixed record with its ambitious moonshot projects. One of the most prominent moonshot graduates is the self-driving car developer Waymo, which spun out as a separate company in 2016 and now operates commercial robotaxi services. The Project Loon initiative to beam Internet signals from high-altitude balloons is one of the Google moonshots that didn’t make it.

Ars published two stories last week on the promise of space-based data centers. One of the startups in this field, named Starcloud, is partnering with Nvidia, the world’s largest tech company by market capitalization, to build a 5 gigawatt orbital data center with enormous solar and cooling panels approximately 4 kilometers (2.5 miles) in width and length. In response to that story, Elon Musk said SpaceX is pursuing the same business opportunity but didn’t provide any details. It’s worth noting that Google holds an estimated 7 percent stake in SpaceX.

Strength in numbers

Google’s proposed architecture differs from that of Starcloud and Nvidia in an important way. Instead of putting up just one or a few massive computing nodes, Google wants to launch a fleet of smaller satellites that talk to one another through laser data links. Essentially, a satellite swarm would function as a single data center, using light-speed interconnectivity to aggregate computing power hundreds of miles over our heads.

If that sounds implausible, take a moment to think about what companies are already doing in space today. SpaceX routinely launches more than 100 Starlink satellites per week, each of which uses laser inter-satellite links to bounce Internet signals around the globe. Amazon’s Kuiper satellite broadband network uses similar technology, and laser communications will underpin the US Space Force’s next-generation data-relay constellation.

Artist’s illustration of laser crosslinks in space. Credit: TESAT

Autonomously constructing a miles-long structure in orbit, as Nvidia and Starcloud foresee, would unlock unimagined opportunities. The concept also relies on tech that has never been tested in space, but there are plenty of engineers and investors who want to try. Starcloud announced an agreement last week with a new in-space assembly company, Rendezvous Robotics, to explore the use of modular, autonomous assembly to build Starcloud’s data centers.

Google’s research paper describes a future computing constellation of 81 satellites flying at an altitude of some 400 miles (650 kilometers), but Beals said the company could dial the total swarm size to as many spacecraft as the market demands. This architecture could enable terawatt-class orbital data centers, according to Google.

“What we’re actually envisioning is, potentially, as you scale, you could have many clusters,” Beals said.

Whatever the number, the satellites will communicate with one another using optical inter-satellite links for high-speed, low-latency connectivity. The satellites will need to fly in tight formation, perhaps a few hundred feet apart, with a swarm diameter of a little more than a mile, or about 2 kilometers. Google says its physics-based model shows satellites can maintain stable formations at such close ranges using automation and “reasonable propulsion budgets.”

“If you’re doing something that requires a ton of tight coordination between many TPUs—training, in particular—you want links that have as low latency as possible and as high bandwidth as possible,” Beals said. “With latency, you run into the speed of light, so you need to get things close together there to reduce latency. But bandwidth is also helped by bringing things close together.”
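
To put the speed-of-light constraint in numbers, here is a quick calculation of our own using the distances quoted above: one-way light travel time across the roughly 2-kilometer swarm versus straight down from the constellation’s altitude.

```python
# One-way light-travel latency at the distances mentioned in the article.
C = 299_792_458            # speed of light, m/s

swarm_diameter_m = 2_000   # "a little more than a mile, or about 2 kilometers"
altitude_m = 650_000       # roughly 650 km (about 400 miles) of altitude

print(f"across the swarm:   {swarm_diameter_m / C * 1e6:.1f} microseconds")  # ~6.7 us
print(f"down to the ground: {altitude_m / C * 1e3:.2f} milliseconds")        # ~2.2 ms
```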

Some machine-learning applications could be done with the TPUs on just one modestly sized satellite, while others may require the processing power of multiple spacecraft linked together.

“You might be able to fit smaller jobs into a single satellite. This is an approach where, potentially, you can tackle a lot of inference workloads with a single satellite or a small number of them, but eventually, if you want to run larger jobs, you may need a larger cluster all networked together like this,” Beals said.

Google has worked on Project Suncatcher for more than a year, according to Beals. In ground testing, engineers tested Google’s TPUs under a 67 MeV proton beam to simulate the total ionizing dose of radiation the chip would see over five years in orbit. Now, it’s time to demonstrate that Google’s AI chips, and everything else needed for Project Suncatcher, will actually work in the real environment.

Google is partnering with Planet, the Earth-imaging company, to develop a pair of small prototype satellites for launch in early 2027. Planet builds its own satellites, so Google has tapped it to manufacture each spacecraft, test them, and arrange for their launch. Google’s parent company, Alphabet, also has an equity stake in Planet.

“We have the TPUs and the associated hardware, the compute payload… and we’re bringing that to Planet,” Beals said. “For this prototype mission, we’re really asking them to help us do everything to get that ready to operate in space.”

Beals declined to say how much the demo slated for launch in 2027 will cost but said Google is paying Planet for its role in the mission. The goal of the demo mission is to show whether space-based computing is a viable enterprise.

“Does it really hold up in space the way we think it will, the way we’ve tested on Earth?” Beals said.

Engineers will test an inter-satellite laser link and verify Google’s AI chips can weather the rigors of spaceflight.

“We’re envisioning scaling by building lots of satellites and connecting them together with ultra-high bandwidth inter-satellite links,” Beals said. “That’s why we want to launch a pair of satellites, because then we can test the link between the satellites.”

Evolution of a free-fall (no thrust) constellation under Earth’s gravitational attraction, modeled to the level of detail required to obtain Sun-synchronous orbits, in a non-rotating coordinate system. Credit: Google

Getting all this data to users on the ground is another challenge. Optical data links could also route enormous amounts of data between the satellites in orbit and ground stations on Earth.

Aside from the technical feasibility, there have long been economic hurdles to fielding large satellite constellations. But SpaceX’s experience with its Starlink broadband network, now with more than 8,000 active satellites, is proof that times have changed.

Google believes the economic equation is about to change again when SpaceX’s Starship rocket comes online. The company’s learning curve analysis shows launch prices could fall to less than $200 per kilogram by around 2035, assuming Starship is flying about 180 times per year by then. This is far below SpaceX’s stated launch targets for Starship but comparable to SpaceX’s proven flight rate with its workhorse Falcon 9 rocket.
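
Google’s learning-curve parameters aren’t spelled out here, so the sketch below is a generic Wright’s-law projection with assumed inputs, meant only to show the shape of the argument: every doubling of cumulative flights cuts the price per kilogram by a fixed fraction.

```python
from math import log2

# Generic Wright's-law (experience curve) sketch. All three inputs are
# assumptions for illustration, NOT Google's published parameters.
initial_price_per_kg = 2000.0  # assumed starting Starship price, $/kg
initial_flights = 10           # assumed cumulative flights at that price
learning_rate = 0.30           # assumed 30% price drop per doubling of flights

def projected_price(cumulative_flights):
    """Price per kg once the fleet has flown `cumulative_flights` missions."""
    doublings = log2(cumulative_flights / initial_flights)
    return initial_price_per_kg * (1 - learning_rate) ** doublings

# At ~180 flights per year, cumulative totals climb into the thousands quickly.
for flights in (10, 100, 500, 1000, 2000):
    print(f"{flights:5d} cumulative flights -> ${projected_price(flights):6.0f}/kg")
```

With these made-up inputs, the curve drops below $200 per kilogram somewhere between 500 and 1,000 cumulative flights; different assumptions move that crossover, which is why the flight-rate assumption matters so much to Google’s 2035 estimate.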

It’s possible there could be even more downward pressure on launch costs if SpaceX, Nvidia, and others join Google in the race for space-based computing. The demand curve for access to space may only be eclipsed by the world’s appetite for AI.

“The more people are doing interesting, exciting things in space, the more investment there is in launch, and in the long run, that could help drive down launch costs,” Beals said. “So, it’s actually great to see that investment in other parts of the space supply chain and value chain. There are a lot of different ways of doing this.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Space junk may have struck a Chinese crew ship in low-Earth orbit

Three Chinese astronauts were due to depart the Tiangong space station, reenter the atmosphere, and land in the remote desert of Inner Mongolia on Wednesday. Instead, officials ordered the crew to remain at the station while engineers investigate a potential problem with their landing craft.

The China Manned Space Agency, run by the country’s military, announced the change late Tuesday in a brief statement posted to Weibo, the Chinese social media platform.

“The Shenzhou 20 manned spacecraft is suspected of being impacted by small space debris,” the statement said. “Impact analysis and risk assessment are underway. To ensure the safety and health of the astronauts and the complete success of the mission, it has been decided that the Shenzhou 20 return mission, originally scheduled for November 5, will be postponed.”

What we know

The Shenzhou 20 astronauts arrived at the Tiangong station in April. Their replacements on the Shenzhou 21 mission docked with Tiangong on Friday, temporarily raising the station’s crew size to six people. After several days of joint operations, the six astronauts held a handover ceremony early Tuesday to formally transfer command of the outpost to the new crew.

Less than 24 hours later, Chinese officials decided to call off Shenzhou 20’s departure from Tiangong. The statement from the China Manned Space Agency did not say what part of the Shenzhou 20 spacecraft may have been damaged, what evidence led engineers to suspect space debris was the culprit, or how long Shenzhou 20’s departure might be postponed.

This view shows a Shenzhou spacecraft departing the Tiangong space station in 2023. Credit: China Manned Space Agency

The ship has three sections, with a landing capsule positioned between the crew living quarters and a power and propulsion module. The modules separate from one another before reentry, and the return craft heads for a parachute-assisted landing while the other elements burn up during atmospheric reentry.

Some stinkbugs’ legs carry a mobile fungal garden

Many insect species hear using tympanal organs, membranes roughly resembling our eardrums but located elsewhere on the body, often on the legs. Grasshoppers, mantises, and moths all have them, and for decades, we thought that female stinkbugs of the Dinidoridae family have them, too, although located a bit unusually on their hind rather than front legs.

Suspecting that they use their hind leg tympanal organs to listen to male courtship songs, a team of Japanese researchers took a closer look at the organs in Megymenum gracilicorne, a Dinidoridae stinkbug species native to Japan. They discovered that these “tympanal organs” were not what they seemed. They’re actually mobile fungal nurseries of a kind we’ve never seen before.

Portable gardens

Dinidoridae is a small stinkbug family that lives exclusively in Asia. The family has attracted some scientific attention, but not nearly as much as larger relatives like Pentatomidae. Prior work looking specifically into organs growing on the hind legs of Dinidoridae females was thus somewhat limited. “Most research relied on taxonomic and morphological approaches. Some taxonomists did describe that female Dinidoridae stinkbugs have an enlarged part on the hind legs that looks like the tympanal organ you can find, for example, in crickets,” said Takema Fukatsu, an evolutionary biologist at the National Institute of Advanced Industrial Science and Technology in Tokyo.

Based on that appearance, these parts were classified as tympanal organs—the case was closed, and it stayed closed until Fukatsu’s team started examining them more closely. Most insects have tympanal organs on their front legs, not hind legs, or on abdominal segments. The initial goal of Fukatsu’s study was to figure out what impact this unusual position has on Dinidoridae females’ ability to hear sounds.

Early on in the study, it turned out that whatever Dinidoridae females have on their hind legs, they are not tympanal organs. “We found no tympanal membrane and no sensory neurons, so the enlarged parts on the hind legs had nothing to do with hearing,” Fukatsu explained. Instead, the organ had thousands of small pores filled with benign filamentous fungi. The pores were connected to secretory cells that released substances that Fukatsu’s team hypothesized were nutrients enabling the fungi to grow.

Disruption to science will last longer than the US government shutdown

President Donald Trump alongside Office of Management and Budget Director Russell Vought. Credit: Brendan Smialowski/AFP via Getty Images

However, the full impact that the shutdown and the Trump administration’s broader assaults on science will have on US international competitiveness, economic security, and electoral politics could take years to materialize.

In parallel, the dramatic drop in international student enrollment, the financial squeeze facing research institutions, and research security measures to curb foreign interference spell an uncertain future for American higher education.

With neither the White House nor Congress showing signs of reaching a budget deal, Trump continues to test the limits of executive authority, reinterpreting the law—or simply ignoring it.

Earlier in October, Trump redirected unspent research funding to pay service members before they missed their Oct. 15 paycheck. Repurposing appropriated funds directly challenges the power vested in Congress—not the president—to control federal spending.

The White House’s promise to fire an additional 10,000 civil servants during the shutdown, its threat to withhold back pay from furloughed workers, and its push to end any programs with lapsed funding “not consistent with the President’s priorities” similarly move to broaden presidential power.

Here, the damage to science could snowball. If Trump and Vought chip enough authority away from Congress by making funding decisions or shuttering statutory agencies, the next three years will see an untold amount of impounded, rescinded, or repurposed research funds.

The government shutdown has emptied many laboratories staffed by federal scientists. Combined with other actions by the Trump administration, more scientists could continue to lose funding. Credit: Monty Rakusen/DigitalVision via Getty Images

Science, democracy, and global competition

While technology has long served as a core pillar of national and economic security, science has only recently reemerged as a key driver of greater geopolitical and cultural change.

China’s extraordinary rise in science over the past three decades and its arrival as the United States’ chief technological competitor has upended conventional wisdom that innovation can thrive only in liberal democracies.

The White House’s efforts to centralize federal grantmaking, restrict free speech, erase public data, and expand surveillance mirror China’s successful playbook for building scientific capacity while suppressing dissent.

As the shape of the Trump administration’s vision for American science has come into focus, what remains unclear is whether, after the shutdown, it can outcompete China by following its lead.

Kenneth M. Evans is a Fellow in Science, Technology, and Innovation Policy at the Baker Institute for Public Policy, Rice University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Research roundup: 6 cool science stories we almost missed


Also: the science of regular vs. gluten-free spaghetti, catching high-speed snake bites in action, etc.

Karnak Temple, Luxor, Egypt. Credit: Ben Pennington

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. October’s list includes the microstructural differences between regular and gluten-free spaghetti, capturing striking snakes in action, the mystery behind the formation of Martian gullies, and—for all you word game enthusiasts—an intriguing computational proof of the highest possible scoring Boggle board.

Highest-scoring Boggle board

boggle board showing highest scoring selection of letters

Credit: Dan Vanderkam

Sometimes we get handy story tips from readers about quirkily interesting research projects. Sometimes those projects involve classic games like Boggle, in which players find as many words as they can from a 4×4 grid of 16 lettered cubic dice, within a given time limit. Software engineer Dan Vanderkam alerted us to a preprint he posted to the arXiv, detailing his quest to find the Boggle board configuration that yields the highest possible score. It’s pictured above, with a total score of 3,625 points, according to Vanderkam’s first-ever computational proof. There are more than 1,000 possible words, with “replastering” being the longest.

Vanderkam has documented his quest and its resolution (including the code he used) extensively on his blog, admitting to the Financial Times that, “As far as I can tell, I’m the only person who is actually interested in this problem.” That’s not entirely true: a 1982 attempt found a board yielding 2,195 points. Vanderkam’s board was already known as a candidate for the highest possible score; it was just very difficult to prove using standard heuristic search methods. Vanderkam’s solution involved grouping board configurations with similar patterns into classes and then finding upper bounds to discard clear losers, rather than trying to tally scores for each board individually—i.e., an old school “branch and bound” technique.
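
As a rough illustration of the branch-and-bound idea, here is a heavily simplified toy, not Vanderkam’s actual code: a “board” is just a handful of letters, a word scores if its letters fit, and whole families of partially specified boards are discarded whenever an optimistic upper bound already falls short of the best complete board found so far.

```python
from collections import Counter

# Toy branch and bound in the spirit of the Boggle search (greatly simplified:
# no grid adjacency, no real dictionary; a word scores if its letters fit).
WORDS = ["cat", "act", "cart", "rat", "tar", "art", "car", "arc", "tart"]
ALPHABET = "acrt"
BOARD_SIZE = 4

def score(board):
    """Exact score of a complete board: how many words' letters fit on it."""
    pool = Counter(board)
    return sum(1 for w in WORDS if not (Counter(w) - pool))

def upper_bound(prefix):
    """Optimistic score: unassigned cells count as wildcards, so this can
    only overestimate the true score of any completion (an admissible bound)."""
    fixed = Counter(prefix)
    free = BOARD_SIZE - len(prefix)
    return sum(1 for w in WORDS if sum((Counter(w) - fixed).values()) <= free)

def branch_and_bound():
    best_score, best_board = -1, None
    stack = [""]                                # partially specified boards
    while stack:
        prefix = stack.pop()
        if upper_bound(prefix) <= best_score:
            continue                            # prune this whole family of boards
        if len(prefix) == BOARD_SIZE:
            best_score, best_board = score(prefix), prefix
            continue
        for letter in ALPHABET:                 # branch: fix the next cell
            stack.append(prefix + letter)
    return best_board, best_score

print(branch_and_bound())   # a best board (some ordering of a, c, r, t) and its score, 8
```

Vanderkam’s real search applies the same pruning logic, but against a full dictionary, actual Boggle adjacency rules, and equivalence classes of 4×4 boards.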

DOI: arXiv, 2025. 10.48550/arXiv.2507.02117  (About DOIs).

Origins of Egypt’s Karnak Temple

Core samples being extracted at Karnak Temple

Credit: Ben Pennington

Egypt’s Karnak Temple complex, located about 500 meters east of the Nile River near Luxor, has long been of interest to archaeologists and millions of annual tourists alike. But its actual age has been a matter of much debate. The most comprehensive geological survey conducted to date is yielding fresh insights into the temple’s origins and evolution over time, according to a paper published in the journal Antiquity.

The authors analyzed sediment cores and thousands of ceramic fragments from within and around the site to map out how the surrounding landscape has changed. They concluded that early on, circa 2520 BCE, the site would have experienced regular flooding from the Nile; thus, the earliest permanent settlement at Karnak would have emerged between 2591 and 2152 BCE, in keeping with the earliest dated ceramic fragments. This would have been after river channels essentially created an island of higher ground that served as the foundation for constructing the temple. As those channels diverged over millennia, the available area for the temple expanded, and so did the complex.

This might be supported by Egyptian creation myths. “It’s tempting to suggest the Theban elites chose Karnak’s location for the dwelling place of a new form of the creator god, ‘Ra-Amun,’ as it fitted the cosmogonical scene of high ground emerging from surrounding water,” said co-author Ben Pennington, a geoarchaeologist at the University of Southampton. “Later texts of the Middle Kingdom (c.1980–1760 BC) develop this idea, with the ‘primeval mound’ rising from the ‘Waters of Chaos.’ During this period, the abating of the annual flood would have echoed this scene, with the mound on which Karnak was built appearing to ‘rise’ and grow from the receding floodwaters.”

DOI: Antiquity, 2025. 10.15184/aqy.2025.10185  (About DOIs).

Gullies on Mars

Mars dune with gullies in the Russell crater. On their way down, the ice blocks threw up levees.

Credit: HiRISE/NASA/JPL/University of Arizona

Mars has many intriguing features, but one of the more puzzling is the sinuous gullies that form on some of its dunes. Scientists have proposed two hypotheses for how such gullies might form. The first is that they are the result of debris flow from an earlier time in the planet’s history when liquid water might have existed on the surface—evidence that the red planet might once have been habitable. The second is that the gullies form because of seasonal deposition and sublimation of CO2 ice on the surface in the present day. A paper published in the journal Geophysical Research Letters demonstrated strong evidence in favor of the latter hypothesis.

Building on her earlier research on how sublimation of CO₂ ice can drive debris flows on Mars, earth scientist Lonneke Roelofs of Utrecht University in the Netherlands collaborated with scientists at the Open University in Milton Keynes, UK, which boasts a facility for simulating conditions on Mars. She ran several experiments with different sediment types, creating dune slopes of different angles and dropping blocks of CO₂ ice from the top of the slope. At just the right angle, the blocks did indeed start digging into the sandy slope and moving downwards to create a gully. Roelofs likened the effect to a burrowing mole or the sandworms in Dune.

Per Roelofs, on Mars, CO₂ ice forms on the surface during the winter and starts to sublimate in the spring. The ice blocks are remnants found on the shaded side of dune tops, where they break off once the temperature gets high enough and slide down the slope. At the bottom, they keep sublimating until all the CO₂ is gone, leaving behind a hollow of sand.

DOI: Geophysical Research Letters, 2025. 10.1029/2024GL112860  (About DOIs).

Snake bites in action

Credit: S.G.C. Cleuren et al., 2025

Snakes can strike out and bite into prey in as little as 60 milliseconds, and until quite recently it just wasn’t technologically possible to capture those strikes in high definition. Researchers at Monash University in Australia decided to test 36 different species of snake in this way to learn more about their unique biting styles, detailing their results in a paper published in the Journal of Experimental Biology. And oh yes, there is awesome video footage.

Alistair Evans and Silke Cleuren traveled to Venomworld in Paris, France, where snake venom is harvested for medical and pharmaceutical applications.  For each snake species, they poked at said snake with a cylindrical piece of warm medical gel to mimic meaty muscle until the snake lunged and buried its fangs into the gel. Two cameras recorded the action at 1000 frames per second, capturing more than 100 individual strikes in great detail.

Among their findings: vipers moved the fastest when they struck, with the blunt-nosed viper accelerating at up to 710 m/s² and landing a bite within 22 milliseconds. All the vipers landed bites within 100 milliseconds of striking. By contrast, the rough-scaled death adder only managed accelerations of about 2.5 m/s². Vipers also sometimes pulled out and reinserted their fangs if they didn’t like the resulting angle; only then did they inject their venom. Elapids like the Cape coral cobra bit their prey repeatedly to inject their venom, while colubrids would tear gashes into their prey by sweeping their jaws from side to side, ensuring the maximum possible amount of venom was delivered.
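
For a sense of scale (our back-of-the-envelope arithmetic, not the paper’s): a fang accelerating at a constant 710 m/s² for 22 milliseconds covers a distance of d = ½at² = ½ × 710 m/s² × (0.022 s)² ≈ 0.17 m, roughly the length of a real strike, which is why these events are measured in milliseconds rather than anything shorter.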

DOI: Journal of Experimental Biology, 2025. 10.1242/jeb.250347  (About DOIs).

Spaghetti secrets

Spaghetti, like most pasta, is made of semolina flour, which is mixed with water to form a paste and then extruded to create a desired shape. The commercial products are then dried—an active area of research, since it’s easy for the strands to crack during the process. In fact, there have been a surprisingly large number of scientific papers seeking to understand the various properties of spaghetti, both cooking and eating it—the mechanics of slurping the pasta into one’s mouth, for instance, or spitting it out (aka, the “reverse spaghetti problem”); how to tell when it’s perfectly al dente; and how to get dry spaghetti strands to break neatly in two, rather than three or more scattered pieces.

Pasta also has a fairly low glycemic index, and is thus a good option for those with heart disease or type 2 diabetes. With the rise in the number of people with a gluten intolerance, gluten-free spaghetti has emerged as an alternative. The downside is that gluten-free pasta is harder to cook correctly and decidedly subpar in taste and texture (mouthfeel) compared to regular pasta. The reason for the latter lies in the microstructure, according to a paper published in the journal Food Hydrocolloids.

The authors used small-angle x-ray scattering and small-angle neutron scattering to analyze the microstructure of both regular and gluten-free pasta—i.e., the gluten matrix and its artificial counterpart—cooked al dente with varying salt concentrations in the water. They found that because of its gluten matrix, regular pasta has better resistance to structural degradation, and that adding just the right amount of salt further reinforces that matrix—so it’s not just a matter of salting to taste. This could lead to a better alternative matrix for gluten-free pasta that holds its structure better and has a taste and mouthfeel closer to that of regular pasta.

DOI: Food Hydrocolloids, 2025. 10.1016/j.foodhyd.2025.111855  (About DOIs).

Can machine learning identify ancient artists?

Dr Andrea Jalandoni studies finger flutings at a cave site in Australia

Credit: Andrea Jalandoni

Finger flutings are one of the oldest examples of prehistoric art, usually found on the walls of caves in southern Australia, New Guinea, and parts of Europe. They’re basically just marks made by human fingers drawn through the “moonmilk” (a soft mineral film) covering those walls. Very little is known about the people who left those flutings, and while some have tried to draw inferences based on biometric finger ratios or hand size measurements—notably whether given marks were made by men or women—such methods produce inconsistent results and are prone to human error and bias.

That’s why digital archaeologist Andrea Jalandoni of Griffith University decided to experiment with machine learning image recognition methods as a possible tool, detailing her findings in a paper published in the journal Scientific Reports. She recruited 96 adult volunteers to create their own finger flutings in two different settings: once in a virtual reality environment, and once in a clay substitute that mimicked the look and feel of real moonmilk. Her team took images of those flutings and then used them to train two common image recognition models.

The results were decidedly mixed. The virtual reality images performed the worst, yielding highly unreliable attempts at classifying whether flutings were made by men or women. The images made in actual clay produced better results, even reaching close to 84 percent accuracy in one model. But there were also signs the models were overfitting, i.e., memorizing quirks of the training data rather than learning patterns that generalize, so the approach needs more refinement before it is ready for actual deployment. As for why determining sex classifications matters, “This information has been used to decide who can access certain sites for cultural reasons,” Jalandoni explained.
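
For readers curious what that pipeline looks like in practice, here is a minimal sketch (ours, not the study’s code) of the general approach: fine-tune a pretrained image-recognition model on labeled fluting photos, then compare training accuracy against held-out validation accuracy, since a widening gap between the two is the overfitting signal described above. The specific model, directory layout, and hyperparameters here are illustrative assumptions.

# Minimal transfer-learning sketch: classify fluting images as "male" or "female".
# Assumes images are organized as flutings/train/<class>/*.jpg and flutings/val/<class>/*.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

train_ds = datasets.ImageFolder("flutings/train", transform=transform)
val_ds = datasets.ImageFolder("flutings/val", transform=transform)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=16)

# Start from a model pretrained on ImageNet and swap in a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def accuracy(loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

for epoch in range(10):
    model.train()
    for x, y in train_dl:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    # A large, growing gap between these two numbers is the telltale sign of overfitting.
    print(f"epoch {epoch}: train acc {accuracy(train_dl):.2f}, val acc {accuracy(val_dl):.2f}")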

DOI: Scientific Reports, 2025. 10.1038/s41598-025-18098-4  (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Research roundup: 6 cool science stories we almost missed Read More »

neural-network-finds-an-enzyme-that-can-break-down-polyurethane

Neural network finds an enzyme that can break down polyurethane

You’ll often hear plastic pollution referred to as a problem. But the reality is that it’s multiple problems. Depending on the properties we need, we form plastics out of different polymers, each of which is held together by a distinct type of chemical bond. So the method we use to break down one type of polymer may be incompatible with the chemistry of another.

That problem is why, even though we’ve had success finding enzymes that break down common plastics like polyesters and PET, they’re only partial solutions to plastic waste. However, researchers aren’t sitting back and basking in the triumph of partial solutions, and they’ve now got very sophisticated protein design tools to help them out.

That’s the story behind a completely new enzyme that researchers developed to break down polyurethane, the polymer commonly used to make foam cushioning, among other things. The new enzyme is compatible with an industrial-style recycling process that breaks the polymer down into its basic building blocks, which can be used to form fresh polyurethane.

Breaking down polyurethane

The basics of the chemical bonds that link polyurethanes. The rest of the polymer is represented by X’s here.

The new paper that describes the development of this enzyme lays out the scale of the problem: In 2024, we made 22 million metric tons of polyurethane. The urethane bond that defines these polymers involves a nitrogen bonded to a carbon that in turn is bonded to two oxygens, one of which links into the rest of the polymer. The rest of the polymer, linked by these bonds, can be fairly complex and often contains ringed structures related to benzene.
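
Written as a line formula, that urethane linkage is X-NH-C(=O)-O-X (a carbamate group), with the X’s standing in for the rest of the polymer on either side.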

Digesting polyurethanes is challenging. Individual polymer chains are often extensively cross-linked, and the bulky structures can make it difficult for enzymes to get at the bonds they can digest. A chemical called diethylene glycol can partially break these molecules down, but only at elevated temperatures. And it leaves behind a complicated mess of chemicals that can’t be fed back into any useful reactions. Instead, it’s typically incinerated as hazardous waste.

Neural network finds an enzyme that can break down polyurethane Read More »

wear-marks-suggest-neanderthals-made-ocher-crayons

Wear marks suggest Neanderthals made ocher crayons

“The combination of shaping, wear, and resharpening indicates they were used to draw or mark on soft surfaces,” D’Errico told Ars in an email. “Although the material is too fragile to reveal the specific material on which they were used, such as hide, human skin, or stone, an experimental approach may, in the future, allow us at least to rule out their use on some materials.”

A 73,000-year-old drawing from Blombos Cave in South Africa looks like it was made with tools much like the ocher crayons from Crimea, which means that Neanderthals and Homo sapiens both invented crayons in their own little corners of the world at around the same time.

The surface of this flat piece of orange ocher was carved over 47,000 years ago, then worn smooth, perhaps from being carried in a bag. Credit: D’Errico et al. 2025

Sometimes you’re the crayon, sometimes you’re the canvas

A third item from Zaskalnaya V is a flat piece of orange ocher. One side is covered with a thin layer of hard, dark rock. But more than 47,000 years ago, someone carefully cut several deep lines, regularly spaced and almost parallel, into its surface. The area of stone between the lines has been worn and polished smooth, suggesting that someone carried it and handled it for years.

“The polish smoothing the engraved lines suggest that the piece was curated, perhaps transported in a bag,” D’Errico told Ars. Whoever carved the lines into the piece of ocher also appears to have been right-handed, based on the angle of the incisions’ walls.

The finds join a host of other evidence of Neanderthal artwork and jewelry, from 57,000-year-old finger marks on a cave wall in France to 114,000-year-old ocher-painted shells in Spain.

“Traditionally viewed as lacking the cognitive flexibility and symbolic capacity of humans, the Neanderthals of Crimea demonstrate the opposite: They engaged in cultural practices that were not merely adaptive but deeply meaningful,” wrote D’Errico and his colleagues. “Their sophisticated use of ocher is one facet of their complex cultural life.”

The tip of this red ocher crayon was broken off. Credit: D’Errico et al. 2025

Coloring in some details of Neanderthal culture

It’s hard to say whether the rest of the ocher from the Zaskalnaya sites and other nearby rock shelters meant anything to the Neanderthals beyond the purely pragmatic. However, it’s unlikely that humans (of any stripe) could spend 70,000 years working with vividly colored pigment without developing a sense of aesthetics, assigning some meaning to the colors, or maybe doing both.

Wear marks suggest Neanderthals made ocher crayons Read More »

new-study-settles-40-year-debate:-nanotyrannus-is-a-new-species

New study settles 40-year debate: Nanotyrannus is a new species

For four decades, a frequently acrimonious debate has raged in paleontological circles about the correct taxonomy for a handful of rare fossil specimens. One faction insisted the fossils were juvenile Tyrannosaurus rex; the other argued that they represented a new species dubbed Nanotyrannus lancensis. Now, paleontologists believe they have settled the debate once and for all due to a new analysis of a well-preserved fossil.

The verdict: It is indeed a new species, according to a new paper published in the journal Nature. The authors also reclassified another specimen as a second new species, distinct from N. lancensis. In short, Nanotyrannus is a valid taxon and contains two species.

“This fossil doesn’t just settle the debate,” said Lindsay Zanno, a paleontologist at North Carolina State University and head of paleontology at North Carolina Museum of Natural Sciences. “It flips decades of T. rex research on its head.” That’s because paleontologists have relied on such fossils to model the growth and behavior of T. rex. The new findings suggest that there could have been multiple tyrannosaur species and that paleontologists have been underestimating the diversity of dinosaurs from this period.

Our story begins in 1942, when the fossilized skull of a Nanotyrannus, nicknamed Chomper, was excavated in Montana by a Cleveland Museum of Natural History expedition. Originally, paleontologists thought it belonged to a Gorgosaurus, but a 1965 paper challenged that identification and argued that the skull belonged to a juvenile T. rex. It wasn’t until 1988 that scientists proposed that the skull was actually that of a new species, Nanotyrannus. It’s been a constant back-and-forth ever since.

As recently as 2020, a highly influential paper claimed that Nanotyrannus was definitively a juvenile T. rex. Yet a substantial number of paleontologists still believed it should be classified as a distinct species. A January 2024 paper, for instance, came down firmly on the Nanotyrannus side of the debate. Co-authors Nicholas Longrich of the University of Bath and Evan Saitta of the University of Chicago measured the growth rings in Nanotyrannus bones and concluded the animals were nearly fully grown.

Dueling dinosaurs

Lindsay Zanno of North Carolina State University, who also heads paleontology at the North Carolina Museum of Natural Sciences, with the “dueling dinosaurs” fossil. Credit: N.C. State University/CC BY-NC-ND

Furthermore, there was no evidence of transitional fossils combining features of both Nanotyrannus and T. rex, which one would expect if the former were a juvenile version of the latter. Longrich and Saitta had also discovered a skull bone, archived in a San Francisco museum, that did belong to a juvenile T. rex, and they were able to do an anatomical comparison. They argued that Nanotyrannus had a lighter build, longer limbs, and larger arms than a T. rex and likely was smaller, faster, and more agile.

New study settles 40-year debate: Nanotyrannus is a new species Read More »

falling-panel-prices-lead-to-global-solar-boom,-except-for-the-us

Falling panel prices lead to global solar boom, except for the US


The economic case for solar power is stronger than ever.

White clouds drift over a combined wind-solar installation in Shandong province, China. Beijing’s support for a rapid rollout of solar and wind power forms a stark contrast with the growing antipathy of the Trump administration towards renewables. Credit: CFOTO/Future Publishing/Getty Images

To the south of the Monte Cristo mountain range and west of Paymaster Canyon, a vast stretch of the Nevada desert has attracted modern-day prospectors chasing one of 21st-century America’s greatest investment booms.

Solar power developers want to cover an area larger than Washington, DC, with silicon panels and batteries, converting sunlight into electricity that will power air conditioners in sweltering Las Vegas along with millions of other homes and businesses.

But earlier this month, bureaucrats in charge of federal lands scrapped collective approval for the Esmeralda 7 projects, in what campaigners fear is part of an attack on renewable energy under President Donald Trump. “We will not approve wind or farmer destroying [sic] Solar,” he posted on his Truth Social platform in August. Developers will need to reapply individually, slowing progress.

Thousands of miles away on the other side of the Pacific Ocean, it is a different story. China has laid solar panels across an area the size of Chicago high up on the Tibetan Plateau, where the thin air helps more sunlight get through.

The Talatan Solar Park is part of China’s push to double its solar and wind generation capacity over the coming decade. “Green and low-carbon transition is the trend of our time,” President Xi Jinping told delegates at a UN summit in New York last month.

China’s vast production of solar panels and batteries has also pushed down the prices of renewables hardware for everyone else, meaning it has “become very difficult to make any other choice in some places,” according to Heymi Bahar, senior analyst at the International Energy Agency.

In 2010, the IEA estimated that there would be 410 gigawatts (GW) of solar panels installed around the world by 2035. There is already more than four times that capacity, with about half of it in China.

Many countries in Africa and the Middle East, even in petrostates such as Saudi Arabia, are rapidly developing solar power. “It’s a very cheap way to harness the sun,” says Kingsmill Bond, an energy strategist at think-tank Ember.

chart showing global renewables growth

Credit: FT

Ember’s analysis suggests that, helped by rapid growth in solar and wind energy, renewables generated more electricity than coal-fired power plants during the first half of this year.

Progress in energy and other areas has damped some of the pessimism around global warming. In 2015, the UN predicted temperatures would rise by 4°C compared to pre-industrial levels by 2100. It now projects a rise of 2.6°C, if climate policies are followed through.

But for delegates set to gather in Belém, Brazil, next month for the COP30 climate summit, any jubilation will be tempered by the knowledge that the renewables revolution is a long way from being fulfilled. Emissions from the energy sector rose for the fourth straight year in 2024 to a record high, while the slower growth in US renewables means an ambitious target to triple global capacity by 2030 will probably be missed.

“It’s not job done, [IEA analysis] does throw some genuine caution out there,” says Mike Hemsley, deputy director at the Energy Transitions Commission think-tank.

Renewable energy has lowered wholesale power costs, but that has not necessarily fed through into the prices that consumers pay. And in many countries, users have not yet switched to electricity for things like transport and domestic heating in the numbers required to reduce fossil fuel usage.

Calculations by the Energy Institute, the sector’s global body, show that the supply of oil, gas, and coal for energy—electricity generation, heating, industrial usage, and transport—in 2024 rose by more than the supply of energy from low-carbon sources, which also includes nuclear and hydropower. That has led some to argue that renewables are merely helping to meet climbing energy demand, rather than replacing fossil fuels.

“The world remains in an energy addition mode, rather than a clear transition,” said Andy Brown, president of the institute, as it launched its report in August.

“Renewables is the place to be”

At a solar farm operated by ReNew, one of India’s biggest green energy companies, hundreds of panels glint in the sharp desert sun of surrounding Rajasthan.

India, the world’s third largest carbon emitter, wants to develop 500 gigawatts of clean-energy capacity by 2030, and earlier this year reached 243 GW—meaning more than half of its current installed power capacity is now from renewables.

“Every group in India is now saying: ‘You know what, renewables is the place to be,’” says Sumant Sinha, chair and chief executive of ReNew.

Saudi Arabia, blessed with both oil and sun, has developed around 4.34 GW of solar capacity as it tries to free up more oil for export, rather than burning it in its own power stations. It wants to build up to 130 GW by the end of the decade.

“It’s massive, what’s going on,” Marco Arcelli, chief executive of utility ACWA Power, which is part-owned by the kingdom’s sovereign wealth fund, told the FT earlier this year. The company is developing 30 GW of renewables in Saudi Arabia.

South Africa has authorized at least 6 GW of renewable energy capacity since President Cyril Ramaphosa removed the capacity limit on private electricity providers in 2022, breaking years of reluctance among the ruling African National Congress to challenge the dominance of state monopoly utility Eskom.

Workers at the Ener-G-Africa factory in Cape Town test LED lights on solar panels. South Africans are increasingly installing such panels because of the unreliability of normal power supplies.

Credit: Esa Alexander/Reuters

Middle-class households in the country have also rapidly installed solar panels on their roofs to cope with years of planned rolling blackouts due to power shortages. It is part of a worldwide trend for smaller installations as homes and businesses tire of waiting for governments or big utilities to fix power shortages.

Solar panel installations of less than 1 MW accounted for about 42 percent of global installations last year, according to BloombergNEF, almost double the 22 percent recorded in 2015. Factories, mosques, and farms in Pakistan have covered their roofs in Chinese-made solar panels to try to avoid surging tariffs for state-provided power.

“We’ve displaced tens of thousands of diesel generators,” says William Brent, chief marketing officer at Husk Power Systems, which has installed about 400 “mini-grids” of solar and batteries across Nigeria and India. These are helping pharmacies store medicines and shopkeepers keep drinks cool at around half the cost of power from the grid.

The construction of vast solar arrays in deserts and small installations on rooftops have largely been driven by the same underlying trend: falling costs. The huge surfeit of production capacity in China, which produced about eight out of 10 of the world’s solar modules in 2024, has pushed the cost of panels down by almost 90 percent over the past decade and dragged overall capital expenditure costs down 70 percent, according to analysts.

Yet even in places like India, fossil fuels still hold sway. Coal still generates more than 70 percent of the country’s power output and remains politically protected, employing hundreds of thousands directly and many more indirectly in some of India’s poorest regions. “India still has a massive way to go,” says Hemsley at the ETC.

PM Prasad, chair of state-owned Coal India, told the FT earlier this year that it was reopening more than 30 mines and launching up to five new sites, arguing that renewables were not yet capable of meeting fast-growing energy demand.

The painful process of acquiring large tracts of land for solar arrays in a country with millions of smallholder farmers has also led to delays across the renewables sector, many Indian developers grumble. More than 50 GW of renewable power projects are waiting to connect to an overstretched transmission network, estimates the Institute for Energy Economics and Financial Analysis, a think-tank, and cleantech consultancy JMK Research.

Chart showing relative amount of small solar installations

Credit: FT

Even as solar panels become more popular in Sub-Saharan Africa, millions of homes and businesses still rely on expensive and polluting diesel generators, and roughly 600 million people lack access to power.

Many people also lack the means to pay commercial rates for electricity, even before factoring in the extra levies needed to finance the cost of new transmission lines, a key enabler of renewables projects around the world.

Electricity storage capabilities also need to dramatically improve if countries want to rely more heavily on intermittent wind and solar farms and phase out backup fossil-fuel capacity.

Large-scale batteries are being deployed rapidly—spurred again by China’s prolific manufacturing output. James Mittell, director at developer Actis Energy, says costs have fallen so much that it is already possible in many markets to build large-scale battery and solar systems, which can deliver power with similar consistency to gas-fired power plants, but at lower cost. “It’s a complete game-changer,” he says.

But progress is also mixed on the second phase of any “transition” to renewable power: persuading consumers and industries to switch to equipment that runs on electricity rather than combustion processes using fossil fuels.

The share of electricity in final energy demand has flatlined in the US and the EU over the past few years, with the growth of electric cars offset by the difficulty of getting people to switch away from gas or oil heating systems to low-carbon electric ones such as heat pumps.

“For electricity [generation] we have a success story,” says Bahar, at the IEA. “For other sectors, it’s way more complicated.”

Massive growth in China

China and some parts of Southeast Asia stand out in terms of the portion of energy supplied by electricity increasing—in China’s case, from about 12 percent in 2000 to about 30 percent in 2023—as millions of citizens start driving electric cars and factories switch away from fossil-fueled boilers.

Ember points to data showing that renewables met 84 percent of China’s new electricity demand last year as evidence that coal-powered generation in the country is nearing its peak. “We’re confident renewables can meet all China’s [power] demand growth,” adds Hemsley at the ETC.

But even here, challenges loom. Major electricity market reforms introduced by Beijing in July mean renewable energy developers no longer get a fixed price akin to that received by coal-fired generators and are instead more exposed to market forces.

“They clearly don’t want to harm the build out of renewables, but they just want it to be done on a more commercial basis,” says Neil Beveridge, who leads Bernstein’s energy analysis in Hong Kong.

But the IEA warns it will lower returns and cut the growth of renewables. “That [impact of the reform] is the biggest uncertainty in our outlook,” adds Bahar at the IEA.

A far sharper slowdown is already underway in the US, where incentives introduced as part of former President Joe Biden’s Inflation Reduction Act in 2022 are being rolled back by the second Trump administration. Tax credits have been cut and major projects blocked—spooking investors and leaving existing developers trying to stay afloat.

Workers carry solar panels for a project in Lingwu, China. The country accounts for half the world’s installed solar capacity, but its fossil fuel usage also continues to grow.

Credit: Sara Hussein/AFP/Getty Images

“It’s very difficult to make big capital decisions based on this,” says Reagan Farr, chief executive of Silicon Ranch, a solar developer. “We don’t have a bipartisan energy policy in the US, which is very bad for the industry and our economy.”

Ørsted, the world’s largest offshore wind company, has had to raise an extra $9 billion from investors after Trump’s hostility to the offshore wind sector prevented it from selling a stake in one of its major US projects.

His tariffs on products from China mean higher costs for solar projects. Analysts say more large-scale solar projects are likely to have their permits revoked or reviewed.

Developers are currently rushing to build, as they have until July 2026 to start construction to capture the tail-end of the tax credits. But some projects and companies are bound to fail. “We’re likely facing several more years of uphill battles for many large-scale projects,” says Abby Watson, president at Groundwire Group, a consultancy.

The IEA has halved its forecast for renewables growth by 2030 in the US to around 250 GW as a result of Trump’s policies. Analysts at Carbon Brief estimate the country will emit 7 billion tonnes more CO₂ equivalent by 2030 under Trump’s policies than if the country had met its obligations under the 2015 Paris agreement, which he is withdrawing from.

The reduction in renewables growth comes as the country’s electricity demand is rising due to the growth of data centers, many of which are looking to gas-fired or nuclear power stations because they need constant, steady power.

Gas turbine makers are struggling to keep up with demand, while new nuclear power plants are often delayed.

chart showing continued growth of fossil fuels

Credit: FT

Retail electricity prices have already risen by 5 percent since July, according to the Energy Information Administration, and some experts caution they could rise further if supplies are constrained. “The writing is on the wall,” says Pol Lezcano, director of energy and renewables at the CBRE real estate group.

Supporters of renewable electricity argue that the US is missing out on a revolution in cleaner, cheaper technology sweeping the world, with some likening it to the aging cars on Cuba’s roads.

But the relationship between renewable generation and consumer energy bills is complicated. The free energy from the sun or the wind means that the wholesale price of renewable-generated power is lower, but developers still need to make a return on their investment, and grid operators may need to step in to ensure continuity of supply when the wind and the sun are low.

“Even as the cost of producing electricity from renewables falls, consumers may not see immediate or proportional reductions in their bills, raising questions over the impact of renewables on power affordability,” the IEA said in its latest report.

More broadly, the US’s focus on fossil fuels and pullback of support for clean energy further cedes influence over the future global energy system to China.

The US is trying to tie its trading partners into fossil fuels: pressing the EU to buy $750 billion of American oil, natural gas, and nuclear technologies during Trump’s presidency as part of a trade deal, scuppering an initiative to begin decarbonizing world shipping, and pressuring others to reduce their reliance on Chinese technology.

But the collapsing cost of solar panels in particular has spoken for itself in many parts of the world. Experts caution that the US’s attacks on renewables could cause lasting damage to its competitiveness against China, even if an administration more favorable to renewables were to follow Trump’s.

“China has run far away in terms of competitiveness,” says Antonio Cammisecra, chief executive of ContourGlobal, an independent power producer.

“The US is capable of rebuilding, but it will take time.”

Additional reporting by Ahmed Al Omran and David Pilling. Data visualization by Jana Tauschinski.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Falling panel prices lead to global solar boom, except for the US Read More »