Science

SpaceX’s unmatched streak of perfection with the Falcon 9 rocket is over

Numerous pieces of ice fell off the second stage of the Falcon 9 rocket during its climb into orbit from Vandenberg Space Force Base, California.

SpaceX

A SpaceX Falcon 9 rocket suffered an upper stage engine failure and deployed a batch of Starlink Internet satellites into a perilously low orbit after launch from California Thursday night, the first blemish on the workhorse launcher’s record in more than 300 missions since 2016.

Elon Musk, SpaceX’s founder and CEO, posted on X that the rocket’s upper stage engine failed when it attempted to reignite nearly an hour after the Falcon 9 lifted off from Vandenberg Space Force Base, California, at 7:35 pm PDT (02:35 UTC).

Frosty evidence

After departing Vandenberg to begin SpaceX’s Starlink 9-3 mission, the rocket’s reusable first stage booster propelled the Starlink satellites into the upper atmosphere, then returned to Earth for an on-target landing on a recovery ship parked in the Pacific Ocean. A single Merlin Vacuum engine on the rocket’s second stage fired for about six minutes to reach a preliminary orbit.

A few minutes after liftoff of SpaceX’s Starlink 9-3 mission, veteran observers of SpaceX launches noticed an unusual build-up of ice around the top of the Merlin Vacuum engine, which consumes a propellant mixture of super-chilled kerosene and cryogenic liquid oxygen. The liquid oxygen is stored at a temperature of several hundred degrees below zero.

Numerous chunks of ice fell away from the rocket as the upper stage engine powered into orbit, but the Merlin Vacuum, or M-Vac, engine appeared to complete its first burn as planned. A leak in the oxidizer system or a problem with insulation could lead to ice accumulation, although the exact cause, and its possible link to the engine malfunction later in flight, will be the focus of SpaceX’s investigation into the failure.

A second burn with the upper stage engine was supposed to raise the perigee, or low point, of the rocket’s orbit well above the atmosphere before releasing 20 Starlink satellites to continue climbing to their operational altitude with their own propulsion.

“Upper stage restart to raise perigee resulted in an engine RUD for reasons currently unknown,” Musk wrote in an update two hours after the launch. RUD (rapid unscheduled disassembly) is a term of art in rocketry that usually signifies a catastrophic or explosive failure.

“Team is reviewing data tonight to understand root cause,” Musk continued. “Starlink satellites were deployed, but the perigee may be too low for them to raise orbit. Will know more in a few hours.”

Telemetry from the Falcon 9 rocket indicated it released the Starlink satellites into an orbit with a perigee just 86 miles (138 kilometers) above Earth, roughly 100 miles (150 kilometers) lower than expected, according to Jonathan McDowell, an astrophysicist and trusted tracker of spaceflight activity. Detailed orbital data from the US Space Force was not immediately available.

Ripple effects

While ground controllers scrambled to salvage the 20 Starlink satellites, SpaceX engineers began probing what went wrong with the second stage’s M-Vac engine. For SpaceX and its customers, the investigation into the rocket malfunction is likely the more pressing matter.

SpaceX could absorb the loss of 20 Starlink satellites relatively easily. The company’s satellite assembly line can produce 20 Starlink spacecraft in a few days. But the Falcon 9 rocket’s dependability and high flight rate have made it a workhorse for NASA, the US military, and the wider space industry. An investigation will probably delay several upcoming SpaceX flights.

The first in-flight failure for SpaceX’s Falcon rocket family since June 2015, a streak of 344 consecutive successful launches until tonight.

A lot of unusual ice was observed on the Falcon 9’s upper stage during its first burn tonight, some of it falling into the engine plume. https://t.co/1vc3P9EZjj pic.twitter.com/fHO73MYLms

— Stephen Clark (@StephenClark1) July 12, 2024

Depending on the cause of the problem and what SpaceX must do to fix it, it’s possible the company can recover from the upper stage failure and resume launching Starlink satellites soon. Most of SpaceX’s launches aren’t for external customers, but deploy satellites for the company’s own Starlink network. This gives SpaceX a unique flexibility to quickly return to flight with the Falcon 9 without needing to satisfy customer concerns.

The Federal Aviation Administration, which licenses all commercial space launches in the United States, will require SpaceX to conduct a mishap investigation before resuming Falcon 9 flights.

“The FAA will be involved in every step of the investigation process and must approve SpaceX’s final report, including any corrective actions,” an FAA spokesperson said. “A return to flight is based on the FAA determining that any system, process, or procedure related to the mishap does not affect public safety.”

Two crew missions are supposed to launch on SpaceX’s human-rated Falcon 9 rocket in the next six weeks, but those launch dates are now in doubt.

The all-private Polaris Dawn mission, commanded by billionaire Jared Isaacman, is scheduled to launch on a Falcon 9 rocket on July 31 from NASA’s Kennedy Space Center in Florida. Isaacman and three commercial astronaut crewmates will spend five days in orbit on a mission that will include the first commercial spacewalk outside their Crew Dragon capsule, using new pressure suits designed and built by SpaceX.

NASA’s next crew mission with SpaceX is slated to launch from Florida aboard a Falcon 9 rocket around August 19. This team of four astronauts will replace a crew of four who have been on the International Space Station since March.

Some customers, especially NASA’s commercial crew program, will likely want to see the results of an in-depth inquiry and require SpaceX to string together a series of successful Falcon 9 flights with Starlink satellites before clearing their own missions for launch. SpaceX has already launched 70 flights with its Falcon family of rockets since January 1, an average cadence of one launch every 2.7 days, more than the combined number of orbital launches by all other nations this year.
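
As a quick sanity check on that cadence claim, here is a back-of-the-envelope sketch (not the article’s own calculation; the July 12, 2024 end date is taken from the launch described above):

```python
from datetime import date

# Figures reported above: 70 Falcon-family launches between January 1 and
# the July 12, 2024 failure (the end date is an assumption drawn from the
# launch timeline in this article).
flights = 70
elapsed_days = (date(2024, 7, 12) - date(2024, 1, 1)).days  # 193 days

print(f"{elapsed_days} days / {flights} flights = "
      f"{elapsed_days / flights:.1f} days per launch")
# ~2.8 days per launch, in line with the roughly-2.7-day cadence cited above.
```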

With this rapid-fire launch cadence, SpaceX could quickly demonstrate the fitness of any fixes engineers recommend to resolve the problem that caused Thursday night’s failure. But investigations into rocket failures often take weeks or months. It was too soon, early on Friday, to know the true impact of the upper stage malfunction on SpaceX’s launch schedule.

Scientists built real-life “stillsuit” to recycle astronaut urine on space walks

The Fremen on Arrakis wear full-body “stillsuits” that recycle absorbed sweat and urine into potable water.

Warner Bros.

The Fremen who inhabit the harsh desert world of Arrakis in Frank Herbert’s Dune must rely on full-body “stillsuits” for their survival, which recycle absorbed sweat and urine into potable water. Now science fiction is on the verge of becoming science fact: Researchers from Cornell University have designed a prototype stillsuit for astronauts that will recycle their urine into potable water during spacewalks, according to a new paper published in the journal Frontiers in Space Technologies.

Herbert provided specific details about the stillsuit’s design when planetologist Liet Kynes explained the technology to Duke Leto Atreides I:

It’s basically a micro-sandwich—a high-efficiency filter and heat-exchange system. The skin-contact layer’s porous. Perspiration passes through it, having cooled the body … near-normal evaporation process. The next two layers … include heat exchange filaments and salt precipitators. Salt’s reclaimed. Motions of the body, especially breathing and some osmotic action provide the pumping force. Reclaimed water circulates to catchpockets from which you draw it through this tube in the clip at your neck… Urine and feces are processed in the thigh pads. In the open desert, you wear this filter across your face, this tube in the nostrils with these plugs to ensure a tight fit. Breathe in through the mouth filter, out through the nose tube. With a Fremen suit in good working order, you won’t lose more than a thimbleful of moisture a day…

The Illustrated Dune Encyclopedia interpreted the stillsuit as something akin to a hazmat suit, without the full face covering. In David Lynch’s 1984 film, Dune, the stillsuits were organic and very form-fitting compared to the book description, almost like a second skin. The stillsuits in Denis Villeneuve’s most recent film adaptations (Dune Part 1 and Part 2) tried to hew more closely to the source material, with “micro-sandwiches” of acrylic fibers and porous cottons and embedded tubes for better flexibility.


In David Lynch’s 1984 film, Dune, the stillsuits were organic and very form-fitting.

Universal Pictures

The Cornell team is not the first to try to build a practical stillsuit. Hacksmith Industries did a “one day build” of a stillsuit just last month, having previously tackled Thor’s Stormbreaker ax, Captain America’s electromagnetic shield, and a plasma-powered lightsaber, among other projects. The Hacksmith team dispensed with the icky urine and feces recycling aspects and focused on recycling sweat and moisture from breath.

Their version consists of a waterproof baggy suit (switched out for a more form-fitting bunny suit in the final version) with a battery-powered heat exchanger in the back. Any humidity condenses on the suit’s surface and drips into a bottle attached to a CamelBak bladder. There’s a filter mask attached to a tube that allows the wearer to breathe in filtered air, but it’s one way; the exhaled air is redirected to the condenser so the water content can be harvested into the CamelBak bladder and then sent back to the mask so the user can drink it. It’s not even close to achieving Herbert’s stated thimbleful a day in terms of efficiency since it mostly recycles moisture from sweat on the wearer’s back. But it worked.

NASA’s flagship mission to Europa has a problem: Vulnerability to radiation

Tripping transistors —

“What keeps me awake right now is the uncertainty.”

An artist’s illustration of the Europa Clipper spacecraft during a flyby close to Jupiter’s icy moon.

The launch date for the Europa Clipper mission to study the intriguing moon orbiting Jupiter is now in doubt. The mission ranks alongside the Cassini spacecraft sent to Saturn as NASA’s most expensive and ambitious planetary science effort.

The $4.25 billion spacecraft had been due to launch in October on a Falcon Heavy rocket from Kennedy Space Center in Florida. However, NASA revealed that transistors on board the spacecraft may not be as radiation-hardened as they were believed to be.

“The issue with the transistors came to light in May when the mission team was advised that similar parts were failing at lower radiation doses than expected,” the space agency wrote in a blog post Thursday afternoon. “In June 2024, an industry alert was sent out to notify users of this issue. The manufacturer is working with the mission team to support ongoing radiation test and analysis efforts in order to better understand the risk of using these parts on the Europa Clipper spacecraft.”

The moons orbiting Jupiter, a massive gas giant planet, exist in one of the harshest radiation environments in the Solar System. NASA’s initial testing indicates that some of the transistors, which regulate the flow of energy through the spacecraft, could fail in this environment. NASA is currently evaluating the possibility of maximizing the transistor lifetime at Jupiter and expects to complete a preliminary analysis in late July.

To delay or not to delay

NASA’s update is silent on whether the spacecraft could still make its approximately three-week launch window this year, which gets Clipper to the Jovian system in 2030.

Ars reached out to several experts familiar with the Clipper mission to gauge the likelihood that it would make the October launch window, and opinions were mixed. The consensus view was a 40 to 60 percent chance that NASA becomes comfortable enough with the issue to launch this fall. If NASA engineers cannot become confident with the existing setup, the transistors would need to be replaced.

The Clipper mission has backup launch opportunities in 2025 and 2026, but those could lead to additional delays in reaching Jupiter because of the extra gravitational assists they would require. The 2024 launch follows a “MEGA” (Mars-Earth Gravitational Assist) trajectory, with a Mars flyby in 2025 and an Earth flyby in late 2026. Launching a year late would necessitate a second Earth flyby, while a launch in 2026 would revert to the original MEGA trajectory. Ars has asked NASA for timelines of launches in 2025 and 2026 and will update if they provide this information.

Another negative result of delays would be costs, as keeping the mission on the ground for another year likely would result in another few hundred million dollars in expenses for NASA, which would blow a hole in its planetary science budget.

NASA’s blog post this week is not the first time the space agency has publicly mentioned these issues with the metal-oxide-semiconductor field-effect transistor, or MOSFET. At a meeting of the Space Studies Board in early June, Jordan Evans, project manager for the Europa Clipper Mission, said it was his No. 1 concern ahead of launch.

“What keeps me awake at night”

“The most challenging thing we’re dealing with right now is an issue associated with these transistors, MOSFETs, that are used as switches in the spacecraft,” he said. “Five weeks ago today, I got an email that a non-NASA customer had done some testing on these rad-hard parts and found that they were going before (the specifications), at radiation levels significantly lower than what we qualified them to as we did our parts procurement, and others in the industry had as well.”

At the time, Evans said things were “trending in the right direction” with regard to the agency’s analysis of the issue. It seems unlikely that NASA would have put out a blog post five weeks later if the issue were still moving steadily toward a resolution.

“What keeps me awake right now is the uncertainty associated with the MOSFETs and the residual risk that we will take on with that,” Evans said in June. “It’s difficult to do the kind of low-dose rate testing in the timeframes that we have until launch. So we’re gathering as much data as we can, including from missions like Juno, to better understand what residual risk we might launch with.”

These are precisely the kinds of issues that scientists and engineers don’t want to find in the final months before the launch of such a consequential mission. The stakes are incredibly high—imagine making the call to launch Clipper only to have the spacecraft fail six years later upon arrival at Jupiter.

Much of Neanderthal genetic diversity came from modern humans

A large, brown-colored skull seen in profile against a black background.

The basic outline of the interactions between modern humans and Neanderthals is now well established. The two came in contact as modern humans began their major expansion out of Africa, which occurred roughly 60,000 years ago. Humans picked up some Neanderthal DNA through interbreeding, while the Neanderthal population, always fairly small, was swept away by the waves of new arrivals.

But there are some aspects of this big-picture view that don’t entirely line up with the data. While it nicely explains the fact that Neanderthal sequences are far more common in non-African populations, it doesn’t account for the fact that every African population we’ve looked at has some DNA that matches up with Neanderthal DNA.

A study published on Thursday argues that much of this match came about because an early modern human population also left Africa and interbred with Neanderthals. But in this case, the result was to introduce modern human DNA to the Neanderthal population. The study shows that this DNA accounts for a lot of Neanderthals’ genetic diversity, suggesting that their population was even smaller than earlier estimates had suggested.

Out of Africa early

This study isn’t the first to suggest that modern humans and their genes met Neanderthals well in advance of our major out-of-Africa expansion. The key to understanding this is the genome of a Neanderthal from the Altai region of Siberia, which dates from roughly 120,000 years ago. That’s well before modern humans expanded out of Africa, yet its genome has some regions that have excellent matches to the human genome but are absent from the Denisovan lineage.

One explanation for this is that these are segments of Neanderthal DNA that were later picked up by the population that expanded out of Africa. The problem with that view is that most of these sequences also show up in African populations. So, researchers advanced the idea that an ancestral population of modern humans left Africa about 200,000 years ago, and some of its DNA was retained by Siberian Neanderthals. That’s consistent with some fossil finds that place anatomically modern humans in the Mideast at roughly the same time.

There is, however, an alternative explanation: Some of the population that expanded out of Africa 60,000 years ago and picked up Neanderthal DNA migrated back to Africa, taking the Neanderthal DNA with them. That has led to a small bit of the Neanderthal DNA persisting within African populations.

To sort this all out, a research team based at Princeton University focused on the Neanderthal DNA found in Africans, taking advantage of the fact that we now have a much larger array of completed human genomes (approximately 2,000 of them).

The work was based on a simple hypothesis. All of our work on Neanderthal DNA indicates that their population was relatively small, and thus had less genetic diversity than modern humans did. If that’s the case, then the addition of modern human DNA to the Neanderthal population should have boosted its genetic diversity. If so, then the stretches of “Neanderthal” DNA found in African populations should include some of the more diverse regions of the Neanderthal genome.
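
A minimal sketch of how that comparison could look in code, using entirely made-up window labels and diversity values rather than anything from the study:

```python
import statistics

# Hypothetical illustration of the test described above, not the study's data:
# if modern-human DNA flowed into the Neanderthal population, the Neanderthal
# genome windows that also match "Neanderthal-like" segments in African genomes
# should be unusually diverse for Neanderthals.
neanderthal_diversity = {   # window -> estimated nucleotide diversity (made up)
    "w1": 0.00020, "w2": 0.00005, "w3": 0.00018, "w4": 0.00004, "w5": 0.00022,
}
shared_with_africans = {"w1", "w3", "w5"}   # windows also seen in African genomes

shared = [pi for w, pi in neanderthal_diversity.items() if w in shared_with_africans]
private = [pi for w, pi in neanderthal_diversity.items() if w not in shared_with_africans]

print("mean diversity, shared windows: ", statistics.mean(shared))
print("mean diversity, private windows:", statistics.mean(private))
# A markedly higher mean in the shared windows would point to human-to-Neanderthal
# gene flow, as the hypothesis above predicts.
```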

Giant salamander species found in what was thought to be an icy ecosystem

Feeding time —

Found after its kind were thought extinct, and where it was thought to be too cold.

A black background with a brown fossil at the center, consisting of the head and a portion of the vertebral column.

C. Marsicano

Gaiasia jennyae, a newly discovered freshwater apex predator with a body length reaching 4.5 meters, lurked in the swamps and lakes around 280 million years ago. Its wide, flattened head had powerful jaws full of huge fangs, ready to capture any prey unlucky enough to swim past.

The problem is, to the best of our knowledge, it shouldn’t have been that large, should have been extinct tens of millions of years before the time it apparently lived, and shouldn’t have been found in northern Namibia. “Gaiasia is the first really good look we have at an entirely different ecosystem we didn’t expect to find,” says Jason Pardo, a postdoctoral fellow at Field Museum of Natural History in Chicago. Pardo is co-author of a study on the Gaiasia jennyae discovery recently published in Nature.

Common ancestry

“Tetrapods were the animals that crawled out of the water around 380 million years ago, maybe a little earlier,” Pardo explains. These ancient creatures, also known as stem tetrapods, were the common ancestors of modern reptiles, amphibians, mammals, and birds. “Those animals lived up to what we call the end of Carboniferous, about 370–300 million years ago. Few made it through, and they lasted longer, but they mostly went extinct around 370 million years ago,” he adds.

This is why the discovery of Gaiasia jennyae in the 280 million-year-old rocks of Namibia was so surprising. Not only wasn’t it extinct when the rocks it was found in were laid down, but it was dominating its ecosystem as an apex predator. By today’s standards, it was like stumbling upon a secluded island hosting animals that should have been dead for 70 million years, like a living, breathing T-rex.

“The skull of gaiasia we have found is about 67 centimeters long. We also have a front end of her upper body. We know she was at minimum 2.5 meters long, probably 3.5, 4.5 meters—big head and a long, salamander-like body,” says Pardo. He told Ars that gaiasia was a suction feeder: she opened her jaws under water, which created a vacuum that sucked her prey right in. But the large, interlocked fangs reveal that a powerful bite was also one of her weapons, probably used to hunt bigger animals. “We suspect gaiasia fed on bony fish, freshwater sharks, and maybe even other, smaller gaiasia,” says Pardo, suggesting it was a rather slow, ambush-based predator.

But considering where it was found, the fact that it had enough prey to ambush is perhaps even more of a shocker than the animal itself.

Location, location, location

“Continents were organized differently 270–280 million years ago,” says Pardo. Back then, one megacontinent called Pangea had already broken into two supercontinents. The northern supercontinent called Laurasia included parts of modern North America, Russia, and China. The southern supercontinent, the home of gaiasia, was called Gondwana, which consisted of today’s India, Africa, South America, Australia, and Antarctica. And Gondwana back then was pretty cold.

“Some researchers hypothesize that the entire continent was covered in glacial ice, much like we saw in North America and Europe during the ice ages 10,000 years ago,” says Pardo. “Others claim that it was more patchy—there were those patches where ice was not present,” he adds. Still, 280 million years ago, northern Namibia was around 60 degrees southern latitude—roughly where the northernmost reaches of Antarctica are today.

“Historically, we thought tetrapods [of that time] were living much like modern crocodiles. They were cold-blooded, and if you are cold-blooded the only way to get large and maintain activity would be to be in a very hot environment. We believed such animals couldn’t live in colder environments. Gaiasia shows that it is absolutely not the case,” Pardo claims. And that upends a lot of what we thought we knew about life on Earth in gaiasia’s time.

Frozen mammoth skin retained its chromosome structure

Artist's depiction of a large mammoth with brown fur and huge, curving tusks in an icy, tundra environment.

One of the challenges of working with ancient DNA samples is that damage accumulates over time, breaking up the structure of the double helix into ever smaller fragments. In the samples we’ve worked with, these fragments scatter and mix with contaminants, making reconstructing a genome a large technical challenge.

But a dramatic paper released on Thursday shows that this isn’t always true. Damage does create progressively smaller fragments of DNA over time. But, if they’re trapped in the right sort of material, they’ll stay right where they are, essentially preserving some key features of ancient chromosomes even as the underlying DNA decays. Researchers have now used that to detail the chromosome structure of mammoths, with some implications for how these mammals regulated some key genes.

DNA meets Hi-C

The backbone of DNA’s double helix consists of alternating sugars and phosphates, chemically linked together (the bases of DNA are chemically linked to these sugars). Damage from things like radiation can break these chemical linkages, with fragmentation increasing over time. When samples reach the age of something like a Neanderthal, very few fragments are longer than 100 base pairs. Since chromosomes are millions of base pairs long, it was thought that this would inevitably destroy their structure, as many of the fragments would simply diffuse away.
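
A toy model (an assumption for illustration, not the paper’s analysis) shows how accumulating random breaks drives typical fragment sizes down toward that sub-100-base-pair scale:

```python
import random

# Toy model: assume each backbone position has an independent probability
# p_break of having broken by a given age. Random breakage chops a genome
# into fragments whose mean length is roughly 1 / p_break.
def mean_fragment_length(p_break: float, genome_length: int = 1_000_000) -> float:
    n_breaks = sum(1 for _ in range(genome_length) if random.random() < p_break)
    return genome_length / (n_breaks + 1)

for p in (0.0001, 0.001, 0.01):
    print(f"p_break={p}: mean fragment ~{mean_fragment_length(p):,.0f} bp")
# As accumulated damage (p_break) grows with time, typical fragments shrink
# toward the <100 bp scale described above.
```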

But that will only be true if the medium they’re in allows diffusion. And some scientists suspected that permafrost, which preserves the tissue of some now-extinct Arctic animals, might block that diffusion. So, they set out to test this using mammoth tissues, obtained from a sample termed YakInf that’s roughly 50,000 years old.

The challenge is that the molecular techniques we use to probe chromosomes take place in liquid solutions, where fragments would just drift away from each other in any case. So, the team focused on an approach termed Hi-C, which specifically preserves information about which bits of DNA were close to each other. It does this by exposing chromosomes to a chemical that will link any pieces of DNA that are in close physical proximity. So, even if those pieces are fragments, they’ll be stuck to each other by the time they end up in a liquid solution.

A few enzymes are then used to convert these linked molecules to a single piece of DNA, which is then sequenced. This data, which will contain sequence information from two different parts of the genome, then tells us that those parts were once close to each other inside a cell.

Interpreting Hi-C

On its own, a single bit of data like this isn’t especially interesting; two bits of genome might end up next to each other at random. But when you have millions of bits of data like this, you can start to construct a map of how the genome is structured.

There are two basic rules governing the pattern of interactions we’d expect to see. The first is that interactions within a chromosome are going to be more common than interactions between two chromosomes. And, within a chromosome, parts that are physically closer to each other on the molecule are more likely to interact than those that are farther apart.

So, if you are looking at a specific segment of, say, chromosome 12, most of the locations Hi-C will find it interacting with will also be on chromosome 12. And the frequency of interactions will go up as you move to sequences that are ever closer to the one you’re interested in.

On its own, you can use Hi-C to help reconstruct a chromosome even if you start with nothing but fragments. But the exceptions to the expected pattern also tell us things about biology. For example, genes that are active tend to be on loops of DNA, with the two ends of the loop held together by proteins; the same is true for inactive genes. Interactions within these loops tend to be more frequent than interactions between them, subtly altering the frequency with which two fragments end up linked together during Hi-C.
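
A toy sketch of those two baseline rules, using a handful of invented contact pairs rather than real Hi-C output:

```python
from collections import Counter

# Invented Hi-C contacts, each ((chromosome, position), (chromosome, position)).
contacts = [
    (("chr12", 1_000_000), ("chr12", 1_020_000)),
    (("chr12", 1_000_000), ("chr12", 1_050_000)),
    (("chr12", 1_000_000), ("chr12", 9_000_000)),
    (("chr12", 2_000_000), ("chr3", 5_000_000)),
]

# Rule 1: contacts within a chromosome should outnumber contacts between chromosomes.
intra = [abs(a[1] - b[1]) for a, b in contacts if a[0] == b[0]]
inter = [pair for pair in contacts if pair[0][0] != pair[1][0]]
print(f"intra-chromosome: {len(intra)}, inter-chromosome: {len(inter)}")

# Rule 2: intra-chromosome contacts should get rarer as genomic distance grows.
bins = Counter(distance // 1_000_000 for distance in intra)  # 1 Mb bins
for b in sorted(bins):
    print(f"{b}-{b + 1} Mb apart: {bins[b]} contact(s)")
```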

To help with climate change, carbon capture will have to evolve

gotta catch more —

The technologies are useful tools but have yet to move us away from fossil fuels.

Bioreactors that host algae would be one option for carbon sequestration—as long as the carbon is stored somehow.

More than 200 kilometers off Norway’s coast in the North Sea sits the world’s first offshore carbon capture and storage project. Built in 1996, the Sleipner project strips carbon dioxide from natural gas—largely made up of methane—to make it marketable. But instead of releasing the CO2 into the atmosphere, the greenhouse gas is buried.

The effort stores around 1 million metric tons of CO2 per year—and is praised by many as a pioneering success in global attempts to cut greenhouse gas emissions.

Last year, total global CO2 emissions hit an all-time high of around 35.8 billion tons, or gigatons. At these levels, scientists estimate, we have roughly six years left before we emit so much CO2 that global warming will consistently exceed 1.5° Celsius above average preindustrial temperatures, an internationally agreed-upon limit. (Notably, the global average temperature for the past 12 months has exceeded this threshold.)
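
A quick back-of-the-envelope reading of those two figures (the implied remaining budget is derived from the article’s numbers, not an independently sourced value):

```python
# Rough arithmetic behind the "roughly six years" estimate above.
annual_emissions_gt = 35.8      # 2023 global CO2 emissions, from the article
years_remaining = 6             # the article's rough estimate at current rates

implied_budget_gt = annual_emissions_gt * years_remaining
print(f"Implied remaining budget: ~{implied_budget_gt:.0f} gigatons of CO2")
# ~215 gigatons: emit that much more and warming is expected to consistently
# exceed the 1.5° Celsius threshold discussed above.
```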

Phasing out fossil fuels is key to cutting emissions and fighting climate change. But a suite of technologies collectively known as carbon capture, utilization and storage, or CCUS, are among the tools available to help meet global targets to cut CO2 emissions in half by 2030 and to reach net-zero emissions by 2050. These technologies capture, use or store away CO2 emitted by power generation or industrial processes, or suck it directly out of the air. The Intergovernmental Panel on Climate Change (IPCC), the United Nations body charged with assessing climate change science, includes carbon capture and storage among the actions needed to slash emissions and meet temperature targets.

Carbon capture, utilization and storage technologies often capture CO2 from coal or natural gas power generation or industrial processes, such as steel manufacturing. The CO2 is compressed into a liquid under high pressure and transported through pipelines to sites where it may be stored, in porous sedimentary rock formations containing saltwater, for example, or used for other purposes. The captured CO2 can be injected into the ground to extract oil dregs or used to produce cement and other products.

Governments and industry are betting big on such projects. Last year, for example, the British government announced 20 billion pounds (more than $25 billion) in funding for CCUS, often shortened to CCS. The United States allocated more than $5 billion between 2011 and 2023 and committed an additional $8.2 billion from 2022 to 2026. Globally, public funding for CCUS projects rose to $20 billion in 2023, according to the International Energy Agency (IEA), which works with countries around the world to forge energy policy.

Given the urgency of the situation, many people argue that CCUS is necessary to move society toward climate goals. But critics don’t see the technology, in its current form, shifting the world away from oil and gas: In a lot of cases, they point out, the captured CO2 is used to extract more fossil fuels in a process known as enhanced oil recovery. They contend that other existing solutions such as renewable energy offer deeper and quicker CO2 emissions cuts. “It’s better not to emit in the first place,” says Grant Hauber, an energy finance adviser at the Institute for Energy Economics and Financial Analysis, a nonpartisan organization in Lakewood, Ohio.

What’s more, fossil fuel companies provide funds to universities and researchers—which some say could shape what is studied and what is not, even if the work of individual scientists is legitimate. For these reasons, some critics say CCUS shouldn’t be pursued at all.

“Carbon capture and storage essentially perpetuates fossil fuel reliance. It’s a distraction and a delay tactic,” says Jennie Stephens, a climate justice researcher at Northeastern University in Boston. She adds that there is little focus on understanding the psychological, social, economic, and political barriers that prevent communities from shifting away from fossil fuels and forging solutions to those obstacles.

According to the Global CCS Institute, an industry-led think tank headquartered in Melbourne, Australia, of the 41 commercial projects operational as of July 2023, most were part of efforts that produce, extract, or burn fossil fuels, such as coal- and gas-fired power plants. That’s true of the Sleipner project, run by the energy company Equinor. It’s the case, too, with the world’s largest CCUS facility, operated by ExxonMobil in Wyoming, in the United States, which also captures CO2 as part of the production of methane.

Granted, not all CCUS efforts further fossil fuel production, and many projects now in the works have the sole goal of capturing and locking up CO2. Still, some critics doubt whether these greener approaches could ever lock away enough CO2 to meaningfully contribute to climate mitigation, and they are concerned about the costs.

Others are more circumspect. Sally Benson, an energy researcher at Stanford University, doesn’t want to see CCUS used as an excuse to carry on with fossil fuels. But she says the technology is essential for capturing some of the CO2 from fossil fuel production and usage, as well as from industrial processes, as society transitions to new energy sources. “If we can get rid of those emissions with carbon capture and sequestration, that sounds like success to me,” says Benson, who codirects an institute that receives funding from fossil fuel companies.

NASA update on Starliner thruster issues: This is fine

Boeing’s Starliner spacecraft on final approach to the International Space Station last month.

Before clearing Boeing’s Starliner crew capsule to depart the International Space Station and head for Earth, NASA managers want to ensure the spacecraft’s problematic control thrusters can help guide the ship’s two-person crew home.

The two astronauts who launched June 5 on the Starliner spacecraft’s first crew test flight agree with the managers, although they said Wednesday that they’re comfortable with flying the capsule back to Earth if there’s any emergency that might require evacuation of the space station.

NASA astronauts Butch Wilmore and Suni Williams were supposed to return to Earth weeks ago, but managers are keeping them at the station as engineers continue probing thruster problems and helium leaks that have plagued the mission since its launch.

“This is a tough business that we’re in,” Wilmore, Starliner’s commander, told reporters Wednesday in a news conference from the space station. “Human spaceflight is not easy in any regime, and there have been multiple issues with any spacecraft that’s ever been designed, and that’s the nature of what we do.”

Five of the 28 reaction control system thrusters on Starliner’s service module dropped offline as the spacecraft approached the space station last month. Starliner’s flight software disabled the five control jets when they started overheating and losing thrust. Four of the thrusters were later recovered, although some couldn’t reach their full power levels as Starliner came in for docking.

Wilmore, who took over manual control for part of Starliner’s approach to the space station, said he could sense the spacecraft’s handling qualities diminish as thrusters temporarily failed. “You could tell it was degraded, but still, it was impressive,” he said. Starliner ultimately docked to the station in autopilot mode.

In mid-June, the Starliner astronauts hot-fired the thrusters again, and their thrust levels were closer to normal.

“What we want to know is that the thrusters can perform; if whatever their percentage of thrust is, we can put it into a package that will get us a deorbit burn,” said Williams, a NASA astronaut serving as Starliner’s pilot. “That’s the main purpose that we need [for] the service module: to get us a good deorbit burn so that we can come back.”

These small thrusters aren’t necessary for the deorbit burn itself, which will use a different set of engines to slow Starliner’s velocity enough for it to drop out of orbit and head for landing. But Starliner needs enough of the control jets working to maneuver into the proper orientation for the deorbit firing.

This test flight is the first time astronauts have flown in space on Boeing’s Starliner spacecraft, following years of delays and setbacks. Starliner is NASA’s second human-rated commercial crew capsule, and it’s poised to join SpaceX’s Crew Dragon in a rotation of missions ferrying astronauts to and from the space station through the rest of the decade.

But first, Boeing and NASA need to safely complete the Starliner test flight and resolve the thruster problems and helium leaks plaguing the spacecraft before moving forward with operational crew rotation missions. There’s a Crew Dragon spacecraft currently docked to the station, but Steve Stich, NASA’s commercial crew program manager, told reporters Wednesday that, right now, Wilmore and Williams still plan to come home on Starliner.

“The beautiful thing about the commercial crew program is that we have two vehicles, two different systems, that we could use to return crew,” Stich said. “So we have a little bit more time to go through the data and then make a decision as to whether we need to do anything different. But the prime option today is to return Butch and Suni on Starliner. Right now, we don’t see any reason that wouldn’t be the case.”

Mark Nappi, Boeing’s Starliner program manager, said officials identified more than 30 actions to investigate five “small” helium leaks and the thruster problems on Starliner’s service module. “All these items are scheduled to be completed by the end of next week,” Nappi said.

“It’s a test flight, and the first with crew, and we’re just taking a little extra time to make sure that we understand everything before we commit to deorbit,” Stich said.

Congress apparently feels a need for “reaffirmation” of SLS rocket

Stuart Smalley is here to help with daily affirmations of SLS.

Aurich Lawson | SNL

There is a curious section in the new congressional reauthorization bill for NASA that concerns the agency’s large Space Launch System rocket.

The section is titled “Reaffirmation of the Space Launch System,” and in it Congress asserts its commitment to a flight rate of twice per year for the rocket. The reauthorization legislation, which cleared a House committee on Wednesday, also said NASA should identify other customers for the rocket.

“The Administrator shall assess the demand for the Space Launch System by entities other than NASA and shall break out such demand according to the relevant Federal agency or nongovernment sector,” the legislation states.

Congress directs NASA to report back, within 180 days of the legislation passing, on several topics. First, the legislators want an update on NASA’s progress toward achieving a flight rate of twice per year for the SLS rocket, and the Artemis mission by which this capability will be in place.

Additionally, Congress is asking for NASA to study demand for the SLS rocket and estimate “cost and schedule savings for reduced transit times” for deep space missions due to the “unique capabilities” of the rocket. The space agency also must identify any “barriers or challenges” that could impede use of the rocket by other entities other than NASA, and estimate the cost of overcoming those barriers.

Is someone afraid?

There is a fair bit to unpack here, but the inclusion of this section—there is no “reaffirmation” of the Orion spacecraft, for example—suggests that either the legacy space companies building the SLS rocket, local legislators, or both feel the need to protect the SLS rocket. As one source on Capitol Hill familiar with the legislation told Ars, “It’s a sign that somebody’s afraid.”

Congress created the SLS rocket 14 years ago with the NASA Authorization Act of 2010. The large rocket kept a river of contracts flowing to large aerospace companies, including Boeing and Northrop Grumman, who had been operating the Space Shuttle. Congress then lavished tens of billions of dollars on the contractors over the years for development, often authorizing more money than NASA said it needed. Congressional support was unwavering, at least in part because the SLS program boasts that it has jobs in every state.

Under the original law, the SLS rocket was supposed to achieve “full operational capability” by the end of 2016. The first launch of the SLS vehicle did not take place until late 2022, six years later. It was entirely successful. However, for various reasons, the rocket will not fly again until September 2025 at the earliest.

Nearby star cluster houses unusually large black hole

Big, but not that big —

Fast-moving stars imply that there’s an intermediate-mass black hole there.

From left to right, zooming in from the globular cluster to the site of its black hole.

ESA/Hubble & NASA, M. Häberle

Supermassive black holes appear to reside at the center of every galaxy and to have done so since galaxies formed early in the history of the Universe. Currently, however, we can’t entirely explain their existence, since it’s difficult to understand how they could have grown large enough to qualify as supermassive as quickly as they did.

A possible bit of evidence was recently found by using about 20 years of data from the Hubble Space Telescope. The data comes from a globular cluster of stars that’s thought to be the remains of a dwarf galaxy and shows that a group of stars near the cluster’s core are moving so fast that they should have been ejected from it entirely. That implies that something massive is keeping them there, which the researchers argue is a rare intermediate-mass black hole, weighing in at over 8,000 times the mass of the Sun.

Moving fast

The fast-moving stars reside in Omega Centauri, the largest globular cluster in the Milky Way. With an estimated 10 million stars, it’s a crowded environment, but observations are aided by its relative proximity, at “only” 17,000 light-years away. Those observations have been hinting that there might be a central black hole within the globular cluster, but the evidence has not been decisive.

The new work, done by a large international team, used over 500 images of Omega Centauri, taken by the Hubble Space Telescope over the course of 20 years. This allowed the researchers to track the motion of stars within the cluster and estimate their speed relative to the cluster’s center of mass. While this has been done previously, the most recent data allowed an update that reduced the uncertainty in the stars’ velocity.

Within the updated data, a number of stars near the cluster’s center stood out for their extreme velocities: seven of them were moving fast enough that the gravitational pull of the cluster isn’t enough to keep them there. All seven should have been lost from the cluster within 1,000 years, although the uncertainties remained large for two of them. Based on the size of the cluster, there shouldn’t even be a single foreground star between Hubble and Omega Centauri, so these really seem to be within the cluster despite their velocity.

The simplest explanation for that is that there’s an additional mass holding them in place. That could potentially be several massive objects, but the close proximity of all these stars to the center of the cluster favors a single, compact object. Which means a black hole.

Based on the velocities, the researchers estimate that the object has a mass of at least 8,200 times that of the Sun. A couple of stars appear to be accelerating; if that holds up based on further observations, it would indicate that the black hole is over 20,000 solar masses. That places it firmly within black hole territory, though smaller than supermassive black holes, which are viewed as those with roughly a million solar masses or more. And it’s considerably larger than you’d expect from black holes formed through the death of a star, which aren’t expected to be much larger than 100 times the Sun’s mass.
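
A minimal sketch of that line of reasoning, using placeholder values for a star’s speed and distance from the cluster center (not the study’s measurements):

```python
# A star moving faster than the local escape velocity can only stay bound if
# extra mass sits inside its orbit. From v_esc = sqrt(2 * G * M / r), the
# enclosed mass must satisfy M >= v**2 * r / (2 * G).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
PARSEC = 3.086e16      # meters per parsec

v = 40e3               # star's speed relative to the cluster center, m/s (assumed)
r = 0.03 * PARSEC      # star's distance from the cluster center, m (assumed)

m_min = v**2 * r / (2 * G)
print(f"Minimum enclosed mass: {m_min / M_SUN:,.0f} solar masses")
# With these placeholder numbers, several thousand solar masses of unseen mass
# are needed; the study's actual velocities imply at least ~8,200 solar masses.
```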

This places it in the category of intermediate-mass black holes, of which there are only a handful of potential sightings, none of them universally accepted. So, this is a significant finding if for no other reason than it may be the least controversial spotting of an intermediate-mass black hole yet.

What’s this telling us?

For now, there are still considerable uncertainties in some of the details here—but prospects for improving the situation exist. Observations with the Webb Space Telescope could potentially pick up the faint emissions from gas that’s falling into the black hole. And it can track the seven stars identified here. Its spectrographs could also potentially pick up the red and blue shifts in light caused by the star’s motion. Its location at a considerable distance from Hubble could also provide a more detailed three-dimensional picture of Omega Centauri’s central structure.

Figuring this out could potentially tell us more about how black holes grow to supermassive scales. Earlier potential sightings of intermediate-mass black holes have also come in globular clusters, which may suggest that they’re a general feature of large gatherings of stars.

But Omega Centauri differs from many other globular clusters, which often contain large populations of stars that all formed at roughly the same time, suggesting the clusters formed from a single giant cloud of materials. Omega Centauri has stars with a broad range of ages, which is one of the reasons why people think it’s the remains of a dwarf galaxy that was sucked into the Milky Way.

If that’s the case, then its central black hole is an analog of the supermassive black holes found in actual dwarf galaxies—which raises the question of why it’s only intermediate-mass. Did something about its interactions with the Milky Way interfere with the black hole’s growth?

And, in the end, none of this sheds light on how any black hole grows to be so much more massive than any star it could conceivably have formed from. Getting a better sense of this black hole’s history could provide more perspective on some questions that are currently vexing astronomers.

Nature, 2024. DOI: 10.1038/s41586-024-07511-z

Beryl is just the latest disaster to strike the energy capital of the world

Don’t know what you’ve got until it’s gone —

It’s pretty weird to use something I’ve written about in the abstract for so long.

Why yes, that Starlink dish is precariously perched to get around tree obstructions.

Eric Berger

I’ll readily grant you that Houston might not be the most idyllic spot in the world. The summer heat is borderline unbearable. The humidity is super sticky. We don’t have mountains or pristine beaches—we have concrete.

But we also have a pretty amazing melting pot of culture, wonderful cuisine, lots of jobs, and upward mobility. Most of the year, I love living here. Houston is totally the opposite of, “It’s a nice place to visit, but you wouldn’t want to live there.” Houston is not a particularly nice place to visit, but you might just want to live here.

Except for the hurricanes.

Houston is the largest city in the United States to be highly vulnerable to hurricanes. At a latitude of 29.7 degrees, the city is solidly in the subtropics, and much of it is built within 25 to 50 miles of the Gulf of Mexico. Every summer, with increasing dread, we watch tropical systems develop over the Atlantic Ocean and then move into the Gulf.

For some meteorologists and armchair forecasters, tracking hurricanes is fulfilling work and a passionate hobby. For those of us who live near the water along the upper Texas coast, following the movements of these storms is gut-wrenching stuff. A few days before a potential landfall, I’ll find myself jolted awake in the middle of the night by the realization that new model data must be available. When you see a storm turning toward you, or intensifying, it’s psychologically difficult to process.

Beryl the Bad

It felt like we were watching Beryl forever. It formed into a tropical depression on June 28, became a hurricane the next day, and by June 30, it was a major hurricane storming into the Caribbean Sea. Beryl set all kinds of records for a hurricane in late June and early July. Put simply, we have never seen an Atlantic storm intensify so rapidly, or so much, this early in the hurricane season. Beryl behaved as if it were the peak of the Atlantic season, in September, rather than the beginning of July—normally a pretty sleepy time for Atlantic hurricane activity. I wrote about this for Ars Technica a week ago.

At the time, it looked as though the greater Houston area would be completely spared by Beryl, as the most reliable modeling data took the storm across the Yucatan Peninsula and into the southern Gulf of Mexico before a final landfall in northern Mexico. But over time, the forecast began to change, with the track moving steadily up the Texas coast.

I was at a dinner to celebrate the birthday of my cousin’s wife last Friday when I snuck a peek at my phone. It was about 7 pm local time. We were at a Mexican restaurant in Galveston, and I knew the latest operational run of the European model was about to come out. This was a mistake, as the model indicated a landfall about 80 miles south of Houston, which would bring the core of the storm’s strongest winds over Houston.

I had to fake joviality for the rest of the night, while feeling sick to my stomach.

Barreling inland

The truth is, Beryl could have been much worse. After weakening due to interaction with the Yucatan Peninsula on Friday, Beryl moved into the Gulf of Mexico just about when I was having that celebratory dinner on Friday evening. At that point, it was a strong tropical storm with 60 mph sustained winds. It had nearly two and a half days over open water to re-organize, and that seemed likely. Beryl had Saturday to shrug off dry air and was expected to intensify significantly on Sunday. It was due to make landfall on Monday morning.

The track for Beryl continued to look grim over the weekend—although its landfall would occur well south of Houston, Beryl’s track inland would bring its center and core of strongest winds over the most densely populated part of the city. However, we took some solace from a lack of serious intensification on Saturday and Sunday. Even at 10 pm local time on Sunday, less than six hours before Beryl’s landfall near Matagorda, it was still not a hurricane.

However, in those final hours Beryl did finally start to get organized in a serious way. We have seen this before as hurricanes start to run up on the Texas coast, where frictional effects from its outer bands aid intensification. In the last six hours Beryl intensified into a Category 1 hurricane, with 80-mph sustained winds. The eyewall of the storm closed, and Beryl was poised for rapid intensification. Then it ran aground.

Normally, as a hurricane traverses land it starts to weaken fairly quickly. But Beryl didn’t. Instead, the storm maintained much of its strength and bulldozed right into the heart of Houston with near hurricane-force sustained winds and higher gusts. I suspect what happened is that Beryl, beginning to deepen, had a ton of momentum at landfall, and it took time for interaction with land to reverse that momentum and begin slowing down its winds.

First the lights went out. Then the Internet soon followed. Except for storm chasers, hurricanes are miserable experiences. There is the torrential rainfall and rising water. But most ominous of all, at least for me, are the howling winds. When stronger gusts come through, even sturdily built houses shake. Trees whip around violently. It is such an uncontrolled, violent fury that one must endure. Losing a connection to the outside world magnifies one’s sense of helplessness.

In the end, Beryl knocked out power to about 2.5 million customers across the Houston region, including yours truly. Because broadband Internet service providers generally rely on these electricity services to deliver Internet, many customers lost connectivity. Even cell phone towers, reduced to batteries or small generators, were often only capable of delivering text and voice services.

Why every quantum computer will need a powerful classical computer

A single logical qubit is built from a large collection of hardware qubits.

One of the more striking things about quantum computing is that the field, despite not having proven itself especially useful, has already spawned a collection of startups that are focused on building something other than qubits. It might be easy to dismiss this as opportunism—trying to cash in on the hype surrounding quantum computing. But it can be useful to look at the things these startups are targeting, because they can be an indication of hard problems in quantum computing that haven’t yet been solved by any one of the big companies involved in that space—companies like Amazon, Google, IBM, or Intel.

In the case of a UK-based company called Riverlane, the unsolved piece that is being addressed is the huge amount of classical computations that are going to be necessary to make the quantum hardware work. Specifically, it’s targeting the huge amount of data processing that will be needed for a key part of quantum error correction: recognizing when an error has occurred.

Error detection vs. the data

All qubits are fragile, tending to lose their state during operations, or simply over time. No matter what the technology—cold atoms, superconducting transmons, whatever—these error rates put a hard limit on the amount of computation that can be done before an error is inevitable. That rules out doing almost every useful computation operating directly on existing hardware qubits.

The generally accepted solution to this is to work with what are called logical qubits. These involve linking multiple hardware qubits together and spreading the quantum information among them. Additional hardware qubits are linked in so that they can be measured to monitor errors affecting the data, allowing them to be corrected. It can take dozens of hardware qubits to make a single logical qubit, meaning even the largest existing systems can only support about 50 robust logical qubits.
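
For a sense of the overhead, here is a rough sketch using the surface code, one common error-correction scheme (not necessarily the code any particular company uses):

```python
# A distance-d surface-code patch uses d*d data qubits plus d*d - 1 measurement
# qubits, so each logical qubit costs 2*d*d - 1 physical qubits.
def surface_code_physical_qubits(distance: int) -> int:
    return 2 * distance * distance - 1

for d in (3, 5, 7, 11):
    print(f"distance {d}: {surface_code_physical_qubits(d)} physical qubits per logical qubit")
# distance 3: 17, distance 5: 49, distance 7: 97 -- the "dozens" of hardware
# qubits per logical qubit mentioned above, before counting any other overhead.
```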

Riverlane’s founder and CEO, Steve Brierley, told Ars that error correction doesn’t only stress the qubit hardware; it stresses the classical portion of the system as well. Each of the measurements of the qubits used for monitoring the system needs to be processed to detect and interpret any errors. We’ll need roughly 100 logical qubits to do some of the simplest interesting calculations, meaning monitoring thousands of hardware qubits. Doing more sophisticated calculations may mean thousands of logical qubits.

That error-correction data (termed syndrome data in the field) needs to be read between each operation, which makes for a lot of data. “At scale, we’re talking a hundred terabytes per second,” said Brierley. “At a million physical qubits, we’ll be processing about a hundred terabytes per second, which is Netflix global streaming.”

It also has to be processed in real time; otherwise, computations will get held up waiting for error correction to happen. To avoid that, errors must be detected in real time. For transmon-based qubits, syndrome data is generated roughly every microsecond, so real time means completing the processing of the data—possibly terabytes of it—with a frequency of around a megahertz. And Riverlane was founded to provide hardware that’s capable of handling it.
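
A back-of-the-envelope version of that data-rate estimate; the bytes-per-measurement value is an assumption chosen to show the scaling, not a figure from Riverlane:

```python
# Rough scaling of the syndrome-readout firehose described above.
physical_qubits = 1_000_000
rounds_per_second = 1_000_000     # one syndrome-extraction round per microsecond
bytes_per_measurement = 100       # assumed raw readout volume per qubit per round

rate_bytes_per_second = physical_qubits * rounds_per_second * bytes_per_measurement
print(f"{rate_bytes_per_second / 1e12:.0f} TB/s")
# ~100 TB/s, matching the "hundred terabytes per second" figure quoted above.
```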

Handling the data

The system the company has developed is described in a paper that it has posted on the arXiv. It’s designed to handle syndrome data after other hardware has already converted the analog signals into digital form. This allows Riverlane’s hardware to sit outside any low-temperature hardware that’s needed for some forms of physical qubits.

That data is run through an algorithm the paper terms a “Collision Clustering decoder,” which handles the error detection. To demonstrate its effectiveness, the company implemented it on a typical field-programmable gate array (FPGA) from Xilinx, where it occupies only about 5 percent of the chip but can handle a logical qubit built from nearly 900 hardware qubits (simulated, in this case).

The company also demonstrated a custom chip that handled an even larger logical qubit, while only occupying a tiny fraction of a square millimeter and consuming just 8 milliwatts of power.

Both of these versions are highly specialized; they simply feed the error information for other parts of the system to act on. So, it is a highly focused solution. But it’s also quite flexible in that it works with various error-correction codes. Critically, it also integrates with systems designed to control a qubit based on very different physics, including cold atoms, trapped ions, and transmons.

“I think early on it was a bit of a puzzle,” Brierley said. “You’ve got all these different types of physics; how are we going to do this?” It turned out not to be a major challenge. “One of our engineers was in Oxford working with the superconducting qubits, and in the afternoon he was working with the ion trap qubits. He came back to Cambridge and he was all excited. He was like, ‘They’re using the same control electronics.'” It turns out that, regardless of the physics involved in controlling the qubits, everybody had borrowed the same hardware from a different field (Brierley said it was a Xilinx radiofrequency system-on-a-chip built for 5G base station prototyping.) That makes it relatively easy to integrate Riverlane’s custom hardware with a variety of systems.
