Tracing the lineages of agricultural ants to their most recent common ancestor revealed that the ancestor probably lived through the end-Cretaceous mass extinction—the one that killed off the dinosaurs. The researchers argue that the extinction and the origin of fungus farming were almost certainly related. Current models suggest that there was so much dust in the atmosphere after the impact that set off the mass extinction that photosynthesis shut down for nearly two years, meaning minimal plant life. By contrast, the huge amount of dead material would have allowed fungi to flourish. So it’s not surprising that ants began adapting to use what was available to them.
That explains the huge cluster of species that cooperate with fungi. However, most of the species that engage in organized farming don’t appear until roughly 35 million years after the mass extinction, at the end of the Eocene (about 33 million years ago). The researchers suggest that the climate changes that accompanied the transition to the Oligocene included a drying out of the tropical Americas, where the fungus-farming ants had evolved. This would have cut down on the availability of fungi in the wild, potentially selecting for species that could propagate fungi on their own.
This also corresponds to the origins of the yeast strains used by farming ants, as well as the most specialized agricultural fungal species. But it doesn’t account for the origin of coral fungus farmers, which seems to have occurred roughly 10 million years later.
The work gives us a much clearer picture of the origin of agriculture in ants and some reasonable hypotheses regarding the selective pressures that might have led to its evolution. In the long term, however, the biggest advance here may be the resources generated during this study. Ultimately, we’d like to understand the genetic basis for the changes in the ants’ behavior, as well as how the fungi have adapted to better provide for their farmers. To do that, we’ll need to compare the genomes of agricultural species with their free-living relatives. The DNA gathered for this study will ultimately be needed to pursue those questions.
Any striking marketing claims in companies’ ads about the gut benefits of a popular probiotic may be full of, well, the same thing that has their target audience backed up.
The probiotic Bifidobacterium animalis subsp. lactis—used in many probiotic products, including Dannon’s Activia yogurts—did nothing to improve bowel health in people with constipation, according to data from a randomized, triple-blind, placebo-controlled clinical trial published Wednesday in JAMA Network Open.
The study adds to a mixed and mostly unconvincing body of scientific literature on the bowel benefits of the bacterium, substrains of which are sometimes sold with faux scientific-sounding names in products. Dannon, for instance, previously marketed its substrain, DN-173 010, as “Bifidus regularis.”
Digested data
For the new study, researchers in China recruited 228 middle-aged adults, 85 percent of whom were women. The participants, all from Shanghai, were considered healthy based on medical testing and records, except for reporting functional constipation. This is a condition defined by having two or more signs of difficulty evacuating the bowels, such as frequent straining and having rock-like stool. For the study, the researchers included the additional criterion that participants have three or fewer complete, spontaneous bowel movements (CSBMs) per week.
The participants were randomized to take either a placebo (117 participants) or the probiotic (112 participants) every day for eight weeks. Both groups got packets of sweetened powder that participants added to a glass of water taken before breakfast each morning. In addition to a sweetener, the daily probiotic packets contained freeze-dried Bifidobacterium animalis subsp. lactis substrain HN019, which is used in some commercial probiotic products. The first dose had a concentration of 7 × 10⁹ colony-forming units (CFUs), then participants shifted to a daily dose of 4.69 × 10⁹ CFUs. Many probiotic products have doses of B. lactis ranging from 1 × 10⁹ to 17 × 10⁹ CFUs.
On Friday’s launch, United Launch Alliance will test the limits of its Centaur upper stage.
United Launch Alliance’s second Vulcan rocket underwent a countdown dress rehearsal Tuesday. Credit: United Launch Alliance
The second flight of United Launch Alliance’s Vulcan rocket, planned for Friday morning, has a primary goal of validating the launcher’s reliability for delivering critical US military satellites to orbit.
Tory Bruno, ULA’s chief executive, told reporters Wednesday that he is “supremely confident” the Vulcan rocket will succeed in accomplishing that objective. The Vulcan’s second test flight, known as Cert-2, follows a near-flawless debut launch of ULA’s new rocket on January 8.
“As I come up on Cert-2, I’m pretty darn confident I’m going to have a good day on Friday, knock on wood,” Bruno said. “These are very powerful, complicated machines.”
The Vulcan launcher, a replacement for ULA’s Atlas V and Delta IV rockets, is on contract to haul the majority of the US military’s most expensive national security satellites into orbit over the next several years. The Space Force is eager to certify Vulcan to launch these payloads, but military officials want to see two successful test flights before committing any of those satellites to the new rocket.
If Friday’s test flight goes well, ULA is on track to launch at least one—and perhaps two—operational missions for the Space Force by the end of this year. The Space Force has already booked 25 launches on ULA’s Vulcan rocket for military payloads and spy satellites for the National Reconnaissance Office. Including the launch Friday, ULA has 70 Vulcan rockets in its backlog, mostly for the Space Force, the NRO, and Amazon’s Kuiper satellite broadband network.
The Vulcan rocket is powered by two methane-fueled BE-4 engines produced by Jeff Bezos’ space company Blue Origin, and ULA can mount zero, two, four, or six strap-on solid rocket boosters from Northrop Grumman around the Vulcan’s first stage to propel heavier payloads to space. The rocket’s Centaur V upper stage is fitted with a pair of hydrogen-burning RL10 engines from Aerojet Rocketdyne.
The second Vulcan rocket will fly in the same configuration as the first launch earlier this year, with two strap-on solid-fueled boosters. The only noticeable modification to the rocket is the addition of some spray-on foam insulation around the outside of the first stage methane tank, which will keep the cryogenic fuel at the proper temperature as Vulcan encounters aerodynamic heating on its ascent through the atmosphere.
“This will give us just over one second more usable propellant,” Bruno wrote on X.
There is one more change from Vulcan’s first launch, which boosted a commercial lunar lander for Astrobotic on a trajectory toward the Moon. This time, there are no real spacecraft on the Vulcan rocket. Instead, ULA mounted a dummy payload to the Centaur V upper stage to simulate the mass of a functioning satellite.
ULA originally planned to launch Sierra Space’s first Dream Chaser spaceplane on the second Vulcan rocket. But the Dream Chaser won’t be ready to fly its first mission to resupply the International Space Station until next year. Under pressure from the Pentagon, ULA decided to move ahead with the second Vulcan launch without a payload at the company’s own expense, which Bruno tallied in the “high tens of millions of dollars.”
Heliocentricity
The test flight will begin with liftoff from Cape Canaveral Space Force Station, Florida, during a three-hour launch window opening at 6 am EDT (10:00 UTC). The 202-foot-tall (61.6-meter) Vulcan rocket will head east over the Atlantic Ocean, shedding its boosters, first stage, and payload fairing in the first few minutes of flight.
The Centaur upper stage will fire its RL10 engines two times, completing the primary mission within about 35 minutes of launch. The rocket will then continue on for a series of technical demonstrations before ending up on an Earth escape trajectory into a heliocentric orbit.
“We have a number of experiments that we’re conducting that are really technology demonstrations and measurements that are associated with our high-performance, longer-duration version of Centaur V that we’ll be introducing in the future,” Bruno said. “And these will help us go a little bit faster on that development. And, of course, because we don’t have an active spacecraft as a payload, we also have more instrumentation that we’re able to use for just characterizing the vehicle.”
ULA engineers have worked on the design of a long-lived upper stage for more than a decade. Their vision was to develop an upper stage fed by super-efficient cryogenic liquid hydrogen and liquid oxygen propellants that could generate its own power and operate in space for days, weeks, or longer rather than an upper stage’s usual endurance limit of several hours. This would allow the rocket to not only deliver satellites into bespoke high-altitude orbits but also continue on to release more payloads at different altitudes or provide longer-term propulsion in support of other missions.
The concept was called the Advanced Cryogenic Evolved Stage (ACES). ULA’s corporate owners, Boeing and Lockheed Martin, never authorized the full development of ACES, and the company said in 2020 that it was no longer pursuing the ACES concept.
The Centaur V upper stage currently used on the Vulcan rocket is a larger version of the thin-walled, pressure-stabilized Centaur upper stage that has been flying since the 1960s. Bruno said the Centaur V design, as it is today, offers as much as 12 hours of operating life in space. This is longer than any other existing rocket stage that uses cryogenic propellants, which boil off over time.
ULA’s chief executive still harbors an ambition to regain some of the capabilities promised by ACES.
“What we are looking to do is to extend that by orders of magnitude,” Bruno said. “And what that would allow us to do is have an in-space transportation capability for in-space mobility and servicing and things like that.”
Space Force leaders have voiced a desire for future spacecraft to freely maneuver between different orbits, a concept the military calls “dynamic space operations.” This would untether spacecraft operations from fuel limitations and eventually require the development of in-orbit refueling, propellant depots, or novel propulsion technologies.
No one has tried to store large amounts of super-cold propellants in space for weeks or longer. Accomplishing this is a non-trivial thermal problem, requiring insulation to keep heat from the Sun from reaching the liquid cryogenic propellant, stored at temperatures of several hundred degrees below zero.
Bruno hesitated to share details of the experiments ULA plans for the Centaur V upper stage on Friday’s test flight, citing proprietary concerns. He said the experiments will confirm analytical models about how the upper stage performs in space.
“Some of these are devices, some of these are maneuvers because maneuvers make a difference, and some are related to performance in a way,” he said. “In some cases, those maneuvers are helping us with the thermal load that tries to come in and boil off the propellants.”
Eventually, ULA would like to eliminate hydrazine attitude control fuel and battery power from the Centaur V upper stage, Bruno said Wednesday. This sounds a lot like what ULA wanted to do with ACES, which would have used an internal combustion engine called Integrated Vehicle Fluids (IVF) to recycle gasified waste propellants to pressurize its propellant tanks, generate electrical power, and feed thrusters for attitude control. This would mean the upper stage wouldn’t need to rely on hydrazine, helium, or batteries.
ULA hasn’t talked much about the IVF system in recent years, but Bruno said the company is still developing it. “It’s part of all of this, but that’s all I will say, or I’ll start revealing what all the gadgets are.”
A comparison between ULA’s legacy Centaur upper stage and the new Centaur V. Credit: United Launch Alliance
George Sowers, former vice president and chief scientist at ULA, was one of the company’s main advocates for extending the lifetime of upper stages and developing technologies for refueling and propellant depots. He retired from ULA in 2017 and is now a professor at the Colorado School of Mines and an independent aerospace industry consultant.
In an interview with Ars earlier this year, Sowers said ULA solved many of the problems with keeping cryogenic propellants at the right temperature in space.
“We had a lot of data on boil-off, just from flying Centaurs all the way to geosynchronous orbit, which doesn’t involve weeks, but it involves maybe half a day or so, which is plenty of time to get all the temperatures to stabilize at deep space levels,” Sowers said. “So you have to understand the heat transfer very well. Good models are very important.”
ULA experimented with different types of insulation and with vapor cooling, which involves taking the cold gas that boils off the cryogenic fuel and routing it over the points where heat penetrates into the tanks.
“There are tricks to managing boil-off,” he said. “One of the tricks is that you never want to boil oxygen. You always want to boil hydrogen. So you size your propellant tanks and your propellant loads, assuming you’re going to have that extra hydrogen boil-off. Then what you can do is use the hydrogen to keep the oxygen cold to keep it from boiling.
“The amount of heat that you can reject by boiling off one kilogram of hydrogen is about five times what you would reject by boiling off one kilogram of oxygen. So those are some of the thermodynamic tricks,” Sowers said. “The way ULA accomplished that is by having a common bulkhead, so the hydrogen tank and the oxygen tank are in thermal contact. So hydrogen keeps the oxygen cold.”
ULA’s experiments showed it could get the hydrogen boil-off rate down to about 10 percent per year, based on thermodynamic models calibrated by data from flying older versions of the Centaur upper stage on Atlas V rockets, according to Sowers.
“In my mind, that kind of cemented the idea that distribution depots and things like that are very well in hand without having to have exotic cryocoolers, which tend to use a lot of power,” Sowers said. “It’s about efficiency. If you can do it passively, you don’t have to expend energy on cryocoolers.”
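The hydrogen-versus-oxygen trade Sowers describes can be checked with back-of-the-envelope arithmetic. The sketch below uses approximate textbook property values (our assumptions, not figures from ULA or Sowers) and assumes the vented hydrogen vapor warms to the oxygen tank’s roughly 90 K:

```python
# Rough sketch of the vapor-cooling arithmetic Sowers describes.
# Property values are approximate textbook figures, not ULA data,
# and the 90 K endpoint for the vented hydrogen is an assumption.

LATENT_H2 = 446.0    # kJ/kg, heat of vaporization of liquid hydrogen (~20 K)
LATENT_O2 = 213.0    # kJ/kg, heat of vaporization of liquid oxygen (~90 K)
CP_H2_VAPOR = 10.0   # kJ/(kg*K), rough mean heat capacity of cold H2 gas

# Vapor cooling: the vented hydrogen gas warms from ~20 K up to the
# oxygen tank's ~90 K, absorbing sensible heat on top of the boil-off.
delta_t = 90.0 - 20.0

heat_per_kg_h2 = LATENT_H2 + CP_H2_VAPOR * delta_t  # boil, then warm the vapor
heat_per_kg_o2 = LATENT_O2                          # boil-off alone

print(f"per kg H2: {heat_per_kg_h2:.0f} kJ")   # 1146 kJ
print(f"per kg O2: {heat_per_kg_o2:.0f} kJ")   # 213 kJ
print(f"ratio: {heat_per_kg_h2 / heat_per_kg_o2:.1f}x")
```

With these assumed numbers, the ratio lands near the “about five times” figure Sowers cites. Latent heat alone would give only about a 2x advantage; it is the warming of the cold vented vapor that does most of the work, which is why routing that vapor past the oxygen tank pays off.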
“We’re going to go to days, and then we’re going to go to weeks, and then we think it’s possible to take us to months,” Bruno said. “That’s a game changer.”
However, ULA’s corporate owners haven’t yet fully bought into this vision. Bruno said the Vulcan rocket and its supporting manufacturing and launch infrastructure cost between $5 billion and $7 billion to develop. ULA also plans to eventually recover and reuse BE-4 main engines from the Vulcan rocket, but that is still at least several years away.
But ULA is reportedly up for sale, and a well-capitalized buyer might find the company’s long-duration cryogenic upper stage more attractive and worth the investment.
“There’s a whole lot of missions that enables,” Bruno said. “So that’s a big step in capability, both for the United States and also commercially.”
Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
Swierk et al. use various methods, including Raman spectroscopy, nuclear magnetic resonance spectroscopy, and electron microscopy, to analyze a broad range of commonly used tattoo inks. This enables them to identify specific pigments and other ingredients in the various inks.
Earlier this year, Swierk’s team identified 45 out of 54 inks (83 percent) with major labeling discrepancies in the US. Allergic reactions to the pigments, especially red inks, have already been documented. For instance, a 2020 study found a connection between contact dermatitis and how tattoos degrade over time. But additives can also have adverse effects. More than half of the tested inks contained unlisted polyethylene glycol—repeated exposure could cause organ damage—and 15 of the inks contained a potential allergen called propylene glycol.
Meanwhile, across the pond…
That’s a major reason why the European Commission has recently begun to crack down on harmful chemicals in tattoo ink, including banning two widely used blue and green pigments (Pigment Blue 15 and Pigment Green 7), claiming they are often of low purity and can contain hazardous substances. (US regulations are less strict than those adopted by the EU.) Swierk’s team has now expanded its chemical analysis to include 10 different tattoo inks from five different manufacturers supplying the European market.
According to Swierk et al., nine of those 10 inks did not meet EU regulations; five simply failed to list all the components, but four contained prohibited ingredients. The other main finding was that Raman spectroscopy is not very reliable for figuring out which of three common structures of Pigment Blue 15 has been used. (Only one has been banned.) Different instruments failed to reliably distinguish between the three forms, so the authors concluded that the current ban on Pigment Blue 15 is simply unenforceable.
“There are regulations on the books that are not being complied with, at least in part because enforcement is lagging,” said Swierk. “Our work cannot determine whether the issues with inaccurate tattoo ink labeling are intentional or unintentional, but at a minimum, it highlights the need for manufacturers to adopt better manufacturing standards. At the same time, the regulations that are on the books need to be enforced, and if they cannot be enforced, like we argue in the case of Pigment Blue 15, they need to be reevaluated.”
Residents line up for COVID-19 testing on November 30, 2020 in Chicago.
The co-owner of a Chicago-based lab has pleaded guilty for his role in a COVID testing scam that raked in millions—which he used to buy stocks, cryptocurrency, and several luxury cars while still squirreling away over $6 million in his personal bank account.
Zishan Alvi, 45, of Inverness, Illinois, co-owned LabElite, which federal prosecutors say billed the federal government for COVID-19 tests that were either never performed or were performed with purposely inadequate components, rendering them useless. Customers who sought testing from LabElite—sometimes for clearance to travel or have contact with vulnerable people—received either no results or results indicating they were negative for the deadly virus.
The scam, which ran from around February 2021 to about February 2022, took in over $83 million in fraudulent payments from the federal government’s Health Resources and Services Administration (HRSA), which covered the cost of COVID-19 testing for people without insurance during the height of the pandemic. Local media coverage indicated that people who sought testing at LabElite were discouraged from providing health insurance information.
The list included five vehicles: a 2021 Mercedes-Benz, a 2021 Land Rover Range Rover HSE, a 2021 Lamborghini Urus, a 2021 Bentley, and a 2022 Tesla Model X. There was also about $810,000 in an E*Trade account, approximately $500,000 in a Fidelity Investments account, and $245,814 in a Coinbase account. Finally, there was $6,825,089 in Alvi’s personal bank account.
On Monday, the Department of Justice announced a deal in which Alvi pleaded guilty to one count of wire fraud, taking responsibility for $14 million worth of fraudulent HRSA claims. He now faces up to 20 years in prison and will be sentenced on February 7, 2025.
This video screenshot released by the US National Transportation Safety Board (NTSB) shows the site of a derailed freight train in East Palestine, Ohio.
On February 3, 2023, a train carrying chemicals jumped the tracks in East Palestine, Ohio, rupturing railcars filled with hazardous materials and fueling chemical fires at the foothills of the Appalachian Mountains.
The disaster drew global attention as the governors of Ohio and Pennsylvania urged evacuations for a mile around the site. Flames and smoke billowed from burning chemicals, and an acrid odor radiated from the derailment area as chemicals entered the air and spilled into a nearby creek.
Three days later, at the urging of the rail company Norfolk Southern, about 1 million pounds of vinyl chloride, a chemical that can be toxic to humans at high doses, was released from the damaged train cars and set aflame.
As environmental engineers, my colleagues and I are often asked by government agencies and communities to assist with public health decisions after disasters. After the evacuation order was lifted, community members asked for help.
In a new study, we describe the contamination we found, along with problems with the response and cleanup that, in some cases, increased the chances that people would be exposed to hazardous chemicals. It offers important lessons to better protect communities in the future.
A computer model shows how chemicals from the train may have spread, given wind patterns. The star on the Ohio-Pennsylvania line is the site of the derailment.
Air pollution can find its way into buildings through cracks, windows, doors, and other portals. Once inside, the chemicals can penetrate home items like carpets, drapes, furniture, counters, and clothing. When the air is stirred up, those chemicals can be released again.
Evacuation order lifted, but buildings were contaminated
Three weeks after the derailment, we began investigating the safety of the area near 17 buildings in Ohio and Pennsylvania. The highest concentration of air pollution occurred in the 1-mile evacuation zone and a shelter-in-place band another mile beyond that. But the chemical plume also traveled outside these areas.
In and outside East Palestine, evidence indicated that chemicals from the railcars had entered buildings. Many residents complained about headaches, rashes, and other health symptoms after reentering the buildings.
At one building 0.2 miles away from the derailment site, the indoor air was still contaminated more than four months later.
Nine days after the derailment, sophisticated air testing by a business owner showed the building’s indoor air was contaminated with butyl acrylate and other chemicals carried by the railcars. Butyl acrylate was found above the two-week exposure level, a level at which measures should be taken to protect human health.
When rail company contractors visited the building 11 days after the wreck, their team left after just 10 minutes. They reported an “overwhelming/unpleasant odor” even though their government-approved handheld air pollution detectors registered no chemicals. This building was located directly above Sulphur Run creek, which had been heavily contaminated by the spill. Chemicals likely entered from the initial smoke plumes and also rose from the creek into the building.
Our tests weeks later revealed that railcar chemicals had even penetrated the business’s silicone wristband products on its shelves. We also detected several other chemicals that may have been associated with the spill.
Homes and businesses were mere feet from the contaminated waterways in East Palestine.
Weeks after the derailment, government officials discovered that air in the East Palestine Municipal Building, about 0.7 miles away from the derailment site, was also contaminated. Airborne chemicals had entered that building through an open drain pipe from Sulphur Run.
More than a month after the evacuation order was lifted, the Ohio Environmental Protection Agency acknowledged that chemicals were entering multiple buildings in East Palestine as contractors cleaned contaminated culverts under and alongside them.
The empty chair of Steward Health Care System CEO Dr. Ralph de la Torre, who did not show up for the US Senate Committee on Health, Education, Labor, and Pensions hearing, “Examining the Bankruptcy of Steward Health Care: How Management Decisions Have Impacted Patient Care.”
In a federal lawsuit filed Monday, Steward CEO Ralph de la Torre claimed the senators “bulldozed over [his] constitutional rights” as they tried to “pillory and crucify him as a loathsome criminal” in a “televised circus.”
The Senate committee—the Committee on Health, Education, Labor, and Pensions (HELP), led by Bernie Sanders (I-Vt.)—issued a rare subpoena to de la Torre in July, compelling him to testify before the lawmakers. They sought to question the CEO on the deterioration of his hospital system, which previously included more than 30 hospitals across eight states. Steward filed for bankruptcy in May.
Imperiled patients
The committee alleges that de la Torre and Steward executives reaped millions in personal profits by hollowing out the health care facilities, even selling the land out from under them. The mismanagement left them so financially burdened that one doctor in a Steward-owned hospital in Louisiana said they were forced to perform “third-world medicine.” A lawmaker in that state who investigated the conditions at the hospital described Steward executives as “health care terrorists.”
Further, the financial strain on the hospitals is alleged to have led to the preventable deaths of 15 patients and put more than 2,000 other patients in “immediate peril.” As hospitals cut services, closed wards, or shuttered entirely, hundreds of health care workers were laid off, and communities were left without access to care. Nurses who remained in faltering facilities testified of harrowing conditions, including running out of basic supplies like beds. In one Massachusetts hospital, nurses were forced to place the remains of newborns in cardboard shipping boxes because Steward failed to pay a vendor for bereavement boxes.
Meanwhile, records indicate de la Torre and his companies were paid at least $250 million in recent years, and he bought a 190-foot yacht for $40 million. Steward also owned two private jets collectively worth $95 million.
While de la Torre initially agreed to testify before the committee at the September 12 hearing, the wealthy CEO backed out the week beforehand. He claimed that a federal court order linked to the bankruptcy case prevented him from speaking on the matter; additionally, he invoked his Fifth Amendment right to avoid self-incrimination.
The HELP committee rejected de la Torre’s arguments, saying there were still relevant topics he could safely discuss without violating the order and that his Fifth Amendment rights did not permit him to refuse to appear before Congress when summoned by a subpoena. Still, the CEO was a no-show, and the Senate moved forward with the contempt charges.
“Not the way this works”
In the lawsuit filed today, de la Torre argues that the senators are attempting to punish him for invoking his Constitutional rights and that the hearing “was simply a device for the Committee to attack [him] and try to publicly humiliate and condemn him.”
The suit describes de la Torre as having a “distinguished career, bedecked by numerous accomplishments,” while accusing the senators of painting him as “a villain and scapegoat[ing] him for the company’s problems, even those caused by systemic deficiencies in Massachusetts’ health care system.” If he had appeared at the Congressional hearing, he would not have been able to defend himself from the personal attacks without being forced to abandon his Constitutional rights, the suit argues.
“Indeed, the Committee made it abundantly clear that they would put Dr. de la Torre’s invocation [of the Fifth Amendment] itself at the heart of their televised circus and paint him as guilty for the sin of remaining silent in the face of these assaults on his character and integrity,” the suit reads.
De la Torre seeks to have the federal court quash the Senate committee’s subpoena, enjoin both contempt charges, and declare that the Senate committee violated his Fifth Amendment rights.
Outside lawyers are skeptical that will occur. The lawsuit is a “Hail Mary play,” according to Stan M. Brand, an attorney who represented former Trump White House official Peter Navarro in a contempt of Congress case. De la Torre’s case “has very little chance of succeeding—I would say no chance of succeeding,” Brand told the Boston Globe.
“Every time that someone has tried to sue the House or Senate directly to challenge a congressional subpoena, the courts have said, ‘That’s not the way this works,’” Brand said.
The Ratcliffe-on-Soar plant is set to shut down for good today.
On Monday, the UK will see the closure of its last operational coal power plant, Ratcliffe-on-Soar, which has been operating since 1968. The closure of the plant, which had a capacity of 2,000 megawatts, will bring an end to the history of the country’s coal use, which started with the opening of the first coal-fired power station in 1882. Coal played a central part in the UK’s power system in the interim, in some years providing over 90 percent of its total electricity.
But a number of factors combined to place coal in a long-term decline: the growth of natural gas-powered plants and renewables, pollution controls, carbon pricing, and a government goal to hit net-zero greenhouse gas emissions by 2050.
From boom to bust
It’s difficult to overstate the importance of coal to the UK grid. It was providing over 90 percent of the UK’s electricity as recently as 1956. The total amount of power generated continued to climb well after that, reaching a peak of 212 terawatt hours of production by 1980. And the construction of new coal plants was under consideration as recently as the late 2000s. According to the organization Carbon Brief’s excellent timeline of coal use in the UK, even continuing coal with carbon capture was considered.
But several factors slowed the use of the fuel ahead of any climate goals set out by the UK, some of which have parallels to the US’s situation. The European Union, which included the UK at the time, instituted new rules to address acid rain, which raised the cost of coal plants. In addition, the exploitation of oil and gas deposits in the North Sea provided access to an alternative fuel. Meanwhile, major gains in efficiency and the shift of some heavy industry overseas cut demand in the UK significantly.
Through their effect on coal use, these changes also lowered employment in coal mining. The mining sector has sometimes been a significant force in UK politics, but the decline of coal reduced the number of people employed in the sector, reducing its political influence.
These factors had all reduced the use of coal even before governments started taking aggressive steps to limit climate change. Then, in 2005, the EU implemented a carbon trading system that put a cost on emissions. By 2008, the UK government had adopted national emissions targets, which successive Labour and Conservative governments maintained and strengthened up until Rishi Sunak, who was voted out of office before he had altered the UK’s trajectory. What started as a pledge for a 60 percent reduction in greenhouse gas emissions by 2050 now requires the UK to hit net zero by that date.
Renewables, natural gas, and efficiency have all squeezed coal off the UK grid.
The UK’s policies have included a floor on the price of carbon that ensures fossil-powered plants pay a cost for emissions significant enough to promote the transition to renewables, even when prices in the EU’s carbon trading scheme are too low to do so. And that transition has been rapid, with total generation by renewables nearly tripling in the decade since 2013, heavily aided by the growth of offshore wind.
How to clean up the power sector
The trends were significant enough that, in 2015, the UK announced that it would target the end of coal in 2025, despite the fact that the first coal-free day on the grid wouldn’t come until two years later. Two years after that landmark, however, the UK was seeing entire weeks where no coal-fired plants were active.
To limit the worst impacts of climate change, it will be critical for other countries to follow the UK’s lead. So it’s worthwhile to consider how a country that was committed to coal relatively recently could manage such a rapid transition. There are a few UK-specific factors that won’t be possible to replicate everywhere. The first is that most of its coal infrastructure was quite old—Ratcliffe-on-Soar dates from the 1960s—and so it required replacement in any case. Part of the reason for its aging coal fleet was the local availability of relatively cheap natural gas, something that might not be true elsewhere, which put economic pressure on coal generation.
Another key factor is that the ever-shrinking number of people employed by coal power didn’t exert significant pressure on government policies. Despite the existence of a vocal group of climate contrarians in the UK, the issue never became heavily politicized. Both Labour and Conservative governments maintained a fact-based approach to climate change and set policies accordingly. That’s notably not the case in countries like the US and Australia.
But other factors are going to be applicable to a wide variety of countries. As the UK was moving away from coal, renewables became the cheapest way to generate power in much of the world. Coal is also the most polluting source of electrical power, providing ample reasons for regulation that have little to do with climate. Forcing coal users to pay even a fraction of its externalized costs on human health and the environment serves to make it even less economical compared to alternatives.
If these latter factors can drive a move away from coal despite government inertia, then it can pay significant dividends in the fight to limit climate change. Inspired in part by the success in moving its grid off coal, the new Labour government in the UK has moved up its timeline for decarbonizing its power sector to 2030 (the previous Conservative government’s target was 2035).
3D rendering of an NK cell destroying a cancer cell.
Billions of cells die in your body every day. Some go out with a bang, others with a whimper.
They can die by accident if they’re injured or infected. Alternatively, should they outlive their natural lifespan or start to fail, they can carefully arrange for a desirable demise, with their remains neatly tidied away.
Originally, scientists thought those were the only two ways an animal cell could die, by accident or by that neat-and-tidy version. But over the past couple of decades, researchers have racked up many more novel cellular death scenarios, some specific to certain cell types or situations. Understanding this panoply of death modes could help scientists save good cells and kill bad ones, leading to treatments for infections, autoimmune diseases, and cancer.
“There’s lots and lots of different flavors here,” says Michael Overholtzer, a cell biologist at Memorial Sloan Kettering Cancer Center in New York. He estimates that there are now more than 20 different names to describe cell death varieties.
Here, Knowable Magazine profiles a handful of classic and new modes by which cells kick the bucket.
Unplanned cell death: Necrosis
Lots of bad things can happen to cells: They get injured or burned, poisoned or starved of oxygen, infected by microbes or otherwise diseased. When a cell dies by accident, it’s called necrosis.
There are several necrosis types, none of them pretty: In the case of gangrene, when cells are starved for blood, cells rot away. In other instances, dying cells liquefy, sometimes turning into yellow goop. Lung cells damaged by tuberculosis turn smushy and white — the technical name for this type, “caseous” necrosis, literally means “cheese-like.”
Any form of death other than necrosis is considered “programmed,” meaning it’s carried out intentionally by the cell because it’s damaged or has outlived its usefulness.
A good, clean death: Apoptosis
The two main categories of programmed cell death are “silent and violent,” says Thirumala-Devi Kanneganti, an immunologist at St. Jude Children’s Research Hospital in Memphis, Tennessee. Apoptosis, first named in 1972, is the original silent type: It’s a neat, clean form of cell death that doesn’t wake the immune system.
That’s handy when cells are damaged or have served out their purpose. Apoptosis allows tadpoles to discard tail cells when they become frogs, for example, or human embryos to dispose of the webbing between developing fingers.
The cell shrinks and detaches from its neighbors. Genetic material in the nucleus breaks into pieces that scrunch together, and the nucleus itself fragments. The membrane bubbles and blisters, and the cell disintegrates. Other cells gobble up the bits, keeping the tissue tidy.
In necrosis, a cell dies by accident, releasing its contents and drawing immune cells to the site of damage by creating inflammation. In apoptosis, the cell collapses in on itself and the bits are cleared away without causing damaging inflammation.
SpaceX’s Crew Dragon spacecraft climbs away from Cape Canaveral Space Force Station, Florida, on Saturday atop a Falcon 9 rocket.
NASA/Keegan Barber
NASA astronaut Nick Hague and Russian cosmonaut Aleksandr Gorbunov lifted off Saturday from Florida’s Space Coast aboard a SpaceX Dragon spacecraft, heading for a five-month expedition on the International Space Station.
The two-man crew launched on top of SpaceX’s Falcon 9 rocket at 1:17 pm EDT (17:17 UTC), taking advantage of a break in stormy weather to begin a five-month expedition in space. Nine kerosene-fueled Merlin engines powered the first stage of the flight on a trajectory northeast from Cape Canaveral Space Force Station, then the booster detached and returned to landing at Cape Canaveral as the Falcon 9’s upper stage accelerated SpaceX’s Crew Dragon Freedom spacecraft into orbit.
“It was a sweet ride,” Hague said after arriving in space. With a seemingly flawless launch, Hague and Gorbunov are on track to arrive at the space station around 5:30 pm EDT (2130 UTC) Sunday.
Empty seats
This is SpaceX’s 15th crew mission since 2020, and SpaceX’s 10th astronaut launch for NASA, but Saturday’s launch was unusual in a couple of ways.
“All of our missions have unique challenges and this one, I think, will be memorable for a lot of us,” said Ken Bowersox, NASA’s associate administrator for space operations.
First, only two people rode into orbit on SpaceX’s Crew Dragon spacecraft, rather than the usual complement of four astronauts. This mission, known as Crew-9, originally included Hague, Gorbunov, commander Zena Cardman, and NASA astronaut Stephanie Wilson.
But the troubled test flight of Boeing’s Starliner spacecraft threw a wrench into NASA’s plans. The Starliner mission launched in June with NASA astronauts Butch Wilmore and Suni Williams. Boeing’s spacecraft reached the space station, but thruster failures and helium leaks plagued the mission, and NASA officials decided last month it was too risky to bring the crew back to Earth on Starliner.
NASA selected SpaceX and Boeing for multibillion-dollar commercial crew contracts in 2014, with each company responsible for developing human-rated spaceships to ferry astronauts to and from the International Space Station. SpaceX flew astronauts for the first time in 2020, and Boeing reached the same milestone with the test flight that launched in June.
Ultimately, the Starliner spacecraft safely returned to Earth on September 6 with a successful landing in New Mexico. But it left Wilmore and Williams behind on the space station with the lab’s long-term crew of seven astronauts and cosmonauts. The space station crew rigged two temporary seats with foam inside a SpaceX Dragon spacecraft currently docked at the outpost, where the Starliner astronauts would ride home if they needed to evacuate the complex in an emergency.
NASA astronaut Nick Hague and Russian cosmonaut Aleksandr Gorbunov in their SpaceX pressure suits.
NASA/Kim Shiflett
This is a temporary measure to allow the Dragon spacecraft to return to Earth with six people instead of the usual four. NASA officials decided to remove two of the astronauts from the next SpaceX crew mission to free up normal seats for Wilmore and Williams to ride home in February, when Crew-9 was already slated to end its mission.
The decision to fly the Starliner spacecraft back to Earth without its crew had several second-order effects on space station operations. Managers at NASA’s Johnson Space Center in Houston had to decide who to bump from the Crew-9 mission, and who to keep on the crew.
Nick Hague and Aleksandr Gorbunov ended up keeping their seats on the Crew-9 flight. Hague originally trained as the pilot on Crew-9, and NASA decided he would take Zena Cardman’s place as commander. Hague, a 49-year-old Space Force colonel, is a veteran of one long-duration mission on the International Space Station, and also experienced a rare in-flight launch abort in 2018 due to a failure of a Russian Soyuz rocket.
NASA announced the original astronaut assignments for the Crew-9 mission in January. Cardman, a 36-year-old geobiologist, would have been the first rookie astronaut without test pilot experience to command a NASA spaceflight. Three-time space shuttle flier Stephanie Wilson, 58, was the other astronaut removed from the Crew-9 mission.
The decision on who to fly on Crew-9 was a “really close call,” said Bowersox, who oversees NASA’s spaceflight operations directorate. “They were thinking very hard about flying Zena, but in this situation, it made sense to have somebody who had at least one flight under their belt.”
Gorbunov, a 34-year-old Russian aerospace engineer making his first flight to space, moved over to take the pilot’s seat in the Crew Dragon spacecraft, although he remains officially designated a mission specialist. His remaining presence on the crew was preordained because of an international agreement between NASA and Russia’s space agency that provides seats for Russian cosmonauts on US crew missions and US astronauts on Russian Soyuz flights to the space station.
Bowersox said NASA will reassign Cardman and Wilson to future flights.
NASA astronauts Suni Williams and Butch Wilmore, seen in their Boeing flight suits before their launch.
Operational flexibility
This was also the first launch of astronauts from Space Launch Complex-40 (SLC-40) at Cape Canaveral, SpaceX’s busiest launch pad. SpaceX has outfitted the launch pad with the equipment necessary to support launches of human spaceflight missions on the Crew Dragon spacecraft, including a more than 200-foot-tall tower and a crew access arm to allow astronauts to board spaceships on top of Falcon 9 rockets.
SLC-40 was previously based on a “clean pad” architecture, without any structures to service or access Falcon 9 rockets while they were vertical on the pad. SpaceX also installed slide chutes to give astronauts and ground crews an escape route away from the launch pad in an emergency.
SpaceX constructed the crew tower last year and had it ready for the launch of a Dragon cargo mission to the space station in March. Saturday’s launch demonstrated the pad’s ability to support SpaceX astronaut missions, which have previously all departed from Launch Complex-39A (LC-39A) at NASA’s Kennedy Space Center, a few miles north of SLC-40.
Bringing human spaceflight launch capability online at SLC-40 gives SpaceX and NASA additional flexibility in their scheduling. For example, LC-39A remains the only launch pad configured to support flights of SpaceX’s Falcon Heavy rocket. SpaceX is now preparing LC-39A for a Falcon Heavy launch October 10 with NASA’s Europa Clipper mission, which only has a window of a few weeks to depart Earth this year and reach its destination at Jupiter in 2030.
With SLC-40 now certified for astronaut launches, SpaceX and NASA teams are able to support the Crew-9 and Europa Clipper missions without worrying about scheduling conflicts. The Florida spaceport now has three launch pads certified for crew flights—two for SpaceX’s Dragon and one for Boeing’s Starliner—and NASA will add a fourth human-rated launch pad with the Artemis II mission to the Moon late next year.
“That’s pretty exciting,” said Pam Melroy, NASA’s deputy administrator. “I think it’s a reflection of where we are in our space program at NASA, but also the capabilities that the United States has developed.”
Earlier this week, Hague and Gorbunov participated in a launch day dress rehearsal, when they had the opportunity to familiarize themselves with SLC-40. The launch pad has the same capabilities as LC-39A, but with a slightly different layout. SpaceX also test-fired the Falcon 9 rocket Tuesday evening, before lowering the rocket horizontal and moving it back into a hangar for safekeeping as the outer bands of Hurricane Helene moved through Central Florida.
Inside the hangar, SpaceX technicians discovered sooty exhaust from the Falcon 9’s engines accumulated on the outside of the Dragon spacecraft during the test-firing. Ground teams wiped the soot off of the craft’s solar arrays and heat shield, then repainted portions of the capsule’s radiators around the edge of Dragon’s trunk section before rolling the vehicle back to the launch pad Friday.
“It’s important that the radiators radiate heat in the proper way to space, so we had to put some new paint on to get that back to the right emissivity and the right reflectivity and absorptivity of the solar radiation that hit those panels so it will reject the heat properly,” said Bill Gerstenmaier, SpaceX’s vice president of build and flight reliability.
Gerstenmaier also outlined a new backup ability for the Crew Dragon spacecraft to safely splash down even if all of its parachutes fail to deploy on final descent back to Earth. This involves using the capsule’s eight powerful SuperDraco thrusters, normally only used in the unlikely instance of a launch abort, to fire for a few seconds and slow Dragon’s speed for a safe splashdown.
A hover test using SuperDraco thrusters on a prototype Crew Dragon spacecraft in 2015.
SpaceX
“The way it works is, in the case where all the parachutes totally fail, this essentially fires the thrusters at the very end,” Gerstenmaier said. “That essentially gives the crew a chance to land safely, and essentially escape the vehicle. So it’s not used in any partial conditions. We can land with one chute out. We can land with other failures in the chute system. But this is only in the case where all four parachutes just do not operate.”
When SpaceX first designed the Crew Dragon spacecraft more than a decade ago, the company wanted to use the SuperDraco thrusters to enable the capsule to perform propulsive helicopter-like landings. Eventually, SpaceX and NASA agreed to change to a more conventional parachute-assisted splashdown.
The SuperDracos remained on the Crew Dragon spacecraft to push the capsule away from its Falcon 9 rocket during a catastrophic launch failure. The eight high-thrust engines burn hydrazine and nitrogen tetroxide propellants that combust when making contact with one another.
The backup option has been activated for some previous commercial Crew Dragon missions, but not for a NASA flight, according to Gerstenmaier. The capability “provides a tolerable landing for the crew,” he added. “So it’s a true deep, deep contingency. I think our philosophy is, rather than have a system that you don’t use, even though it’s not maybe fully certified, it gives the crew a chance to escape a really, really bad situation.”
Steve Stich, NASA’s commercial crew program manager, said the emergency propulsive landing capability will be enabled for the return of the Crew-8 mission, which has been at the space station since March. With the arrival of Hague and Gorbunov on Crew-9—and the extension of Wilmore and Williams’ mission—the Crew-8 mission is slated to depart the space station and splash down in early October.
This story was updated after confirmation of a successful launch.
Four years after the outbreak of the COVID-19 pandemic, doctors and researchers are still seeking ways to help patients with long COVID, the persistent and often debilitating symptoms that can continue long after a COVID-19 infection.
In adults, the most common long COVID symptoms include fatigue and brain fog, but for children the condition can look different. A study published last month suggests preteens are more likely to experience symptoms such as headaches, stomach pain, trouble sleeping, and attention difficulties. Even among children, effects seem to vary by age. “There seems to be some differences between age groups, with less signs of organ damage in younger children and more adultlike disease in adolescents,” says Petter Brodin, professor of pediatric immunology at Imperial College London.
While vast sums have been devoted to long COVID research—the US National Institutes of Health have spent more than a billion dollars on research projects and clinical trials—research into children with the condition has been predominantly limited to online surveys, calls with parents, and studies of electronic health records. This is in spite of a recent study suggesting that between 10 and 20 percent of children may have developed long COVID following an acute infection, and another report finding that while many have recovered, some still remain ill three years later.
Now, what’s believed to be the first clinical trial specifically aimed at children and young adults with long COVID is underway, recruiting subjects aged 7 to 21 on which to test a potential treatment. It builds on research that suggests long COVID in children may be linked to the gut.
In May 2021, Lael Yonker, a pediatric pulmonologist at Massachusetts General Hospital in Boston, published a study of multisystem inflammatory syndrome in children (MIS-C), which she says is now regarded as a more severe and acute version of long COVID. It showed that these children had elevated levels of a protein called zonulin, a sign of a so-called leaky gut. Higher levels of zonulin are associated with greater permeability in the intestine, which could enable SARS-CoV-2 viral particles to leak out of the intestines and into the bloodstream instead of being excreted out of the body. From there, they could trigger inflammation.
As Yonker began to see more and more children with long COVID, she theorized that many of the gastrointestinal and neurological symptoms they were experiencing might be linked. But her original study also pointed to a possible solution. When she gave the children with MIS-C a drug called larazotide, an existing treatment for people with issues relating to a leaky gut, the levels of viral particles in their blood decreased and their symptoms improved.
The small quantum processor (center) surrounded by cables that carry microwave signals to it, and the refrigeration hardware.
As we described earlier this year, operating a quantum computer will require a significant investment in classical computing resources, given the number of measurements and control operations that need to be executed and interpreted. That means that operating a quantum computer will also require a software stack to control and interpret the flow of information from the quantum side.
But software also gets involved well before anything gets executed. While it’s possible to execute algorithms on quantum hardware by defining the full set of commands sent to the hardware, most users are going to want to focus on algorithm development, rather than the details of controlling any single piece of quantum hardware. “If everyone’s got to get down and know what the noise is, [use] performance management tools, they’ve got to know how to compile a quantum circuit through hardware, you’ve got to become an expert in too much to be able to do the algorithm discovery,” said IBM’s Jay Gambetta. So, part of the software stack that companies are developing to control their quantum hardware includes software that converts abstract representations of quantum algorithms into the series of commands needed to execute them.
IBM’s version of this software is called Qiskit (although it was made open source and has since been adopted by other companies). Recently, IBM made a couple of announcements regarding Qiskit, both benchmarking it in comparison to other software stacks and opening it up to third-party modules. We’ll take a look at what software stacks do before getting into the details of what’s new.
What does the software stack do?
It’s tempting to view IBM’s Qiskit as the equivalent of a compiler. And at the most basic level, that’s a reasonable analogy, in that it takes algorithms defined by humans and converts them to things that can be executed by hardware. But there are significant differences in the details. A compiler for a classical computer produces code that the computer’s processor converts to internal instructions that are used to configure the processor hardware and execute operations.
Even when using what’s termed “machine language,” programmers don’t directly control the hardware; programmers have no control over where on the hardware things are executed (ie, which processor or execution unit within that processor), or even the order instructions are executed in.
Things are very different for quantum computers, at least at present. For starters, everything that happens on the processor is controlled by external hardware, which typically acts by generating a series of laser or microwave pulses. So, software like IBM’s Qiskit or Microsoft’s Q# acts by converting the code it’s given into commands that are sent to hardware that’s external to the processor.
These “compilers” must also keep track of exactly which part of the processor things are happening on. Quantum computers act by performing specific operations (called gates) on individual or pairs of qubits; to do that, you have to know exactly which qubit you’re addressing. And, for things like superconducting qubits, where there can be device-to-device variations, which hardware qubits you end up using can have a significant effect on the outcome of the calculations.
As a result, most packages like Qiskit provide the option of directly addressing the hardware. If a programmer chooses not to, however, the software can transform generic instructions into a precise series of actions that will execute whatever algorithm has been encoded. That involves the software stack making choices about which physical qubits to use, what gates and measurements to execute, and what order to execute them in.
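The qubit-mapping step described above can be sketched in miniature. The following Python snippet is a toy illustration, not Qiskit’s actual algorithm: it assumes hypothetical calibration data (the `physical_qubit_errors` dictionary) and assigns each abstract “logical” qubit in a small circuit to the physical qubit with the lowest measured error rate, which mirrors one of the choices a real transpiler makes.

```python
# Toy sketch of one transpiler decision: mapping logical qubits to
# physical qubits based on per-qubit error rates. Hypothetical data;
# a real transpiler also accounts for qubit connectivity, gate
# decomposition, and scheduling.

# Hypothetical calibration data: measured error rate per physical qubit.
physical_qubit_errors = {0: 0.012, 1: 0.003, 2: 0.020, 3: 0.005, 4: 0.009}

# A circuit expressed over abstract "logical" qubits.
logical_circuit = [
    ("h", [0]),        # Hadamard gate on logical qubit 0
    ("cx", [0, 1]),    # CNOT with control 0, target 1
    ("measure", [0]),
    ("measure", [1]),
]

def map_to_hardware(circuit, qubit_errors):
    """Assign logical qubits to the lowest-error physical qubits,
    then rewrite each gate in terms of physical qubit indices."""
    logical_qubits = sorted({q for _, qubits in circuit for q in qubits})
    # Physical qubits, best (lowest error rate) first.
    best_physical = sorted(qubit_errors, key=qubit_errors.get)
    layout = {lq: best_physical[i] for i, lq in enumerate(logical_qubits)}
    mapped = [(gate, [layout[q] for q in qubits]) for gate, qubits in circuit]
    return layout, mapped

layout, mapped = map_to_hardware(logical_circuit, physical_qubit_errors)
print(layout)  # logical-to-physical assignment, e.g. {0: 1, 1: 3}
print(mapped)  # the same circuit, addressed to physical qubits
```

Here the two logical qubits land on physical qubits 1 and 3, the two with the smallest error rates. On real hardware the mapping is further constrained by which qubit pairs can actually perform two-qubit gates together, which is why the stack’s choices can meaningfully affect a calculation’s outcome.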
The role of the software stack, however, is likely to expand considerably over the next few years. A number of companies are experimenting with hardware qubit designs that can flag when one type of common error occurs, and there has been progress with developing logical qubits that enable error correction. Ultimately, any company providing access to quantum computers will want to modify its software stack so that these features are enabled without requiring effort on the part of the people designing the algorithms.