

Tick-killing pill shows promising results in human trial

Ticked off —

Should it pan out, the pill would be a new weapon against Lyme disease.


If you have a dog or cat, chances are you’ve given your pet a flavored chewable tablet for tick prevention at some point. What if you could take a similar pill to protect yourself from getting Lyme disease?

Tarsus Pharmaceuticals is developing such a pill for humans—minus the tasty flavoring—that could provide protection against the tick-borne disease for several weeks at a time. In February, the Irvine, California–based biotech company announced results from a small, early-stage trial showing that the drug can kill ticks on people within 24 hours of a dose, with the effect lasting for up to 30 days.

“What we envision is something that would protect you before the tick would even bite you,” says Bobby Azamian, CEO of Tarsus.

Lyme disease is a fast-growing problem in the United States, where approximately 476,000 people are diagnosed and treated for it each year, according to the most recent data from the Centers for Disease Control and Prevention. That number is likely an overestimate, because many patients are treated after a tick bite even if an infection isn’t confirmed, but it underscores the burden of Lyme disease on the health care system—which researchers at the CDC and Yale University put at nearly $1 billion per year.

The disease is caused by the bacterium Borrelia burgdorferi, which is passed to humans through the bite of an infected tick. In most cases, a tick has to be attached for around 36 to 48 hours before the bacteria can be transmitted. Symptoms include fever, headache, fatigue, and a characteristic skin rash that looks like a bullseye.

Without a vaccine for Lyme disease on the market, current prevention includes using insect repellents such as DEET and permethrin and wearing closed shoes, long pants, and long sleeves when in a tick-infested area.

“We’ve seen increasing rates of tick-borne diseases over the years, despite being told to do tick checks, use DEET, and impregnate your clothes with permethrin,” says Paul Auwaerter, a professor of medicine at the Johns Hopkins University School of Medicine who studies Lyme disease.

A more effective treatment strategy would be welcome, Auwaerter says, especially because Lyme disease can sometimes cause serious health issues. Antibiotics are usually effective when taken early, although about 5 to 10 percent of patients can have lingering symptoms for weeks or months. If left untreated, the infection can spread to the joints and cause arthritis. It can also become established in the heart and nervous system, causing persistent fatigue, numbness, or weakness.

The experimental pill that Tarsus Pharmaceuticals is testing is a formulation of lotilaner, a drug that paralyzes and kills parasites by interfering with the way that signals are passed between their nerve cells. Lotilaner is already approved as a veterinary medicine under the brand name Credelio to control fleas and ticks in dogs and cats.

“Our animals have better options than we do for tick prevention,” says Linden Hu, a professor of immunology at Tufts Medical School who led the Tarsus trial. “There are quite a few drugs and vaccines available for dogs and cats, but there’s nothing for us.”

Tarsus first developed lotilaner for human use as an eye drop to treat blepharitis, or inflammation of the eyelid, which is caused by tiny mites. That drug, Xdemvy, was approved by the US Food and Drug Administration in July 2023. It stuns and kills mites present in the eyelid. Azamian and his team had the idea to test it against ticks in people. The oral version of the drug enters the bloodstream and is passed to a tick when it bites and starts sucking blood.

“A lot of drugs are tested in animals, but very few are commercialized for animal use and then go to human use,” Azamian says.

In a Phase II trial, 31 healthy adults took either a low or high dose of the Tarsus pill, or a placebo. Researchers then placed sterile ticks on participants’ arms and, 24 hours later, measured how many died. They also observed tick death 30 days after a single dose of the pill. At day one, 97 percent of ticks in the high-dose group and 92 percent in the low-dose group had died, while only 5 percent of ticks in the placebo group had. One month out, both doses of the pill killed around 90 percent of ticks. The company reported no serious adverse events from the pill, and none of the participants dropped out due to side effects.

“The takeaway is that it killed the ticks really quickly,” Hu says. “And the effect lasted for a long time.”

The fact that the drug targets ticks, rather than the bacteria that causes Lyme disease, means that it could protect against other tick-borne diseases that are spreading in the US, including babesiosis and anaplasmosis. Thanks to climate change and exploding deer populations, ticks are expanding their ranges—and carrying diseases with them.

Tarsus has not proven that its pill can actually prevent Lyme disease. That will require testing the drug in hundreds of people who are at high risk of contracting the disease. But Hu is cautiously optimistic: “This pill is potentially a pre-exposure prophylaxis that you don’t have to think about.”

Azamian imagines it as something people would take before going hiking or on a camping trip or just going outside in any tick-infested area.

“There is that subset of people that truly have persistent symptoms after Lyme disease that can really be devastating,” Auwaerter says, “so preventing that would be an amazing opportunity.”

This story originally appeared on wired.com.



Lawsuit opens research misconduct report that may get a Harvard prof fired

Harvard’s got a lawsuit on its hands.

Glowimages

Accusations of research misconduct often trigger extensive investigations, typically performed by the institution where the misconduct allegedly took place. These investigations are internal employment matters, and false accusations have the potential to needlessly wreck someone’s career. As a result, most of these investigations are kept completely confidential, even after their completion.

But all the details of a misconduct investigation performed by Harvard University became public this week through an unusual route. The professor accused of misconduct, Francesca Gino, had filed a multimillion-dollar lawsuit targeting both Harvard and the team of external researchers who leveled the accusations. Harvard submitted its investigator’s report as part of its attempt to have part of the suit dismissed, and the judge overseeing the case made it public.

We covered one of the studies at issue at the time of its publication. It has since been retracted, and we’ll be updating our original coverage accordingly.

Misconduct allegations lead to lawsuit

Gino, currently on administrative leave, had been faculty at Harvard Business School, where she did research on human behavior. One of her more prominent studies (the one we covered) suggested that signing a form before completing it caused people to fill in its contents more accurately than if they filled out the form first and then signed it.

Oddly, for a paper about honesty, it had a number of issues. Some of its original authors had attempted to go back and expand on the paper but found they were unable to replicate the results. That seems to have prompted a group of behavioral researchers who write at the blog Data Colada to look more carefully at the results that didn’t replicate, at which point they found indications that the data was fabricated. That got the paper retracted.

Gino was not implicated in the fabrication of the data. But the attention of the Data Colada team (Uri Simonsohn, Leif Nelson, and Joe Simmons) had been drawn to the paper. In other data from the paper, data that did come from her work, they found indications of entirely separate problems, which led them to examine additional papers from Gino and turn up evidence of potential research fraud in four of them.

Before posting their evidence on their blog, however, the Data Colada team provided it to Harvard, which launched its own investigation. Their posts came out after Harvard’s investigation concluded that Gino’s research had serious issues, and she was placed on administrative leave as the university looked into revoking her tenure. Harvard also alerted the journals that had published the three yet-to-be-retracted papers about the issues.

Things might have ended there, except that Gino filed a defamation lawsuit against Harvard and the Data Colada team, claiming they “worked together to destroy my career and reputation despite admitting they have no evidence proving their allegations.” As part of the $25 million suit, she also accused Harvard of mishandling its investigation and not following proper procedures.



DNA parasite now plays key role in making critical nerve cell protein

Domesticated viruses —

An RNA has been adopted to help the production of myelin, a key nerve protein.


Human brains (and the brains of other vertebrates) are able to process information faster because of myelin, a fatty substance that forms a protective sheath over the axons of our nerve cells and speeds up their impulses. How did our neurons evolve myelin sheaths? Part of the answer—which was unknown until now—almost sounds like science fiction.

Led by scientists from Altos Labs-Cambridge Institute of Science, a team of researchers has uncovered a bit of the gnarly past of how myelin ended up covering vertebrate neurons: a molecular parasite has been messing with our genes. Sequences derived from an ancient virus help regulate a gene that encodes a component of myelin, helping explain why vertebrates have an edge when it comes to their brains.

Prehistoric infection

Myelin is a fatty material produced by oligodendrocyte cells in the central nervous system and Schwann cells in the peripheral nervous system. Its insulating properties allow neurons to zap impulses to one another at faster speeds and over greater distances. Our brains can be complex in part because myelin enables longer, narrower axons, which means more nerve fibers can be packed together.

The un-myelinated brain cells of many invertebrates often need to rely on wider—and therefore fewer—axons for impulse conduction. Rapid impulse conduction makes quicker reactions possible, whether that means fleeing danger or capturing prey.

So, how do we make myelin? A key player in its production appears to be a type of molecular parasite called a retrotransposon.

Like other transposons, retrotransposons can move to new locations in the genome through an RNA intermediate. However, most retrotransposons in our genome have picked up too many mutations to move about anymore.

RNLTR12-int is a retrotransposon that is thought to have originally entered our ancestors’ genome as a virus. Rat genomes now have over 100 copies of the retrotransposon.

An RNA made by RNLTR12-int helps produce myelin by binding to SOX10, a transcription factor (a protein that regulates the activity of other genes). The RNA/protein combination binds to DNA near the gene for myelin basic protein, or MBP, a major component of myelin.

“MBP is essential for the membrane growth and compression of [central nervous system] myelin,” the researchers said in a study recently published in Cell.

Technical knockout

To find out whether RNLTR12-int really was behind the regulation of MBP and, therefore, myelin production, the research team had to knock its level down and see if myelination still happened. They first experimented on rat brains before moving on to zebrafish and frogs.

When they inhibited RNLTR12-int, the results were drastic. In the central nervous system, genetically edited rats produced 98 percent less MBP than those where the gene was left unedited. The absence of RNLTR12-int also caused the oligodendrocytes that produce myelin to develop much simpler structures than they would normally form. When RNLTR12-int was knocked out in the peripheral nervous system, it reduced myelin produced by Schwann cells.

The researchers used a SOX10 antibody to show that SOX10 bound to the RNLTR12-int transcript in vivo. This was an important result, since there are lots of non-coding RNAs made by cells, and it wasn’t clear whether any RNA would work or if it was specific to RNLTR12-int.

Do these results hold up in other jawed vertebrates? Knockout tests using CRISPR-Cas9 on retrotransposons related to RNLTR12-int in frogs and zebrafish produced similar results.

Myelination has enriched the vertebrate brain so it can work like never before. It’s also why the term “brain food” is nearly literal: healthy fats are important for our brains in part because myelin is largely made of fat. Think about that next time you’re pulling an all-nighter while reaching for a handful of nuts.

Cell, 2024. DOI: 10.1016/j.cell.2024.01.011



Deadly morel mushroom outbreak highlights big gaps in fungi knowledge

This fungi’s not fun, guys —

Prized morels are unpredictably and puzzlingly deadly, outbreak report shows.

Mature morel mushrooms in a greenhouse at an agriculture garden in Zhenbeibu Town of Xixia District of Yinchuan, northwest China’s Ningxia Hui Autonomous Region.

True morel mushrooms are widely considered a prized delicacy, often pricey and presumed safe to eat. But these spongy, earthy forest gems have a mysterious dark side—one that, on occasion, can turn deadly, highlighting just how little we know about morels and fungi generally.

On Thursday, Montana health officials published an outbreak analysis of poisonings linked to the honeycombed fungi in March and April of last year. The outbreak sickened 51 people who ate at the same restaurant, sending four to the emergency department. Three were hospitalized and two died. Though the health officials didn’t name the restaurant in their report, state and local health departments at the time identified it as Dave’s Sushi in Bozeman. The report is published in the Centers for Disease Control and Prevention’s Morbidity and Mortality Weekly Report.

The outbreak coincided with the sushi restaurant introducing a new item: a “special sushi roll” that contained salmon and morel mushrooms. The morels were a new menu ingredient for Dave’s. They were served two ways: On April 8, the morels were served partially cooked, with a hot, boiled sauce poured over the raw mushrooms and left to marinate for 75 minutes; and on April 17, they were served uncooked and cold-marinated.

The mystery poison worked fast. Symptoms began, on average, about an hour after eating at the restaurant. And it was brutal. “Vomiting and diarrhea were reportedly profuse,” the health officials wrote, “and hospitalized patients had clinical evidence of dehydration. The two patients who died had chronic underlying medical conditions that might have affected their ability to tolerate massive fluid loss.”

Of the 51 sickened, 46 were restaurant patrons and five were employees. Among them, 45 (88 percent) recalled eating morels. While that’s a high percentage for such an outbreak investigation, certainly enough to make the morels the prime suspect, the health officials went further. With support from the CDC, they set up a matched case-control study, having people complete a detailed questionnaire with demographic information, food items they ate at the restaurant, and symptoms.

Mysterious poison

Forty-one of the poisoned people filled out the questionnaire, as did 22 control patrons who ate at the restaurant but did not report subsequent illness. The analysis indicated that the odds of recalling eating the special sushi roll were nearly 16 times higher among the poisoned patrons than among the controls. The odds of reporting any morel consumption were nearly 11 times higher than controls.
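The figures above are odds ratios from a standard 2×2 case-control table: the odds of exposure among the sick divided by the odds of exposure among controls. As a rough illustration, here is that calculation in Python; the cell counts below are hypothetical, chosen only to land near the reported ~16x figure, and are not the study’s actual data.

```python
def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Odds ratio from a 2x2 case-control table:
    (odds of exposure among cases) / (odds of exposure among controls)."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# Hypothetical cell counts for illustration only (not the study's data):
# suppose 36 of 41 cases and 7 of 22 controls recalled eating the special roll.
print(round(odds_ratio(36, 5, 7, 15), 1))  # prints 15.4
```

An odds ratio well above 1, as here, means the exposure is far more common among the sick than among comparable diners who stayed healthy, which is what pointed investigators at the special roll.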

The detailed consumption data also allowed the health officials to model a dose response, which suggested that with each additional piece of the special roll a person recalled eating, their odds of sickness increased nearly threefold compared with people who reported eating none. Those who ate four or more pieces of the roll had odds nearly 22.5 times higher. A small analysis focusing on the five employees sickened, which was not included in the published study but was noted by the Food and Drug Administration, echoed the dose-response finding, indicating that sickness was linked with larger amounts of morel consumption.

When the officials broke down the analysis by people who ate at the restaurant on April 17, when the morels were served uncooked, and those who ate at the restaurant on April 8, when the mushrooms were slightly cooked, the cooking method seemed to matter. People who ate the uncooked rather than the slightly cooked mushrooms had much higher odds of sickness.

This all strongly points to the morels being responsible. At the time, the state and local health officials engaged the FDA, as well as the CDC, to help tackle the outbreak investigation. But the FDA reported that “samples of morel mushrooms collected from the restaurant were screened for pesticides, heavy metals, toxins, and pathogens. No significant findings were identified.” In addition, the state and local health officials noted that DNA sequencing identified the morels used by the restaurant as Morchella sextelata, a species of true morel. This rules out the possibility that the mushrooms were look-alike morels, called “false morels,” which are known to contain a toxin called gyromitrin.

The health officials and the FDA tracked down the distributor of the mushrooms, finding they were cultivated and imported fresh from China. Records indicated that 12 other locations in California also received batches of the mushrooms. Six of those facilities responded to inquiries from the California health department and the FDA, and all six reported no illnesses. They also all reported cooking the morels or at least thoroughly heating them.



Amid paralyzing ransomware attack, feds probe UnitedHealth’s HIPAA compliance

most significant and consequential incident —

UnitedHealth said it will cooperate with the probe as it works to restore services.


As health systems around the US are still grappling with an unprecedented ransomware attack on the country’s largest health care payment processor, the US Department of Health and Human Services is opening an investigation into whether that processor and its parent company, UnitedHealth Group, complied with federal rules to protect private patient data.

The attack targeted Change Healthcare, a unit of UnitedHealth Group (UHG) that provides financial services to tens of thousands of health care providers around the country, including doctors, dentists, hospitals, and pharmacies. According to an antitrust lawsuit brought against UHG by the Department of Justice in 2022, 50 percent of all medical claims in the US pass through Change Healthcare’s electronic data interchange clearinghouse. (The DOJ lost its case to prevent UHG’s acquisition of Change Healthcare and last year abandoned plans for an appeal.)

As Ars reported previously, the attack was disclosed on February 21 by UHG’s subsidiary, Optum, which now runs Change Healthcare. On February 29, UHG accused the notorious Russian-speaking ransomware gang known both as AlphV and BlackCat of being responsible. According to The Washington Post, the attack involved stealing patient data, encrypting company files, and demanding money to unlock them. The result is a paralysis of claims processing and payments, causing hospitals to run out of cash for payroll and services and preventing patients from getting care and prescriptions. Additionally, the attack is believed to have exposed the health data of millions of US patients.

Earlier this month, Rick Pollack, the president and CEO of the American Hospital Association, called the ransomware attack on Change Healthcare “the most significant and consequential incident of its kind against the US health care system in history.”

Now, three weeks into the attack, many health systems are still struggling. On Tuesday, members of the Biden administration met with UHG CEO Andrew Witty and other health industry leaders at the White House to demand they do more to stabilize the situation for health care providers and services and provide financial assistance. Some improvements may be in sight; on Wednesday, UHG posted an update saying that “all major pharmacy and payment systems are up and more than 99 percent of pre-incident claim volume is flowing.”

HIPAA compliance

Still, the data breach leaves big questions about the extent of the damage to patient privacy, and the adequacy of protections moving forward. In an additional development Wednesday, the health department’s Office for Civil Rights (OCR) announced that it is opening an investigation into UHG and Change Healthcare over the incident. It noted that such an investigation was warranted “given the unprecedented magnitude of this cyberattack, and in the best interest of patients and health care providers.”

In a “Dear Colleague” letter dated Wednesday, the OCR explained that the investigation “will focus on whether a breach of protected health information occurred and Change Healthcare’s and UHG’s compliance with the HIPAA Rules.” HIPAA is the Health Insurance Portability and Accountability Act, which establishes privacy and security requirements for protected health information, as well as breach notification requirements.

In a statement to the press, UHG said it would cooperate with the investigation. “Our immediate focus is to restore our systems, protect data and support those whose data may have been impacted,” the statement read. “We are working with law enforcement to investigate the extent of impacted data.”

The Post notes that the federal government does have a history of investigating and penalizing health care organizations for failing to implement adequate safeguards to prevent data breaches. For instance, health insurance provider Anthem paid a $16 million settlement in 2018 over a 2015 data breach that exposed the private data of almost 79 million people. The exposed data included names, Social Security numbers, medical identification numbers, addresses, dates of birth, email addresses, and employment information. The OCR investigation into the breach discovered that the attack began with spear phishing emails that at least one employee of an Anthem subsidiary fell for, opening the door to further intrusions that went undetected between December 2, 2014, and January 27, 2015.

“Unfortunately, Anthem failed to implement appropriate measures for detecting hackers who had gained access to their system to harvest passwords and steal people’s private information,” OCR Director Roger Severino said at the time. “We know that large health care entities are attractive targets for hackers, which is why they are expected to have strong password policies and to monitor and respond to security incidents in a timely fashion or risk enforcement by OCR.”



Blue cheese shows off new colors, but the taste largely remains the same

Am I blue? —

Future varieties could be yellow-green, reddish-brown-pink, or light blue.

Scientists at the University of Nottingham have discovered how to create different colors of blue cheese.

University of Nottingham

Gourmands are well aware of the many varieties of blue cheese, known for the blue-green veins that ripple through them. Different kinds of blue cheese have distinctive flavor profiles: they can be mild or strong, sweet or salty, for example. Soon we might be able to buy blue cheeses that belie the name and sport veins of different colors: perhaps yellow-green, reddish-brown-pink, or lighter/darker shades of blue, according to a recent paper published in the journal npj Science of Food.

“We’ve been interested in cheese fungi for over 10 years, and traditionally when you develop mould-ripened cheeses, you get blue cheeses such as Stilton, Roquefort, and Gorgonzola, which use fixed strains of fungi that are blue-green in color,” said co-author Paul Dyer of the University of Nottingham of this latest research. “We wanted to see if we could develop new strains with new flavors and appearances.”

Blue cheese has been around for a very long time. Legend has it that a young boy left his bread and ewe’s milk cheese in a nearby cave to pursue a lovely young lady he’d spotted in the distance. Months later, he came back to the cave and found his cheese had molded into Roquefort. It’s a fanciful tale, but scholars think the basic idea is sound: people used to store cheeses in caves because their temperature and moisture levels were especially hospitable to harmless molds. That notion was bolstered by a 2021 analysis of paleofeces, which found evidence that Iron Age salt miners in Hallstatt (Austria) between 800 and 400 BCE were already eating blue cheese and quaffing beer.

Color derivatives.

The manufacturing process for blue cheese is largely the same as for any cheese, with a few crucial additional steps. It requires cultivation of Penicillium roqueforti, a mold that thrives on exposure to oxygen. The P. roqueforti is added to the cheese, sometimes before curds form and sometimes mixed in with curds after they form. The cheese is then aged in a temperature-controlled environment. Lactic acid bacteria trigger the initial fermentation but eventually die off, and the P. roqueforti take over as secondary fermenters. Piercing the curds forms air tunnels in the cheese, and the mold grows along those surfaces to produce blue cheese’s signature veining.

Once scientists published the complete genome for P. roqueforti, it opened up opportunities for studying this blue cheese fungus, per Dyer et al. Different strains “can have different colony cultures and textures, with commercial strains being sold partly on the basis of color development,” they wrote. This coloration comes from pigments in the coatings of the spores that form as the colony grows. Dyer and his co-authors set out to determine the genetic basis of this pigment formation in the hopes of producing altered strains with different spore coat colors.

The team identified a specific biochemical pathway that begins with a white color and progresses through yellow-green, red-brown-pink, dark brown, and light blue before arriving at that iconic dark blue-green. They used targeted gene deletion to block pigment biosynthesis genes at various points in this pathway. This altered the spore color, providing a proof of principle without adversely affecting the production of flavor volatiles or the levels of secondary metabolites called mycotoxins. (The latter are present in low enough concentrations in blue cheese so as not to be a health risk for humans, and the team wanted to ensure those concentrations remained low.)


(left) Spectrum of color strains produced in Penicillium roqueforti. (right) Cross sections of cheeses made with the original (dark blue-green) or new color (red-brown, bright green, white albino) strains of the fungus.

University of Nottingham

However, food industry regulations prohibit gene-deletion fungal strains for commercial cheese production. So Dyer et al. used UV mutagenesis—essentially “inducing sexual reproduction in the fungus,” per Dyer—to produce non-GMO mutant strains of the fungi to create “blue” cheeses of different colors, without increasing mycotoxin levels or impacting the volatile compounds responsible for flavor.

“The interesting part was that once we went on to make some cheese, we then did some taste trials with volunteers from across the wider university, and we found that when people were trying the lighter colored strains they thought they tasted more mild,” said Dyer. “Whereas they thought the darker strain had a more intense flavor. Similarly, with the more reddish-brown and a light green one, people thought they had a fruity, tangy element to them—whereas, according to the lab instruments, they were very similar in flavor. This shows that people do perceive taste not only from what they taste but also by what they see.”

Dyer’s team is hoping to work with local cheese makers in Nottingham and Scotland, setting up a spinoff company in hopes of commercializing the mutant strains. And there could be other modifications on the horizon. “Producers could almost dial up their list of desirable characteristics—more or less color, faster or slower growth rate, acidity differences,” Donald Glover of the University of Queensland in Australia, who was not involved in the research, told New Scientist.

npj Science of Food, 2024. DOI: 10.1038/s41538-023-00244-9



Seeding steel frames brings destroyed coral reefs back to life


Coral reefs, some of the most stunningly beautiful marine ecosystems on Earth, are dying. Ninety percent of them will likely be gone by 2050 due to rising ocean temperatures and pollution. “But it’s not that when they are gone, they are gone forever. We can rebuild them,” said Dr. Timothy Lamont, a marine biologist working at Lancaster University.

Lamont’s team evaluated coral reef restoration efforts done through the MARS Coral Reef Restoration Program on the coast of Indonesia and found that planting corals on a network of sand-coated steel frames brought a completely dead reef back to life in just four years. It seems like we can fix something for once.

Growing up in rubble

The restored reef examined by Lamont’s team was damaged by blast fishing done 30–40 years ago. “People were using dynamite to blow up the reef. It kills all the fish, the fish float to the surface, and you can scoop them all up. Obviously, this is very damaging to the habitat and leaves behind loose rubble fields with lots of coral skeletons,” said Lamont.

Because this loose rubble is in constant motion, tumbling and rolling around, coral larvae don’t have enough time to grow before they get squashed. So the first step to bringing damaged reefs back to life was stabilizing the rubble. The people running the MARS program did this using Reef Stars, hexagonal steel structures coated with sand. “These structures are connected into networks and pinned to the seabed to reduce the movement of the rubble,” Lamont said.

Before the reef stars were placed on the seabed, though, the MARS team manually tied little corals around them. This was meant to speed up recovery compared to letting coral larvae settle on the steel structures naturally. Based on some key measures, it worked. But there are questions about whether those measures capture everything we need to know.

Artificial coral reefs

The metric Lamont’s team used to measure the success of the MARS program restoration was a carbonate budget, which describes the overall growth of the whole reef structure. According to Lamont, a healthy coral reef has a positive carbonate budget and produces roughly 20 kilograms of limestone per square meter per year. This is exactly what his team measured in restored sites on the Indonesian reef. But while the recovered reef had the same carbonate budget as a healthy one, the organisms contributing to this budget were different.
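That 20-kilogram figure makes the carbonate budget easy to reason about with back-of-envelope arithmetic. As a sketch, here is the calculation for a hypothetical restored patch (the production rate comes from the article; the 500 m² area is a made-up example value):

```python
# Illustrative arithmetic only: the production rate is the article's
# healthy-reef figure; the area is an assumed example, not real data.
CARBONATE_RATE_KG_PER_M2_YR = 20  # limestone production of a healthy reef

def annual_limestone_kg(area_m2, rate=CARBONATE_RATE_KG_PER_M2_YR):
    """Total limestone (kg) a reef of the given area produces per year."""
    return area_m2 * rate

# A hypothetical 500 m^2 restored patch:
print(annual_limestone_kg(500))  # 10000 kg, i.e. 10 metric tons per year
```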

An untouched natural reef is a diverse mixture including massive, encrusting, and plating coral species like Isopora or Porites, which contribute roughly a third of the carbonate budget. Restored reefs were almost completely dominated by smaller, branching corals like Stylophora, Acropora, and Pocillopora, which are all fast-growing species initially tied onto reef stars. The question was whether the MARS program achieved its astounding four-year reef recovery time by sacrificing biodiversity and specifically choosing corals that grow faster.

Seeding steel frames brings destroyed coral reefs back to life Read More »

some-states-are-now-trying-to-ban-lab-grown-meat

Some states are now trying to ban lab-grown meat

A franken-burger and a side of fries —

Spurious “war on ranching” cited as reason for legislation.

tanks for growing cell-cultivated chicken

Cell-cultivated chicken is made in the pictured tanks at the Eat Just office on July 27, 2023, in Alameda, Calif.

Justin Sullivan/Getty Images

Months in jail and thousands of dollars in fines and legal fees—those are the consequences Alabamians and Arizonans could soon face for selling cell-cultured meat products that could cut into the profits of ranchers, farmers, and meatpackers in each state.

State legislators from Florida to Arizona are seeking to ban meat grown from animal cells in labs, citing a “war on our ranching” and a need to protect the agriculture industry from efforts to reduce the consumption of animal protein and, with it, the high volume of climate-warming methane the sector emits.

Agriculture accounts for about 11 percent of the country’s greenhouse gas emissions, according to federal data, with livestock such as cattle making up a quarter of those emissions, predominantly from their burps, which release methane—a potent greenhouse gas that’s roughly 80 times more effective at warming the atmosphere than carbon dioxide over 20 years. Globally, agriculture accounts for about 37 percent of methane emissions.
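The 20-year comparison above translates directly into a CO2-equivalent calculation. A minimal sketch, assuming the article's rough factor of 80 (official global warming potential values vary by report and time horizon, so this is illustrative, not an official conversion factor):

```python
# Rough CO2-equivalent conversion using the article's 20-year figure
# (methane ~80x CO2 over 20 years). This is a sketch, not an official
# emissions-accounting factor.
GWP20_METHANE = 80

def co2_equivalent_tons(methane_tons, gwp=GWP20_METHANE):
    """Tons of CO2 with the same 20-year warming effect as the methane."""
    return methane_tons * gwp

print(co2_equivalent_tons(1.0))  # 80.0 tons CO2e per ton of methane
```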

For years, climate activists have been calling for more scrutiny and regulation of emissions from the agricultural sector and for nations to reduce their consumption of meat and dairy products due to their climate impacts. Last year, over 150 countries pledged to voluntarily cut emissions from food and agriculture at the United Nations’ annual climate summit.

But the industry has avoided increased regulation and pushed back against efforts to decrease the consumption of meat, with help from local and state governments across the US.

Bills in Alabama, Arizona, Florida, and Tennessee are just the latest legislation passed in statehouses across the US that have targeted cell-cultured meat, which is produced by taking a sample of an animal’s muscle cells and growing them into edible products in a lab. Sixteen states—Alabama, Arkansas, Georgia, Kansas, Kentucky, Louisiana, Maine, Mississippi, Missouri, Montana, North Dakota, Oklahoma, South Carolina, South Dakota, Texas, and Wyoming—have passed laws addressing the use of the word “meat” in such products’ packaging, according to the National Agricultural Law Center at the University of Arkansas, with some prohibiting cell-cultured, plant-based, or insect-based food products from being labeled as meat.

“Cell-cultured meat products are so new that there’s not really a framework for how state and federal labeling will work together,” said Rusty Rumley, a senior staff attorney with the National Agricultural Law Center. As a result, there are no standardized requirements for how to label the products, though legislation has been proposed that could change that.

At the federal level, Rep. Mark Alford (R-Mo.) introduced the Fair and Accurate Ingredient Representation on Labels Act of 2024, which would authorize the United States Department of Agriculture to regulate imitation meat products and restrict their sale if they are not properly labeled, and US Sens. Jon Tester (D-Mont.) and Mike Rounds (R-S.D.) introduced a bill to ban schools from serving cell-cultured meat.

But while plant-based meat substitutes are widespread, cell-cultivated meats are not widely available, with none currently being sold in stores. Just last summer, federal agencies gave their first-ever approvals to two companies making cell-cultivated poultry products, which are appearing on restaurant menus. The meat substitutes have garnered the support of some significant investors, including billionaire Bill Gates, who has been the subject of attacks from supporters of some of the state legislation proposed.

“Let me start off by explaining why I drafted this bill,” said Rep. David Marshall, an Arizona Republican who proposed legislation to ban cell-cultured meat from being sold or produced in the state, during a hearing on the bill. “It’s because of organizations like the FDA and the World Economic Forum, also Bill Gates and others, who have openly declared war on our ranching.”

In Alabama, fear of “franken-meat” competition spurs legislation

In Alabama, an effort to ban lab-grown meat is winding its way through the State House in Montgomery.

There, state senators have already passed a bill that would make it a misdemeanor, punishable by up to three months in jail and a $500 fine, to sell, manufacture, or distribute what the proposed legislation labels “cultivated food products.” An earlier version of the bill called lab-grown protein “meat,” but it was quickly revised by lawmakers. The bill passed out of committee and through the Senate without opposition from any of its members.

Now, the bill is headed toward a vote in the Alabama House of Representatives, where the body’s health committee recently held a public hearing on the issue. Rep. Danny Crawford, who is carrying the bill in the body, told fellow lawmakers during that hearing that he’s concerned about two issues: health risks and competition for Alabama farmers.

“Lab-grown meat or whatever you want to call it—we’re not sure of all of the long-term problems with that,” he said. “And it does compete with our farming industry.”

Crawford said that legislators had heard from NASA, which expressed concern about the bill’s impact on programs to develop alternative proteins for astronauts. An amendment to the bill will address that problem, Crawford said, allowing an exemption for research purposes.

Some states are now trying to ban lab-grown meat Read More »

daily-telescope:-gigantic-new-stars-stir-up-a-nebula

Daily Telescope: Gigantic new stars stir up a nebula

It’s full of red —

Astronomers know of no other region so packed with large stars as this nebula.

Behold, the star-forming region of NGC 604.

NASA, ESA, CSA, STScI

Welcome to the Daily Telescope. There is a little too much darkness in this world and not enough light, a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’re going to take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

Good morning. It’s March 12, and today’s photo comes from the James Webb Space Telescope.

Astronomers have long been fascinated by a nebula, NGC 604, in the relatively nearby Triangulum Galaxy. That’s because this nebula contains about 200 of the hottest and largest types of stars, most of which are in the early stages of their lives. Some of these stars are more than 100 times as massive as the Sun. Astronomers know of no other region in the Universe so densely packed with large stars as this nebula.

In this image, captured by the Near-Infrared Camera on the Webb telescope, there are brilliant reds and oranges. Here’s the explanation from astronomers for these colors:

The most noticeable features are tendrils and clumps of emission that appear bright red, extending out from areas that look like clearings, or large bubbles in the nebula. Stellar winds from the brightest and hottest young stars have carved out these cavities, while ultraviolet radiation ionizes the surrounding gas. This ionized hydrogen appears as a white and blue ghostly glow. The bright orange streaks in the Webb near-infrared image signify the presence of carbon-based molecules known as polycyclic aromatic hydrocarbons.

The nebula is only about 3.5 million years old.

Source: NASA, ESA, CSA, STScI

Do you want to submit a photo for the Daily Telescope? Reach out and say hello.

Daily Telescope: Gigantic new stars stir up a nebula Read More »

study:-conflicting-values-for-hubble-constant-not-due-to-measurement-error

Study: Conflicting values for Hubble Constant not due to measurement error

A long-standing tension —

Something else is influencing the expansion rate of the Universe.

This image of NGC 5468, a galaxy located about 130 million light-years from Earth, combines data from the Hubble and James Webb space telescopes.

NASA/ESA/CSA/STScI/A. Riess (JHU)

Astronomers have made new measurements of the Hubble Constant, a measure of how quickly the Universe is expanding, by combining data from the Hubble Space Telescope and the James Webb Space Telescope. Their results confirmed the accuracy of Hubble’s earlier measurement of the Constant’s value, according to their recent paper published in The Astrophysical Journal Letters, with implications for a long-standing discrepancy in values obtained by different observational methods known as the “Hubble tension.”

There was a time when scientists believed the Universe was static, but that changed with Albert Einstein’s general theory of relativity. Alexander Friedmann published a set of equations in 1922 showing that the Universe might actually be expanding, with Georges Lemaître later making an independent derivation to arrive at that same conclusion. Edwin Hubble confirmed this expansion with observational data in 1929. Prior to this, Einstein had been trying to modify general relativity by adding a cosmological constant in order to get a static universe from his theory; after Hubble’s discovery, legend has it, he referred to that effort as his biggest blunder.

As previously reported, the Hubble Constant is a measure of the Universe’s expansion expressed in units of kilometers per second per megaparsec. So, each second, every megaparsec of the Universe expands by a certain number of kilometers. Another way to think of this is in terms of a relatively stationary object a megaparsec away: Each second, it gets a number of kilometers more distant.

How many kilometers? That’s the problem here. There are basically three methods scientists use to measure the Hubble Constant: looking at nearby objects to see how fast they are moving, gravitational waves produced by colliding black holes or neutron stars, and measuring tiny deviations in the afterglow of the Big Bang known as the Cosmic Microwave Background (CMB). However, the various methods have come up with different values. For instance, tracking distant supernovae produced a value of 73 km/s/Mpc, while measurements of the CMB using the Planck satellite produced a value of 67 km/s/Mpc.
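Those units make the tension concrete: recession velocity is just v = H0 × d. A quick sketch using the two published values (the 100-megaparsec distance is an arbitrary example, not a specific object from the studies):

```python
# Recession velocity v = H0 * d for the two H0 values in the article.
# Pure unit arithmetic on the published numbers; no new data here.
H0_SUPERNOVAE = 73  # km/s per megaparsec
H0_CMB = 67         # km/s per megaparsec

def recession_velocity_km_s(distance_mpc, h0):
    """Speed (km/s) at which an object distance_mpc away recedes."""
    return h0 * distance_mpc

d = 100  # an example galaxy 100 megaparsecs away
v_sn = recession_velocity_km_s(d, H0_SUPERNOVAE)  # 7300 km/s
v_cmb = recession_velocity_km_s(d, H0_CMB)        # 6700 km/s
tension = (H0_SUPERNOVAE - H0_CMB) / H0_CMB * 100
print(f"{v_sn} vs {v_cmb} km/s; discrepancy ~{tension:.0f}%")
```

The roughly 9 percent gap in predicted velocities is the "Hubble tension" discussed throughout the article.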

Just last year, researchers made a third independent measure of the Universe’s expansion by tracking the behavior of a gravitationally lensed supernova, where the distortion in space-time caused by a massive object acts as a lens to magnify an object in the background. The best fits of those models all ended up slightly below the value of the Hubble Constant derived from the CMB, with the difference being within the statistical error. Values closer to those derived from measurements of other supernovae were a considerably worse fit for the data. The method is new, with considerable uncertainties, but it did provide an independent means of getting at the Hubble Constant.

Comparison of Hubble and Webb views of a Cepheid variable star.

NASA/ESA/CSA/STScI/A. Riess (JHU)

“We’ve measured it using information in the cosmic microwave background and gotten one value,” Ars Science Editor John Timmer wrote. “And we’ve measured it using the apparent distance to objects in the present-day Universe and gotten a value that differs by about 10 percent. As far as anyone can tell, there’s nothing wrong with either measurement, and there’s no obvious way to get them to agree.” One hypothesis is that the early Universe briefly experienced some kind of “kick” from repulsive gravity (akin to the notion of dark energy) that then mysteriously turned off and vanished. But it remains a speculative idea, albeit a potentially exciting one for physicists.

This latest measurement builds on last year’s confirmation based on Webb data that Hubble’s measurements of the expansion rate were accurate, at least for the first few “rungs” of the “cosmic distance ladder.” But there was still the possibility of as-yet-undetected errors that might increase the deeper (and hence further back in time) one looked into the Universe, particularly for brightness measurements of more distant stars.

So a new team made additional observations of Cepheid variable stars—a total of 1,000 in five host galaxies as far out as 130 million light-years—and correlated them with the Hubble data. The Webb telescope is able to see past the interstellar dust that has made Hubble’s own images of those stars more blurry and overlapping, so astronomers could more easily distinguish between individual stars.

The results further confirmed the accuracy of the Hubble data. “We’ve now spanned the whole range of what Hubble observed, and we can rule out a measurement error as the cause of the Hubble Tension with very high confidence,” said co-author and team leader Adam Riess, a physicist at Johns Hopkins University. “Combining Webb and Hubble gives us the best of both worlds. We find that the Hubble measurements remain reliable as we climb farther along the cosmic distance ladder. With measurement errors negated, what remains is the real and exciting possibility that we have misunderstood the Universe.”

The Astrophysical Journal Letters, 2024. DOI: 10.3847/2041-8213/ad1ddd  (About DOIs).

Study: Conflicting values for Hubble Constant not due to measurement error Read More »

after-coming-back-from-the-dead,-the-world’s-largest-aircraft-just-flew-a-real-payload

After coming back from the dead, the world’s largest aircraft just flew a real payload

Roc-n-roll —

Falling just short of hypersonic velocity.

The world's largest aircraft takes off with the Talon A vehicle on Saturday.

Stratolaunch/Matt Hartman

Built and flown by Stratolaunch, the massive Roc aircraft took off from Mojave Air and Space Port in California on Saturday. The airplane flew out over the Pacific Ocean, where it deployed the Talon-A vehicle, which looks something like a mini space shuttle.

This marked the first time this gargantuan airplane released an honest-to-goodness payload: the first Talon-A vehicle, TA-1, which is intended to fly at hypersonic speed. During the flight, TA-1 didn’t quite reach hypersonic velocity, which begins at Mach 5, five times the speed of sound.
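For a rough sense of scale, Mach number converts to speed via the local speed of sound, which varies with temperature and altitude, so any single conversion is only a ballpark. A sketch assuming sea-level conditions:

```python
# Mach number to ground-style speed, assuming a fixed speed of sound.
# The true speed of sound varies with air temperature (~343 m/s at sea
# level, closer to ~295 m/s in the stratosphere), so this is a ballpark.
def mach_to_km_h(mach, speed_of_sound_m_s=343.0):
    """Convert a Mach number to km/h at the assumed speed of sound."""
    return mach * speed_of_sound_m_s * 3.6

print(round(mach_to_km_h(5)))  # 6174 km/h at sea-level conditions
```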

“While I can’t share the specific altitude and speed TA-1 reached due to proprietary agreements with our customers, we are pleased to share that in addition to meeting all primary and customer objectives of the flight, we reached high supersonic speeds approaching Mach 5 and collected a great amount of data at an incredible value to our customers,” said Zachary Krevor, chief executive of Stratolaunch, in a statement.

In essence, the TA-1 vehicle is a pathfinder for subsequent versions of the vehicle that will be both reusable and capable of reaching hypersonic speeds. The flight of the company’s next vehicle, TA-2, could come later this year, Krevor said.

A long road

It has been a long, strange road for Stratolaunch to reach this moment. The company was founded in 2011 to build a super-sized carrier aircraft from which rockets would be launched mid-air. It was bankrolled by Microsoft cofounder and airplane enthusiast Paul Allen, who put at least hundreds of millions of dollars into the private project.

As the design of the vehicle evolved, its wingspan grew to 117 meters, nearly double the wingspan of a Boeing 747. It far exceeded the wingspan of the Spruce Goose, built by Howard Hughes in the 1940s, which had a wingspan of 97.5 meters. The Roc aircraft was so large that it seemed impractical to fly on a regular basis.

At the same time, the company was struggling to identify a rocket that could be deployed from the aircraft. At various times, Stratolaunch worked with SpaceX and Orbital ATK to develop a launch vehicle. But both of those partnerships fell through, and eventually, the company said it would develop its own line of rockets.

Allen would never see his large plane fly, dying of septic shock in October 2018 due to his non-Hodgkin lymphoma. Roc did finally take flight for the first time in April 2019, but it seemed like a Pyrrhic victory. Following the death of Allen, for whom Stratolaunch was a passion project, the company’s financial future was in doubt. Later in 2019, Allen’s family put the company’s assets up for sale and said it would cease to exist.

However, Stratolaunch did not die. Rather, the aircraft was acquired by the private equity firm Cerberus, and in 2020, the revitalized Stratolaunch changed course. Instead of orbital rockets, it would now launch hypersonic vehicles to test the technology—a priority for the US military. China, Russia, and the United States are all racing to develop hypersonic missiles, as well as new countermeasure technology as high-speed missiles threaten to penetrate most existing defenses.

Featuring a new engine

This weekend’s flight also marked an important moment for another US aerospace company, Ursa Major Technologies. The TA-1 vehicle was powered by the Hadley rocket engine designed and built by Ursa Major, which specializes in the development of rocket propulsion engines.

Hadley is a 5,000-lb-thrust, oxygen-rich staged-combustion rocket engine that burns liquid oxygen and kerosene, designed for small vehicles. Its known customers include Stratolaunch and Phantom Space, a vertical-launch company that is developing a small orbital rocket.

Founded in 2015, Ursa Major seeks to provide off-the-shelf propulsion solutions to launch customers. While Ursa Major started small, the company is already well into the development of its much larger Ripley engine. With 50,000 pounds of thrust, Ripley is aimed at the medium-launch market. The company completed a hot-fire test campaign of Ripley last year. For Ursa Major, it must feel pretty good to finally see an engine in flight.

After coming back from the dead, the world’s largest aircraft just flew a real payload Read More »

shields-up:-new-ideas-might-make-active-shielding-viable

Shields up: New ideas might make active shielding viable

Aurich Lawson | Getty Images | NASA

On October 19, 1989, at 12:29 UT, a monstrous X13-class solar flare triggered a geomagnetic storm so strong that auroras lit up the skies in Japan, America, Australia, and even Germany the following day. Had you been flying around the Moon at that time, you would have absorbed well over 6 sieverts of radiation—a dose that would most likely kill you within a month or so.

This is why the Orion spacecraft that is supposed to take humans on a Moon fly-by mission this year has a heavily shielded storm shelter for the crew. But shelters like that aren’t sufficient for a flight to Mars—Orion’s shield is designed for a 30-day mission.

To obtain protection comparable to what we enjoy on Earth would require hundreds of tons of material, and that’s simply not possible in orbit. The primary alternative—using active shields that deflect charged particles just like the Earth’s magnetic field does—was first proposed in the 1960s. Today, we’re finally close to making it work.

Deep-space radiation

Space radiation comes in two different flavors. Solar events like flares or coronal mass ejections can cause very high fluxes of charged particles (mostly protons). They’re nasty when you have no shelter but are relatively easy to shield against, since solar protons are mostly low-energy. The bulk of the flux in solar particle events falls between 30 and 100 mega-electronvolts (MeV) and can be stopped by Orion-like shelters.

Then there are galactic cosmic rays: particles coming from outside the Solar System, set in motion by faraway supernovas or neutron stars. These are relatively rare but arrive all the time from all directions. They also have high energies, starting at 200 MeV and reaching several GeV, which makes them extremely penetrating; even thick masses don’t provide much shielding against them. Worse, when high-energy cosmic-ray particles hit thin shields, they produce showers of lower-energy secondary particles, so you’d be better off with no shield at all.

The particles with energies between 70 MeV and 500 MeV are responsible for 95 percent of the radiation dose astronauts receive in space. On short flights, solar storms are the main concern because they can be quite violent and do lots of damage very quickly. On longer flights, though, GCRs become more of an issue: their dose accumulates over time, and they can go through pretty much everything we put in their way.

What keeps us safe at home

The reason nearly none of this radiation can reach us is that Earth has a natural, multi-stage shielding system. It begins with its magnetic field, which deflects most of the incoming particles toward the poles. A charged particle in a magnetic field follows a curve—the stronger the field, the tighter the curve. Earth’s magnetic field is very weak and barely bends incoming particles, but it is huge, extending thousands of kilometers into space.
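The curve the text describes is the particle's gyroradius, r = p/(qB): the radius shrinks as the field strengthens. A sketch for a 1 GeV proton, an energy within the galactic cosmic ray range given above (the field strengths are rough assumed values, not numbers from the article):

```python
import math

# Relativistic gyroradius r = p/(qB) for a proton, showing why Earth's
# weak-but-huge field works while a compact shield needs a strong one.
# Field strengths below are rough assumed values for illustration.
PROTON_REST_MEV = 938.272
C = 299792458.0  # speed of light, m/s

def gyroradius_m(kinetic_mev, b_tesla):
    """Radius of curvature (m) of a proton in a uniform magnetic field."""
    total = kinetic_mev + PROTON_REST_MEV               # total energy, MeV
    pc_mev = math.sqrt(total**2 - PROTON_REST_MEV**2)   # momentum * c, MeV
    # r = p/(qB); with pc expressed in eV, the charge cancels: r = pc/(c*B)
    return (pc_mev * 1e6) / (C * b_tesla)

# A 1 GeV proton in Earth's ~30 microtesla field vs a 1 T shield coil:
print(gyroradius_m(1000, 30e-6))  # ~190 km: Earth bends it over huge scales
print(gyroradius_m(1000, 1.0))    # ~5.7 m: a strong local field bends it fast
```

The contrast is the whole design problem: Earth's field deflects particles only because it extends thousands of kilometers, while a spacecraft-sized shield must be tens of thousands of times stronger to curve the same particle within a few meters.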

Anything that makes it through the magnetic field runs into the atmosphere, which, when it comes to shielding, is the equivalent of an aluminum wall that’s 3 meters thick. Finally, there is the planet itself, which essentially cuts the radiation in half since you always have 6.5 billion trillion tons of rock shielding you from the bottom.

To put that in perspective, the Apollo crew module had on average 5 grams of mass per square centimeter standing between the crew and radiation. A typical ISS module has twice that, about 10 g/cm2. The Orion shelter has 35–45 g/cm2, depending on where you sit exactly, and it weighs 36 tons. On Earth, the atmosphere alone gives you 810 g/cm2—roughly 20 times more than our best shielded spaceships.
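Those areal densities translate into shield mass by simple multiplication: mass equals areal density times surface area. A sketch with an assumed hull area (an example value, not a real spacecraft spec), showing why Earth-equivalent passive shielding runs to hundreds of tons:

```python
# Back-of-envelope passive-shield mass: areal density x hull area.
# Areal densities come from the article; the hull area is an assumed
# example value, not a real spacecraft specification.
def shield_mass_tons(areal_density_g_cm2, area_m2):
    """Mass (metric tons) of shielding covering the given hull area."""
    kg_per_m2 = areal_density_g_cm2 * 10  # 1 g/cm^2 == 10 kg/m^2
    return kg_per_m2 * area_m2 / 1000     # kg -> metric tons

AREA = 75  # m^2, a small-capsule-sized hull (assumption)
print(shield_mass_tons(10, AREA))   # ISS-module-like: 7.5 tons
print(shield_mass_tons(810, AREA))  # Earth-atmosphere equivalent: 607.5 tons
```

Even this modest assumed hull pushes Earth-equivalent shielding past 600 tons, consistent with the "hundreds of tons" the article says is impractical to launch.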

The two options are to add more mass—which gets expensive quickly—or to shorten the length of the mission, which isn’t always possible. So solving radiation with passive mass won’t cut it for longer missions, even using the best shielding materials like polyethylene or water. This is why making a miniaturized, portable version of the Earth’s magnetic field was on the table from the first days of space exploration. Unfortunately, we discovered it was far easier said than done.

Shields up: New ideas might make active shielding viable Read More »