

Texas suit alleging anti-coal “cartel” of top Wall Street firms could reshape ESG


It’s a closely watched test of whether corporate alliances on climate efforts violate antitrust laws.

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.

Since 2022, Republican lawmakers in Congress and state attorneys general have sent letters to major banks, pension funds, asset managers, accounting firms, companies, nonprofits, and business alliances, putting them on notice for potential antitrust violations and seeking information as part of the Republican pushback against “environmental, social and governance” efforts such as corporate climate commitments.

“This caused a lot of turmoil and stress obviously across the whole ecosystem,” said Denise Hearn, a senior fellow at the Columbia Center on Sustainable Investment. “But everyone wondered, ‘OK, when are they actually going to drop a lawsuit?’”

That lawsuit came in November, filed by Texas Attorney General Ken Paxton and 10 other Republican AGs, accusing three of the biggest asset managers on Wall Street—BlackRock, Vanguard, and State Street—of running “an investment cartel” to depress the output of coal, boosting their revenues while pushing up energy costs for Americans. The Trump administration’s Department of Justice and Federal Trade Commission filed a supporting brief in May.

The overall pressure campaign aimed at what’s known as “ESG” is having an impact.

“Over the past several months, through this [lawsuit] and other things, letters from elected officials, state and federal, there has been a chilling effect of what investors are saying,” said Steven Maze Rothstein, chief program officer of Ceres, a nonprofit that advocates for more sustainable business practices and was among the earliest letter recipients. Still, “investors understand that Mother Nature doesn’t know who’s elected governor, attorney general, president.”

Earlier this month, a US District Court judge in Tyler, Texas, declined to dismiss the lawsuit against the three asset managers, though he did dismiss three of the 21 counts. The judge was not ruling on the merits of the case, only finding that there was enough evidence for it to go to trial.

BlackRock said in a statement: “This case is not supported by the facts, and we will demonstrate that.” Vanguard said it will “vigorously defend against plaintiffs’ claims.” State Street called the lawsuit “baseless and without merit.”

The Texas attorney general’s office did not respond to requests for comment.

The three asset managers built substantial stakes in major US coal producers, the suit alleges, and “announced their common commitment” to cut US coal output by joining voluntary alliances to collaborate on climate issues, including the Net Zero Asset Managers Initiative and, in the case of two of the firms, the Climate Action 100+. (All of them later pulled out of the alliances.)

The lawsuit alleges that the coal companies succumbed to the defendants’ collective influence, mining less coal and disclosing more climate-related information. The suit claims that this resulted in “cartel-level revenues and profits” for the asset managers.

“You could say, ‘Well, if the coal companies were all colluding together to restrict output, then shouldn’t they also be violating antitrust?’” Hearn asked. But the attorneys general “are trying to say that it was at the behest of these concentrated index funds and the concentrated ownership.”

Index funds, which are designed to mirror the returns of specific market indices, are the most common mode of passive investment—when investors park their money somewhere for long-term returns.

The case is being watched closely, not only by climate alliances and sustainability nonprofits, but by the financial sector at large.

If the three asset managers ultimately win, it would turn down the heat on other climate alliances and vindicate those who pressured financial players to line up their business practices with the Paris agreement goals as well as national and local climate targets. The logic of those efforts: Companies in the financial sector have a big impact on climate change, for good or ill—and climate change has a big impact on those same companies.

If the red states instead win on all counts, that “could essentially totally reconstitute the industry as we understand it,” said Hearn, who has co-authored a paper on the lawsuit. At stake is how the US does passive investing.

The pro-free-market editorial board of The Wall Street Journal in June called the Texas-led lawsuit “misconceived,” its logic “strained” and its theories “bizarre.”

The case breaks ground on two fronts. It challenges collaboration between financial players on climate action. It also makes novel claims around “common ownership,” where a shareholder—in this case, an asset manager—holds stakes in competing firms within the same sector.

“Regardless of how the chips fall in the case, those two things will absolutely be precedent-setting,” Hearn said.

Though this lawsuit is the first legal test of the theory that business climate alliances are anti-competitive, the question was taken up in a study by Harvard Business School economists released in May. That study, which empirically examines 11 major climate alliances and 424 listed financial institutions over 10 years, turned up no evidence of traditional antitrust violations. The study was broad, however, and did not look at particular allegations against specific firms.

“To the extent that there are valid legal arguments that can be made, they have to be tested,” said study co-author Peter Tufano, a Harvard Business School professor, noting that his research casts doubt on many of the allegations made by critics of these alliances.

Financial firms that joined climate alliances were more likely to adopt emissions targets and climate-aligned management practices, cut their own emissions and engage in pro-climate lobbying, the study found.

“The range of [legal] arguments that are made, and the passion with which they’re being advanced, suggests that these alliances must be doing something meaningful,” said Tufano, who was previously the dean of the Saïd Business School at the University of Oxford.

Meanwhile, most of the world is moving the other way.

According to a tally by CarbonCloud, a carbon emissions accounting platform that serves the food industry, at least 35 countries that make up more than half of the world’s gross domestic product now mandate climate-related disclosures of some kind.

In the US, California, which on its own would be the world’s fourth-largest economy, will begin requiring big businesses to measure and report their direct and indirect emissions next year.

Ceres’ Rothstein notes that good data about companies is necessary for informed investment decisions. “Throughout the world,” he said, “there’s greater recognition and, to be honest, less debate about the importance of climate information.” Ceres is one of the founders of Climate Action 100+, which now counts more than 600 investor members around the world, including in Europe, Asia, and Australia.

For companies that operate globally, the American political landscape is in sharp contrast with other major economies, Tufano said, creating “this whipsawed environment where if you get on a plane, a few hours later, you’re in a jurisdiction that’s saying exactly the opposite thing.”

But even as companies and financial institutions publicly retreat from their climate commitments amid US political pressure, in a phenomenon called “greenhushing,” their decisions remain driven by the bottom line. “Banks are going to do what they’re going to do, and they’re going to lend to the most profitable or to the most growth-oriented industries,” Hearn said, “and right now, that’s not the fossil fuel industry.”


Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave


Longtime acolytes are sidelined as CEO directs biggest leadership reorganization in two decades.

Meta CEO Mark Zuckerberg during the Meta Connect event in Menlo Park, California on September 25, 2024.  Credit: Getty Images | Bloomberg

Within days of joining Meta, Shengjia Zhao, co-creator of OpenAI’s ChatGPT, had threatened to quit and return to his former employer, in a blow to Mark Zuckerberg’s multibillion-dollar push to build “personal superintelligence.”

Zhao went as far as to sign employment paperwork to go back to OpenAI. Shortly afterwards, according to four people familiar with the matter, he was given the title of Meta’s new “chief AI scientist.”

The incident underscores Zuckerberg’s turbulent effort to direct the most dramatic reorganisation of Meta’s senior leadership in the group’s 20-year history.

One of the few remaining Big Tech founder-CEOs, Zuckerberg has relied on longtime acolytes such as Chief Product Officer Chris Cox to head up his favored departments and build out his upper ranks.

But in the battle to dominate AI, the billionaire is shifting towards a new and recently hired generation of executives, including Zhao, former Scale AI CEO Alexandr Wang, and former GitHub chief Nat Friedman.

Current staff are adapting to the reinvention of Meta’s AI efforts as the newcomers seek to flex their power while adjusting to the idiosyncrasies of working within a sprawling $1.95 trillion giant with a hands-on chief executive.

“There’s a lot of big men on campus,” said one investor who is close with some of Meta’s new AI leaders.

Adding to the tumult, a handful of new AI staff have already decided to leave after brief tenures, according to people familiar with the matter.

This includes Ethan Knight, a machine-learning scientist who joined the company weeks ago. Another, Avi Verma, a former OpenAI researcher, went through Meta’s onboarding process but never showed up for his first day, according to a person familiar with the matter.

In a tweet on X on Wednesday, Rishabh Agarwal, a research scientist who started at Meta in April, announced his departure. He said that while Zuckerberg and Wang’s pitch was “incredibly compelling,” he “felt the pull to take on a different kind of risk,” without giving more detail.

Meanwhile, Chaya Nayak and Loredana Crisan, generative AI staffers who had worked at Meta for nine and 10 years respectively, are among the more than half a dozen veteran employees to announce they are leaving in recent days. Wired first reported some details of recent exits, including Zhao’s threatened departure.

Meta said: “We appreciate that there’s outsized interest in seemingly every minute detail of our AI efforts, no matter how inconsequential or mundane, but we’re just focused on doing the work to deliver personal superintelligence.”

A spokesperson said Zhao had been scientific lead of the Meta superintelligence effort from the outset, and the company had waited until the team was in place before formalising his chief scientist title.

“Some attrition is normal for any organisation of this size. Most of these employees had been with the company for years, and we wish them the best,” they added.

Over the summer, Zuckerberg went on a hiring spree to coax AI researchers from rivals such as OpenAI and Apple with the promise of nine-figure sign-on bonuses and access to vast computing resources in a bid to catch up with rival labs.

This month, Meta announced it was restructuring its AI group—recently renamed Meta Superintelligence Lab (MSL)—into four distinct teams. It is the fourth overhaul of its AI efforts in six months.

“One more reorg and everything will be fixed,” joked Meta research scientist Mimansa Jaiswal on X last week. “Just one more.”

Overseeing all of Meta’s AI efforts is Wang, a well-connected and commercially minded Silicon Valley entrepreneur, who was poached by Zuckerberg as part of a $14 billion investment in his Scale data labeling group.

The 28-year-old is heading Zuckerberg’s most secretive new department known as “TBD”—shorthand for “to be determined”—which is filled with marquee hires.

In one of the new team’s first moves, Meta is no longer actively working on releasing its flagship Llama Behemoth model to the public, after it failed to perform as hoped, according to people familiar with the matter. Instead, TBD is focused on building newer cutting-edge models.

Multiple company insiders describe Zuckerberg as deeply invested and involved in the TBD team, while others criticize him for “micromanaging.”

Wang and Zuckerberg have struggled to align on a timeline to achieve the chief executive’s goal of reaching superintelligence, or AI that surpasses human capabilities, according to another person familiar with the matter. The person said Zuckerberg has urged the team to move faster.

Meta said this allegation was “manufactured tension without basis in fact that’s clearly being pushed by dramatic, navel-gazing busybodies.”

Wang’s leadership style has chafed some, according to people familiar with the matter, who noted he has no previous experience managing teams across a Big Tech corporation.

One former insider said some new AI recruits have felt frustrated by the company’s bureaucracy and internal competition for resources that they were promised, such as access to computing power.

“While TBD Labs is still relatively new, we believe it has the greatest compute-per-researcher in the industry, and that will only increase,” Meta said.

Wang and other former Scale staffers have struggled with some of the idiosyncratic ways of working at Meta, according to someone familiar with his thinking, for example having to adjust to not having revenue goals as they once did as a startup.

Despite teething problems, some have celebrated the leadership shift, including the appointment of popular entrepreneur and venture capitalist Friedman as head of Products and Applied Research, the team tasked with integrating the models into Meta’s own apps.

The hiring of Zhao, a top technical expert, has also been regarded as a coup by some at Meta and in the industry, who feel he has the decisiveness to propel the company’s AI development.

The shake-up has partially sidelined other Meta leaders. Yann LeCun, Meta’s chief AI scientist, has remained in the role but is now reporting into Wang.

Ahmad Al-Dahle, who led Meta’s Llama and generative AI efforts earlier in the year, has not been named as head of any teams. Cox remains chief product officer, but Wang reports directly into Zuckerberg—cutting Cox out of overseeing generative AI, an area that was previously under his purview.

Meta said that Cox “remains heavily involved” in its broader AI efforts, including overseeing its recommendation systems.

Going forward, Meta is weighing potential cuts to the AI team, one person said. In a memo shared with managers last week, seen by the Financial Times, Meta said that it was “temporarily pausing hiring across all [Meta Superintelligence Labs] teams, with the exception of business critical roles.”

Wang’s staff would evaluate requested hires on a case-by-case basis, but the freeze “will allow leadership to thoughtfully plan our 2026 headcount growth as we work through our strategy,” the memo said.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Trump admin dismisses Endangered Species List as “Hotel California”


“Once a species enters, they never leave,” interior secretary says. But there’s more to the story.

A female northern spotted owl catches a mouse on a stick at the Hoopa Valley Tribe on the Hoopa Valley Reservation on Aug. 28, 2024. Credit: The Washington Post/Getty Images

“You can check out any time you like, but you can never leave.”

It’s the ominous slogan for “Hotel California,” an iconic fictional lodging dreamed up by the Eagles in 1976. One of the rock band’s lead singers, Don Henley, said in an interview that the song and place “can have a million interpretations.”

For US Interior Secretary Doug Burgum, what comes to mind is a key part of one of the country’s most central conservation laws.

“The Endangered Species List has become like the Hotel California: once a species enters, they never leave,” Burgum wrote in an April post on X. He’s referring to the roster of more than 1,600 species of imperiled plants and animals that receive protections from the federal government under the Endangered Species Act to prevent their extinctions. “In fact, 97 percent of species that are added to the endangered list remain there. This is because the status quo is focused on regulation more than innovation.”

US Secretary of the Interior Doug Burgum speaks during a press conference on Aug. 11, 2025. Credit: Yasin Ozturk/Anadolu via Getty Images

Since January, the Endangered Species Act has been a frequent target of the Trump administration, which claims that the law’s strict regulations inhibit development and “energy domination.” Several recent executive orders direct the federal government to change ESA regulations in a way that could enable businesses—fossil fuel firms in particular—to bypass the typical environmental reviews associated with project approval.

More broadly, though, Burgum and other conservative politicians are implying the law is ineffective at achieving its main goal: recovering biodiversity. But a number of biologists, environmental groups and legal experts say that recovery delays for endangered species are not a result of the law itself.

Instead, they point to systemically low conservation funding and long-standing political flip-flopping as wildlife faces mounting threats from climate change and widespread habitat loss.

“We continue to wait until species are in dire straits before we protect them under the Endangered Species Act,” said David Wilcove, a professor of ecology, evolutionary biology, and public affairs at Princeton University, “and in doing that, we are more or less ensuring that it’s going to be very difficult to recover them and get them off the list.”

Endangered species by the numbers

Since the Endangered Species Act was enacted in 1973, the US Fish and Wildlife Service and the National Oceanic and Atmospheric Administration have listed more than 2,370 species of plants and animals as threatened or endangered—from school-bus-sized North Atlantic right whales off the East Coast to tiny Oahu tree snails in Hawaii. In some cases, the list covers biodiversity abroad to prevent further harm from the global wildlife trade.

Once a plant or animal is added, it receives certain protections by the federal government to stanch population losses. Those measures include safeguards from adverse effects of federal activities, restrictions on hunting or development, and active conservation plans like seed planting or captive rearing of animals.

Despite these steps, only 54 of the several thousand species listed from 1973 to 2021 recovered to the point where they no longer needed protection. A number of factors play into this low recovery rate, according to a 2022 study.

The team of researchers who worked on it dove into the population sizes for species of concern, the timelines of their listings, and recovery efforts.

A few trends emerged: Most of the imperiled plants and animals in the US do not receive protections until their populations have fallen to “dangerously low levels,” with less genetic diversity and more vulnerability to extinction from extreme events like severe weather or disease outbreaks.

Additionally, the process to get a species listed frequently took several years, allowing time for populations to dip even lower, said Wilcove, a co-author of the study.

“It’s simply a biological fact that if you don’t start protecting a species until it’s down to a small number of individuals, you’re going to face a long uphill battle,” he said. On top of that, “there are more species in trouble, but at the same time, we are providing less funding on a per-species basis for the Fish and Wildlife Service, so we’re basically asking them to do more and more with less and less.”

These findings echo a similar paper Wilcove co-authored in 1993. Since that analysis was published, the number of listings has risen, while federal funding per species has dropped substantially. “Hotel California” isn’t the right analogy for the endangered species list, in Wilcove’s view: He says it’s more akin to “the critical care unit of the hospital”—one that is struggling to stay afloat.

“It’s as though you built a great hospital and then didn’t pay any money for medical equipment or doctors,” he said. “The hospital isn’t going to work.”

Even so, the law has prevented many extinctions, experts say. Since it was passed, just 26 listed species have gone extinct, many of which had not been seen in the wild for years prior to their listing. An estimated 47 species have perished while being considered for a listing, still exposed to the threats that reduced their populations in the first place, according to an analysis by High Country News. Some listing decisions take more than a decade.

“I think the marquee statistic is how few animals have gone extinct under the watch of the federal government,” said Andrew Mergen, the director of Harvard Law School’s Emmett Environmental Law and Policy Clinic. He spent more than 30 years serving as legal counsel in the US Department of Justice, where he litigated a bevy of cases related to the Endangered Species Act.

“Our goal should be to get them off the list and to recover them, but it requires a commitment to this enterprise that we don’t see very often,” Mergen said.

History shows it can be done. Bald eagles—widely considered an emblem of American patriotism—nearly disappeared in the 1960s, with just 417 known nesting pairs left in the lower 48 states. This was largely due to habitat loss and the pesticide DDT, which caused eagle eggshells to become too brittle to survive incubation. By the time the bald eagle was listed as threatened or endangered in all lower 48 states in 1978, DDT had been outlawed, a regulation that the ESA helped enforce, experts say.

A bald eagle flies over the Massapequa Preserve on March 25, 2025 in Massapequa, New York. Credit: Bruce Bennett/Getty Images

This step, along with captive breeding programs, reintroduction efforts, law enforcement, and habitat protection, helped recover populations to nearly 10,000 nesting pairs. In 2007, bald eagles came off the list. Other once-endangered animals like American alligators and Steller sea lions have also been delisted in recent decades due to targeted limits on actions that led to their decline, such as hunting.

Recovery gets trickier when threats to species are more multifaceted, according to Taal Levi, an associate professor at Oregon State University.

“The other class of species with complex, multicausal, or poorly understood threats can be like Hotel California,” Levi said over email. “This is in part because we don’t always have funding to research the threats, and if we identify them, we don’t always have funding to mitigate the threats.”

That is particularly true for the primary driver of biodiversity decline: habitat loss. Levi studies the endangered Humboldt marten, a small carnivore that lives on the Northern California and Southern Oregon coast. The animal was once widespread, but logging in old-growth and coniferous forests decimated its habitat. Now, Levi said, it is difficult to fund research that would reveal basic things about the animals, including what constitutes high-quality habitat. Other animals, like endangered Florida panthers, also struggle to maintain high populations in environments fragmented by urbanization.

“Sometimes being in Hotel California isn’t the worst thing,” Levi wrote in his email. “We’d prefer that Florida panthers expand into other available habitat to the north of South Florida, but in lieu of that, maintaining them on the ESA seems wise to prevent their extinction.”

The private lands predicament

The federal government manages around 640 million acres of public lands and more than 3.4 million square nautical miles of ocean, and it has final say on how endangered species are protected within these areas. However, more than two-thirds of species listed under the Endangered Species Act depend at least in part on private lands, with 10 percent residing only on such property.

The law prohibits any action that would harm a listed species wherever it might be, even if unintentionally. There is also a provision that enables the government to designate certain “critical habitat” areas that are crucial for a species’ survival, including on private land.

As a result, landowners and businesses often see endangered species as a detriment to their operations, said Jonathan Adler, an environmental law professor at William & Mary Law School in Virginia.

“Your ability to use that land is going to be limited, and you can be prosecuted… That creates a lot of conflict, and it discourages landowners from being cooperative,” he said. Adler published a paper in 2024 that argued the Endangered Species Act has been largely ineffective at conserving species, mainly due to the private land problem.

In some cases, this dynamic can create what Adler calls “perverse incentives” for landowners to destroy a habitat before a species is found on their land or listed to avoid any restrictions or costs associated with the endangered label.

Take the red-cockaded woodpecker, which typically relies on old-growth pine trees for nesting. This bird was part of the first cohort listed as endangered under the Act, which limited timber production in many areas of North Carolina. However, an analysis of timber harvests from 1984 to 1990 found that the closer a timber plot was to red-cockaded woodpeckers, the more likely the pines were to be harvested at a young age. This was most likely done to prevent the trees from reaching maturity and avoid critical habitat regulation altogether, according to the 2007 study.

Adler argues that the ESA in its current form has too many sticks and not enough carrots. Over the years, Congress has implemented a few strategies to incentivize biodiversity protection on private lands, including providing tax benefits or purchasing conservation easements. This voluntary legal agreement allows an individual to receive compensation for a portion of their land while still owning it, in exchange for agreeing to certain restrictions, such as limiting development or following sustainable farming practices. Environmental groups often purchase conservation easements as well.

This strategy has helped protect animals like the California tiger salamander, San Joaquin kit fox, waterfowl, and other imperiled species. However, providing incentives to landowners for conservation is becoming less common under the Trump administration, Princeton’s Wilcove said.

The Department of the Interior did not respond to requests for comment.

“You shouldn’t reduce the prohibition on harming endangered species, but you should make it easier for landowners to do the right thing, and there are ways for doing that, and this administration is not a champion of those ways,” Wilcove said. “We’re waiting too long to protect species, and when we get around to protecting them, we’re not giving the government sufficient resources to do the job.”

Is the Endangered Species Act itself endangered? 

The Endangered Species Act was passed with wide bipartisan support. But it has become one of the most highly litigated environmental laws in the US, in part because anyone can petition to have a species listed as endangered.

A number of conservative presidential administrations and members of Congress have tried to soften the law’s power, but more environmentally minded administrations often strengthened it once again.

“It’s been a very strong law, partly because so much of the public supports it,” said Kristen Boyles, an attorney at the nonprofit Earthjustice, which has frequently filed ESA-related lawsuits. “Whenever legislative changes have been proposed, we’ve pretty much been able to defeat those.”

But experts say things may be different this time around as the Trump administration takes a more accelerated and aggressive approach to the ESA at a time when environmentalists can’t count on the Supreme Court to push back.

Since January, the president has issued several executive orders that would allow certain fossil fuel projects to get a fast-pass trip through environmental reviews, including those that could harm endangered animals or plants. In April, the Fish and Wildlife Service proposed rescinding certain habitat protections for endangered species, effectively allowing such activities as logging and oil drilling even if they degrade the surrounding environment.

Meanwhile, the Department of the Interior and NOAA have in recent months cut funding for conservation programs and laid off many of the people responsible for carrying out the Endangered Species Act’s mandate. That includes rangers who were monitoring animals like the endangered Pacific fisher in California’s Yosemite National Park.

People observe North Atlantic right whales from a boat in Canada’s Bay of Fundy. Credit: Francois Gohier/VW Pics/Universal Images Group via Getty Images

“One thing that I would say to [Secretary Burgum] is that you have a duty to faithfully execute the law as a member of the executive branch as it was enacted by Congress,” Harvard’s Mergen said. “That’s going to mean that you should not cut all your biologists out but invest in the recovery of these species, understanding what’s putting them at risk and mitigating those harms.”

Conservation funding declined long before Trump entered office, so there is “plenty of blame to go around,” Wilcove said. But political flip-flopping on how recovery projects are carried out inhibits their effectiveness, he added. “If you’re lurching between administrations that care and administrations that are hostile, it’s going to be very hard to make progress.”

For all the discussion about the economic costs of endangered species regulations, studies show that funding biodiversity protection has a strong return on investment for society.

For instance, coastal mangroves around the world reduce property damage from storms by more than $65 billion annually and protect more than 15 million people, according to 2020 research. The Fish and Wildlife Service estimates that insect crop pollination equates to $34 billion in value each year.

Protecting vulnerable animals can also benefit industries that depend on healthy landscapes and oceans. Researchers estimated in 2007 that protecting water flow in the Rio Grande in Texas for the endangered Rio Grande silvery minnow produces average annual benefits of over $200,000 for west Texas agriculture and over $1 million for El Paso municipal and industrial water users.

Endangered species can be a boon for the outdoor tourism industry, too. NOAA Fisheries estimates that the endangered North Atlantic right whale generated $2.3 billion in sales in the whale-watching industry and across the broader economy in 2008 alone, compared to annual costs of about $30 million related to shipping and fishing restrictions protecting them.

Beyond financial gains, humanity has pulled a wealth of knowledge from nature to help treat and cure diseases. For example, the anti-cancer compound paclitaxel was originally extracted from the bark of the Pacific yew tree and is “too fiendishly complex” a chemical structure for researchers to have invented on their own, according to the federal government.

Preventing endangered species from going extinct ensures that we can someday still discover what we don’t yet know, according to Dave Owen, an environmental law professor at the University of California Law, San Francisco.

“Even seemingly simple species are extraordinarily complex; they contain an incredible variety of chemicals, microbes, and genetic adaptations, all of which we can learn from—but only if the species is still around,” he said over email.

Last month, the Fish and Wildlife Service announced that the Roanoke logperch—a freshwater fish—has recovered enough to be removed from the endangered species list altogether.

In a post on X, the Interior secretary declared the delisting “proof that the Endangered Species List is no longer Hotel California. Under the Trump admin, species can finally leave!”

But this striped fish’s recovery didn’t happen overnight. Federal agencies, local partners, landowners, and conservationists spent more than three decades, millions of dollars, and countless hours removing obsolete dams, restoring wetlands, and reintroducing fish populations to help pull the Roanoke logperch back from the brink. And it was the Biden administration that first proposed delisting the fish in 2024.

These types of success stories give reasons for hope, Wilcove said.

“What I’m optimistic about is our ability to save species, if we put our mind and our resources to it.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy and the environment. Sign up for their newsletter here.



The first stars may not have been as uniformly massive as we thought


Collapsing gas clouds in the early universe may have formed lower-mass stars as well.

Stars form in the universe from massive clouds of gas. Credit: European Southern Observatory, CC BY-SA

For decades, astronomers have wondered what the very first stars in the universe were like. These stars formed new chemical elements, which enriched the universe and allowed the next generations of stars to form the first planets.

The first stars were initially composed of pure hydrogen and helium, and they were massive—hundreds to thousands of times the mass of the Sun and millions of times more luminous. Their short lives ended in enormous explosions called supernovae, so they had neither the time nor the raw materials to form planets, and they should no longer exist for astronomers to observe.

At least that’s what we thought.

Two studies published in the first half of 2025 suggest that collapsing gas clouds in the early universe may have formed lower-mass stars as well. One study uses a new astrophysical computer simulation that models how turbulence within the cloud causes fragmentation into smaller, star-forming clumps. The other study—an independent laboratory experiment—demonstrates how molecular hydrogen, a molecule essential for star formation, may have formed earlier and in larger abundances. The process involves a catalyst that may surprise chemistry teachers.

As an astronomer who studies star and planet formation and their dependence on chemical processes, I am excited at the possibility that chemistry in the first 50 million to 100 million years after the Big Bang may have been more active than we expected.

These findings suggest that the second generation of stars—the oldest stars we can currently observe and possibly the hosts of the first planets—may have formed earlier than astronomers thought.

Primordial star formation

Video illustration of the star and planet formation process. Credit: Space Telescope Science Institute.

Stars form when massive clouds of hydrogen many light-years across collapse under their own gravity. The collapse continues until a luminous sphere surrounds a dense core that is hot enough to sustain nuclear fusion.

Nuclear fusion happens when two or more atomic nuclei gain enough energy to fuse together. This process creates a new element and releases an incredible amount of energy, which heats the stellar core. In the first stars, hydrogen nuclei fused together to create helium.

The new star shines because its surface is hot, but the energy fueling that luminosity percolates up from its core. The luminosity of a star is its total energy output in the form of light. The star’s brightness is the small fraction of that luminosity that we directly observe.

This process where stars form heavier elements by nuclear fusion is called stellar nucleosynthesis. It continues in stars after they form as their physical properties slowly change. The more massive stars can produce heavier elements such as carbon, oxygen, and nitrogen, all the way up to iron, in a sequence of fusion reactions that end in a supernova explosion.

Supernovae can create even heavier elements, completing the periodic table of elements. Lower-mass stars like the Sun, with their cooler cores, can sustain fusion only up to carbon. As they exhaust the hydrogen and helium in their cores, nuclear fusion stops, and the stars slowly evaporate.

The remnant of a high-mass star supernova explosion imaged by the Chandra X-ray Observatory, left, and the remnant of a low-mass star evaporating in a blue bubble, right. Credit: CC BY 4.0

High-mass stars have high pressure and temperature in their cores, so they burn bright and use up their gaseous fuel quickly. They last only a few million years, whereas low-mass stars—those less than two times the Sun’s mass—evolve much more slowly, with lifetimes of billions or even trillions of years.

If the earliest stars were all high-mass stars, then they would have exploded long ago. But if low-mass stars also formed in the early universe, they may still exist for us to observe.

Chemistry that cools clouds

The first star-forming gas clouds, called protostellar clouds, were warm—roughly room temperature. Warm gas has internal pressure that pushes outward against the inward force of gravity trying to collapse the cloud. A hot air balloon stays inflated by the same principle. If the flame heating the air at the base of the balloon stops, the air inside cools, and the balloon begins to collapse.

Stars form when clouds of dust collapse inward and condense around a small, bright, dense core. Credit: NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

Only the most massive protostellar clouds with the most gravity could overcome the thermal pressure and eventually collapse. In this scenario, the first stars were all massive.

The only way to form the lower-mass stars we see today is for the protostellar clouds to cool. Gas in space cools by radiation, which transforms thermal energy into light that carries the energy out of the cloud. Hydrogen and helium atoms are not efficient radiators below several thousand degrees, but molecular hydrogen, H₂, is great at cooling gas at low temperatures.

When energized, H₂ emits infrared light, which cools the gas and lowers the internal pressure. That process would make gravitational collapse more likely in lower-mass clouds.

For decades, astronomers have reasoned that a low abundance of H₂ early on resulted in hotter clouds whose internal pressure was too high for them to easily collapse into stars. They concluded that only clouds with enormous masses, and therefore higher gravity, would collapse, leaving more massive stars.
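The thermal-pressure argument above is what astronomers formalize as the Jeans mass, the minimum mass a cloud needs for gravity to win against its internal pressure. A rough sketch of the scaling, using illustrative round-number temperatures rather than values from the two studies:

```latex
% Jeans mass: minimum cloud mass for gravitational collapse against thermal pressure
M_J \;\approx\; \left(\frac{5 k_B T}{G \mu m_H}\right)^{3/2}
               \left(\frac{3}{4\pi\rho}\right)^{1/2}
    \;\propto\; \frac{T^{3/2}}{\sqrt{\rho}}

% At fixed density, cooling a cloud from ~1000 K (atomic hydrogen and helium)
% to ~200 K (a rough floor reachable with H2 cooling) lowers the minimum
% collapsing mass by roughly an order of magnitude:
\frac{M_J(1000\,\mathrm{K})}{M_J(200\,\mathrm{K})}
  \;=\; \left(\frac{1000}{200}\right)^{3/2} \;=\; 5^{3/2} \;\approx\; 11
```

Cooler clouds can therefore collapse at much smaller masses, which is exactly the route to lower-mass first stars that both studies point to.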

Helium hydride

In a July 2025 journal article, physicist Florian Grussie and collaborators at the Max Planck Institute for Nuclear Physics demonstrated that the first molecule to form in the universe, helium hydride, HeH⁺, could have been more abundant in the early universe than previously thought. They used a computer model and conducted a laboratory experiment to verify this result.

Helium hydride? In high school science you probably learned that helium is a noble gas, meaning it does not react with other atoms to form molecules or chemical compounds. As it turns out, it does—but only under the extremely sparse and dark conditions of the early universe, before the first stars formed.

HeH⁺ reacts with hydrogen deuteride—HD, which is one normal hydrogen atom bonded to a heavier deuterium atom—to form H₂. In the process, HeH⁺ also acts as a coolant and releases heat in the form of light. So the high abundance of both molecular coolants earlier on may have allowed smaller clouds to cool faster and collapse to form lower-mass stars.

Gas flow also affects stellar initial masses

In another study, published in July 2025, astrophysicist Ke-Jung Chen led a research group at the Academia Sinica Institute of Astronomy and Astrophysics using a detailed computer simulation that modeled how gas in the early universe may have flowed.

The team’s model demonstrated that turbulence, or irregular motion, in giant collapsing gas clouds can form lower-mass cloud fragments from which lower-mass stars condense.

The study concluded that turbulence may have allowed these early gas clouds to form stars ranging from about the mass of the Sun up to 40 times the Sun’s mass.

The galaxy NGC 1140 is small and contains large amounts of primordial gas with far fewer elements heavier than hydrogen and helium than are present in our Sun. This composition makes it similar to the intensely star-forming galaxies found in the early universe. These early universe galaxies were the building blocks for large galaxies such as the Milky Way. Credit: ESA/Hubble & NASA, CC BY-ND

The two new studies both predict that the first population of stars could have included low-mass stars. Now, it is up to us observational astronomers to find them.

This is no easy task. Low-mass stars have low luminosities, so they are extremely faint. Several observational studies have recently reported possible detections, but none has yet been confirmed with high confidence. If they are out there, though, we will find them eventually.

Luke Keller is a professor of physics and astronomy at Ithaca College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.



How the cavefish lost its eyes—again and again


Mexican tetras in pitch-black caverns had no use for the energetically costly organs.

Photographs of Astyanax mexicanus, surface form with eyes (top) and cave form without eyes (bottom). Credit: Daniel Castranova, NICHD/NIH


Time and again, whenever a population was swept into a cave and survived long enough for natural selection to have its way, the eyes disappeared. “But it’s not that everything has been lost in cavefish,” says geneticist Jaya Krishnan of the Oklahoma Medical Research Foundation. “Many enhancements have also happened.”

Though the demise of their eyes continues to fascinate biologists, in recent years, attention has shifted to other intriguing aspects of cavefish biology. It has become increasingly clear that they haven’t just lost sight but also gained many adaptations that help them to thrive in their cave environment, including some that may hold clues to treatments for obesity and diabetes in people.

Casting off expensive eyes

It has long been debated why the eyes were lost. Some biologists used to argue that they just withered away over generations because cave-dwelling animals with faulty eyes experienced no disadvantage. But another explanation is now considered more likely, says evolutionary physiologist Nicolas Rohner of the University of Münster in Germany: “Eyes are very expensive in terms of resources and energy. Most people now agree that there must be some advantage to losing them if you don’t need them.”

Scientists have observed that mutations in different genes involved in eye formation have led to eye loss. In other words, says Krishnan, “different cavefish populations have lost their eyes in different ways.”

Meanwhile, the fishes’ other senses tend to have been enhanced. Studies have found that cave-dwelling fish can detect lower levels of amino acids than surface fish can. They also have more tastebuds and a higher density of sensitive cells along their bodies that let them sense water pressure and flow.

Regions of the brain that process other senses are also expanded, says developmental biologist Misty Riddle of the University of Nevada, Reno, who coauthored a 2023 article on Mexican tetra research in the Annual Review of Cell and Developmental Biology. “I think what happened is that you have to, sort of, kill the eye program in order to expand the other areas.”

Killing the processes that support the formation of the eye is quite literally what happens. Just like non-cave-dwelling members of the species, all cavefish embryos start making eyes. But after a few hours, cells in the developing eye start dying, until the entire structure has disappeared. Riddle thinks this apparent inefficiency may be unavoidable. “The early development of the brain and the eye are completely intertwined—they happen together,” she says. That means the least disruptive way for eyelessness to evolve may be to start making an eye and then get rid of it.

In what Krishnan and Rohner have called “one of the most striking experiments performed in the field of vertebrate evolution,” a study published in 2000 showed that the fate of the cavefish eye is heavily influenced by its lens. Scientists showed this by transplanting the lens of a surface fish embryo to a cavefish embryo, and vice versa. When they did this, the eye of the cavefish grew a retina, rod cells, and other important parts, while the eye of the surface fish stayed small and underdeveloped.

Starving and bingeing

It’s easy to see why cavefish would be at a disadvantage if they were to maintain expensive tissues they aren’t using. Since relatively little lives or grows in their caves, the fish are likely surviving on a meager diet of mostly bat feces and organic waste that washes in during the rainy season. Researchers keeping cavefish in labs have discovered that, genetically, the creatures are exquisitely adapted to absorbing and storing nutrients. “They’re constantly hungry, eating as much as they can,” Krishnan says.

Intriguingly, the fish have at least two mutations that are associated with diabetes and obesity in humans. In the cavefish, though, they may be the basis of some traits that are very helpful to a fish that occasionally has a lot of food but often has none. When scientists compare cavefish and surface fish kept in the lab under the same conditions, cavefish fed regular amounts of standard fish food “get fat. They get high blood sugar,” Rohner says. “But remarkably, they do not develop obvious signs of disease.”

Fats can be toxic for tissues, Rohner explains, so they are stored in fat cells. “But when these cells get too big, they can burst, which is why we often see chronic inflammation in humans and other animals that have stored a lot of fat in their tissues.” Yet a 2020 study by Rohner, Krishnan, and their colleagues revealed that even very well-fed cavefish had fewer signs of inflammation in their fat tissues than surface fish do.

Even in their sparse cave conditions, wild cavefish can sometimes get very fat, says Riddle. This is presumably because, whenever food ends up in the cave, the fish eat as much of it as possible, since there may be nothing else for a long time to come. Intriguingly, Riddle says, their fat is usually bright yellow because of high levels of carotenoids, the pigments in carrots that your grandmother used to tell you were good for your… eyes.

“The first thing that came to our mind, of course, was that they were accumulating these because they don’t have eyes,” says Riddle. In this species, such ideas can be tested: Scientists can cross surface fish (with eyes) and cavefish (without eyes) and look at what their offspring are like. When that’s done, Riddle says, researchers see no link between eye presence or size and the accumulation of carotenoids. Some eyeless cavefish had fat that was practically white, indicating lower carotenoid levels.

Instead, Riddle thinks these carotenoids may be another adaptation to suppress inflammation, which might be important in the wild, as cavefish are likely overeating whenever food arrives.

Studies by Krishnan, Rohner, and colleagues published in 2020 and 2022 have found other adaptations that seem to help tamp down inflammation. Cavefish cells produce lower levels of certain molecules called cytokines that promote inflammation, as well as lower levels of reactive oxygen species—tissue-damaging byproducts of the body’s metabolism that are often elevated in people with obesity or diabetes.

Krishnan is investigating this further, hoping to understand how the well-fed cavefish remain healthy. Rohner, meanwhile, is increasingly interested in how cavefish survive not just overeating, but long periods of starvation, too.

No waste

On a more fundamental level, researchers still hope to figure out why the Mexican tetra evolved into cave forms while any number of other Mexican river fish that also regularly end up in caves did not. (Globally, there are more than 200 cave-adapted fish species, but species that also still have populations on the surface are quite rare.) “Presumably, there is something about the tetras’ genetic makeup that makes it easier for them to adapt,” says Riddle.

Though cavefish are now well-established lab animals used in research and are easy to purchase for that purpose, preserving them in the wild will be important to safeguard the lessons they still hold for us. “There are hundreds of millions of the surface fish,” says Rohner, but cavefish populations are smaller and more vulnerable to pressures like pollution and people drawing water from caves during droughts.

One of Riddle’s students, David Perez Guerra, is now involved in a committee to support cavefish conservation. And researchers themselves are increasingly careful, too. “The tissues of the fish collected during our lab’s last field trip benefited nine different labs,” Riddle says. “We wasted nothing.”

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.



Why wind farms attract so much misinformation and conspiracy theory

The recent resistance

Academic work on the question of anti-wind farm activism is revealing a pattern: Conspiracy thinking is a stronger predictor of opposition than age, gender, education, or political leaning.

In Germany, the academic Kevin Winter and colleagues found that belief in conspiracies had many times more influence on wind opposition than any demographic factor. Worryingly, presenting opponents with facts was not particularly successful.

In a more recent article, based on surveys in the US, UK, and Australia that looked at people’s propensity to give credence to conspiracy theories, Winter and colleagues argued that opposition is “rooted in people’s worldviews.”

If you think climate change is a hoax or a beat-up by hysterical eco-doomers, you’re going to be easily persuaded that wind turbines are poisoning groundwater, causing blackouts, or, in Trump’s words, “driving [the whales] loco.”

Wind farms are fertile ground for such theories. They are highly visible symbols of climate policy, and complex enough to be mysterious to non-specialists. A row of wind turbines can become a target for fears about modernity, energy security, or government control.

This, say Winter and colleagues, “poses a challenge for communicators and institutions committed to accelerating the energy transition.” It’s harder to take on an entire worldview than to correct a few made-up talking points.

What is it all about?

Beneath the misinformation, often driven by money or political power, there’s a deeper issue. Some people—perhaps Trump among them—don’t want to deal with the fact that fossil technologies, which brought prosperity and a sense of control, are also causing environmental crises. And these are problems that aren’t solved with the addition of more technology. It offends their sense of invulnerability, of dominance. This “anti-reflexivity,” as some academics call it, is a refusal to reflect on the costs of past successes.

It is also bound up with identity. In some corners of the online “manosphere,” concerns over climate change are being painted as effeminate.

Many boomers, especially white heterosexual men like Trump, have felt disoriented as their world has shifted and changed around them. The clean energy transition symbolizes part of this change. Perhaps this is a good way to understand why Trump is lashing out at “windmills.”

Marc Hudson, Visiting Fellow, SPRU, University of Sussex Business School, University of Sussex. This article is republished from The Conversation under a Creative Commons license. Read the original article.



A geothermal network in Colorado could help a rural town diversify its economy


Town pitches companies to take advantage of “reliable, cost-effective heating and cooling.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.

Hayden, a small town in the mountains of northwest Colorado, is searching for ways to diversify its economy, much like other energy communities across the Mountain West.

For decades, a coal-fired power plant, now scheduled to shut down in the coming years, served as a reliable source of tax revenue, jobs, and electricity.

When town leaders in the community just west of Steamboat Springs decided to create a new business park, harnessing geothermal energy to heat and cool the buildings simply made sense.

The technology aligns with Colorado’s sustainability goals and provides access to grants and tax credits that make the project financially feasible for a town with around 2,000 residents, said Matthew Mendisco, town manager.

“We’re creating the infrastructure to attract employers, support local jobs, and give our community reliable, cost-effective heating and cooling for decades to come,” Mendisco said in a statement.

Bedrock Energy, a geothermal drilling startup company that employs advanced drilling techniques developed by the oil and gas industry, is currently drilling dozens of boreholes that will help heat and cool the town’s Northwest Colorado Business District.

The 1,000-foot-deep boreholes, or wells, will connect buildings in the industrial park to steady underground temperatures. Near the surface, the Earth holds at approximately 51° F year-round. As the drills go deeper, the temperature slowly increases to approximately 64° F near the bottom of the boreholes. Pipes looping down into each well will draw on this thermal energy for heating in the winter and cooling in the summer, significantly reducing energy needs.
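To put those two quoted figures together, the temperature profile between them can be treated as roughly linear; that linearity is a simplifying assumption for this sketch (real ground profiles are not exactly linear), and the function name is ours, not the project's:

```python
# Sketch: estimate ground temperature vs. depth for wells like Hayden's,
# assuming a linear profile between the two figures quoted in the article:
# ~51 °F near the surface and ~64 °F at the bottom of the 1,000-foot wells.

SURFACE_TEMP_F = 51.0
BOTTOM_TEMP_F = 64.0
WELL_DEPTH_FT = 1000.0

def ground_temp_f(depth_ft: float) -> float:
    """Linearly interpolate between the quoted surface and bottom temperatures,
    clamping depths to the 0–1,000 ft range of the wells."""
    clamped = min(max(depth_ft, 0.0), WELL_DEPTH_FT)
    gradient = (BOTTOM_TEMP_F - SURFACE_TEMP_F) / WELL_DEPTH_FT  # ~0.013 °F/ft
    return SURFACE_TEMP_F + gradient * clamped

for depth in (0, 250, 500, 1000):
    print(f"{depth:>5} ft: {ground_temp_f(depth):.1f} °F")
```

Even at the bottom of a well the rock is well below room temperature, which is why the heat pumps in each building are still needed to concentrate that heat in winter.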

Ground source heat pumps located in each building will provide additional heating or cooling depending on the time of year.

The project, one of the first in the region, drew the interest of some of the state’s top political leaders, who attended an open house hosted by town officials and company executives on Wednesday.

“Our energy future is happening right now—right here in Hayden,” US Senator John Hickenlooper (D-Colo.) said in a prepared statement prior to the event.

“Projects like this will drive rural economic growth while harnessing naturally occurring energy to provide reliable, cost-effective heating and cooling to local businesses,” said US Senator Michael Bennet (D-Colo.) in a written statement.

In an interview with Inside Climate News, Mendisco said that extreme weather snaps, which are not uncommon in a town more than 6,000 feet above sea level, will not force companies to pay higher prices for fossil fuels to meet energy demands, as they do elsewhere in the country. He added that the system’s rates will be “fairly sustainable, and they will be as competitive as any of our other providers, natural gas, etcetera.”

The geothermal system under construction for Hayden’s business district will be owned by the town and will initially consist of separate systems for each building that will be connected into a larger network over time. Building out the network as the business park grows will help reduce initial capital costs.

Statewide interest

Hayden received two state grants totaling $300,000 to help design and build its geothermal system.

“It wasn’t completely clear to us how much interest was really going to be out there,” Will Toor, executive director of the Colorado Energy Office, said of a grant program the state launched in 2022.

In the past few years, the program has seen significant interest, with approximately 80 communities across the state exploring similar projects, said Bryce Carter, the geothermal program manager for the state’s Energy Office.

Two projects under development are by Xcel Energy, the largest electricity and gas provider in the state. A law passed in Colorado in 2023 required large gas utilities to develop at least one geothermal heating and cooling network in the state. The networks, which connect individual buildings and boreholes into a shared thermal loop, offer high efficiency and an economy of scale, but also have high upfront construction costs.

There are now 26 utility-led geothermal heating and cooling projects under development or completed nationwide, Jessica Silber-Byrne of the Building Decarbonization Coalition, a nonprofit based in Delaware, said.

Utility companies are widely seen as a natural developer of such projects as they can shoulder multi-million dollar expenses and recoup those costs in ratepayer fees over time. The first, and so far only, geothermal network completed by a gas utility was built by Eversource Energy in Framingham, Massachusetts, last year.

Grid stress concerns heat up geothermal opportunities

Twelve states have legislation supporting or requiring the development of thermal heating and cooling networks. Regulators are interested in the technology because its high efficiency can reduce demand on electricity grids.

Geothermal heating and cooling is roughly twice as efficient as air source heat pumps, a common electric heating and cooling alternative that relies on outdoor air. During periods of extreme heat or extreme cold, air source heat pumps have to work harder, requiring approximately four times more electricity than ground source heat pumps.
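Those efficiency ratios translate directly into grid demand: a heat pump's electricity draw is the heat delivered divided by its coefficient of performance (COP). A minimal sketch, with COP values that are illustrative assumptions chosen only to match the article's "twice as efficient" and "four times more electricity" figures, not measured ratings:

```python
# Sketch of why regulators care: electricity used = heat delivered / COP.
# COP values below are illustrative assumptions, not measured equipment specs.

def electricity_kwh(heat_delivered_kwh: float, cop: float) -> float:
    """Electricity a heat pump draws to deliver a given amount of heat."""
    return heat_delivered_kwh / cop

HEAT_LOAD_KWH = 100.0  # heat a building needs on a given day (illustrative)

GROUND_SOURCE_COP = 4.0       # roughly steady year-round (ground temp is stable)
AIR_SOURCE_COP_MILD = 2.0     # about half as efficient in mild weather
AIR_SOURCE_COP_EXTREME = 1.0  # works much harder during a cold snap

print(electricity_kwh(HEAT_LOAD_KWH, GROUND_SOURCE_COP))      # 25.0 kWh
print(electricity_kwh(HEAT_LOAD_KWH, AIR_SOURCE_COP_MILD))    # 50.0 kWh
print(electricity_kwh(HEAT_LOAD_KWH, AIR_SOURCE_COP_EXTREME)) # 100.0 kWh, 4x ground source
```

The 4x spread at the extreme is the peak-demand effect: ground temperatures stay steady during a cold snap, so ground-source draw barely moves exactly when the grid is most stressed.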

As more power-hungry data centers come online, the ability of geothermal heating and cooling to reduce the energy needs of other users of the grid, particularly at periods of peak demand, could become increasingly important, geothermal proponents say.

“The most urgent conversation about energy right now is the stress on the grid,” Joselyn Lai, Bedrock Energy’s CEO, said. “Geothermal’s role in the energy ecosystem will actually increase because of the concerns about meeting load growth.”

The geothermal system will be one of the larger drilling projects to date for Bedrock, a company founded in Austin, Texas, in 2022. Bedrock, which is working on another similarly sized project in Crested Butte, Colorado, seeks to reduce the cost of relatively shallow-depth geothermal drilling through the use of robotics and data analytics that rely on artificial intelligence.

By using a single, continuous steel pipe for drilling, rather than dozens of shorter pipe segments that need to be attached as they go, Bedrock can drill faster and transmit data more easily from sensors near the drill head to the surface.

In addition to shallow, low-temperature geothermal heating and cooling networks, deep, hot-rock geothermal systems that generate steam for electricity production are also seeing increased interest. New, enhanced geothermal systems that draw on hydraulic fracturing techniques developed by the oil and gas industry and other advanced drilling methods are quickly expanding geothermal energy’s potential.

“We’re also very bullish on geothermal electricity,” said Toor, of the Colorado Energy Office, adding that the state has a goal of reducing carbon emissions from the electricity sector by 80 percent by 2030. He said geothermal power that produces clean, round-the-clock electricity will likely play a key role in meeting that target.

The University of Colorado, Boulder, is currently considering the use of geothermal energy for heating, cooling, and electricity production and has received grants for initial feasibility studies through the state’s energy office.

For town officials in Hayden, the technology’s appeal is simple.

“Geothermal works at night, it works in the day, it works whenever you want it to work,” Mendisco said. “It doesn’t matter if there’s a giant snowstorm [or] a giant rainstorm. Five hundred feet to 1,000 feet below the surface, the Earth doesn’t care. It just generates heat.”



Using pollen to make paper, sponges, and more

Softening the shell

To begin working with pollen, scientists can remove the sticky coating around the grains in a process called defatting. Stripping away these lipids and allergenic proteins is the first step in creating the empty capsules for drug delivery that Csaba seeks. Beyond that, however, pollen’s seemingly impenetrable shell—made up of the biopolymer sporopollenin—had long stumped researchers and limited its use.

A breakthrough came in 2020, when Cho and his team reported that incubating pollen in an alkaline solution of potassium hydroxide at 80° Celsius (176° Fahrenheit) could significantly alter the surface chemistry of pollen grains, allowing them to readily absorb and retain water.

The resulting pollen is as pliable as Play-Doh, says Shahrudin Ibrahim, a research fellow in Cho’s lab who helped to develop the technique. Before the treatment, pollen grains are more like marbles: hard, inert, and largely unreactive. After, the particles are so soft they stick together easily, allowing more complex structures to form. This opens up numerous applications, Ibrahim says, proudly holding up a vial of the yellow-brown slush in the lab.

When cast onto a flat mold and dried out, the microgel assembles into a paper or film, depending on the final thickness, that is strong yet flexible. It is also sensitive to external stimuli, including changes in pH and humidity. Exposure to the alkaline solution causes pollen’s constituent polymers to become more hydrophilic, or water-loving, so depending on the conditions, the gel will swell or shrink due to the absorption or expulsion of water, explains Ibrahim.

For technical applications, pollen grains are first stripped of their allergy-inducing sticky coating, in a process called defatting. Next, if treated with acid, they form hollow sporopollenin capsules that can be used to deliver drugs. If treated instead with an alkaline solution, the defatted pollen grains are transformed into a soft microgel that can be used to make thin films, paper, and sponges. Credit: Knowable Magazine

This winning combination of properties, the Singaporean researchers believe, makes pollen-based film a prospect for many future applications: smart actuators that allow devices to detect and respond to changes in their surroundings, wearable health trackers to monitor heart signals, and more. And because pollen is naturally UV-protective, there’s the possibility it could substitute for certain photonically active substrates in perovskite solar cells and other optoelectronic devices.

Using pollen to make paper, sponges, and more Read More »

the-west-texas-measles-outbreak-has-ended

The West Texas measles outbreak has ended

A large measles outbreak in Texas that affected 762 people has ended, according to an announcement Monday by the Texas Department of State Health Services. The agency says more than 42 days have passed since a new case was reported in any of the counties that previously showed evidence of ongoing transmission.

The outbreak has contributed to the worst year for measles cases in the United States in more than 30 years. As of August 5, the most recent update from the Centers for Disease Control and Prevention, a total of 1,356 confirmed measles cases have been reported across the country this year. For comparison, there were just 285 measles cases in 2024.

The Texas outbreak began in January in a rural Mennonite community with low vaccination rates. More than two-thirds of the state’s reported cases were in children, and two children in Texas died of the virus. Both were unvaccinated and had no known underlying conditions. Over the course of the outbreak, a total of 99 people were hospitalized, representing 13 percent of cases.

Measles is a highly contagious respiratory illness that can temporarily weaken the immune system, leaving individuals vulnerable to secondary infections such as pneumonia. In rare cases, it can lead to swelling of the brain and long-term neurological damage, as well as pregnancy complications such as premature birth and low birth weight. The best way to prevent the disease is the measles, mumps, and rubella (MMR) vaccine: one dose is 93 percent effective against measles, while two doses are 97 percent effective.

The West Texas measles outbreak has ended Read More »

how-a-mysterious-particle-could-explain-the-universe’s-missing-antimatter

How a mysterious particle could explain the Universe’s missing antimatter


New experiments focused on understanding the enigmatic neutrino may offer insights.

An artist’s composition of the Milky Way seen with a neutrino lens (blue). Credit: IceCube Collaboration/NSF/ESO

Everything we see around us, from the ground beneath our feet to the most remote galaxies, is made of matter. For scientists, that has long posed a problem: According to physicists’ best current theories, matter and its counterpart, antimatter, ought to have been created in equal amounts at the time of the Big Bang. But antimatter is vanishingly rare in the universe. So what happened?

Physicists don’t know the answer to that question yet, but many think the solution must involve some subtle difference in the way that matter and antimatter behave. And right now, the most promising path into that unexplored territory centers on new experiments involving the mysterious subatomic particle known as the neutrino.

“It’s not to say that neutrinos are definitely the explanation of the matter-antimatter asymmetry, but a very large class of models that can explain this asymmetry are connected to neutrinos,” says Jessica Turner, a theoretical physicist at Durham University in the United Kingdom.

Let’s back up for a moment: When physicists talk about matter, that’s just the ordinary stuff that the universe is made of—mainly protons and neutrons (which make up the nuclei of atoms), along with lighter particles like electrons. Although the term “antimatter” has a sci-fi ring to it, antimatter is not all that different from ordinary matter. Typically, the only difference is electric charge: For example, the positron—the first antimatter particle to be discovered—matches an electron in its mass but carries a positive rather than a negative charge. (Things are a bit more complicated with electrically neutral particles. For example, a photon is considered to be its own antiparticle, but an antineutron is distinct from a neutron in that it’s made up of antiquarks rather than ordinary quarks.)

Various antimatter particles can exist in nature; they occur in cosmic rays and in thunderclouds, and are produced by certain kinds of radioactive decay. (Because people—and bananas—contain a small amount of radioactive potassium, they emit minuscule amounts of antimatter in the form of positrons.)

Small amounts of antimatter have also been created by scientists in particle accelerators and other experiments, at great effort and expense—putting a damper on science fiction dreams of rockets propelled by antimatter or planet-destroying weapons energized by it.

When matter and antimatter meet, they annihilate, releasing energy in the form of radiation. Such encounters are governed by Einstein’s famous equation, E=mc²—energy equals mass times the square of the speed of light—which says you can convert a little bit of matter into a lot of energy, or vice versa. (The positrons emitted by bananas and bodies have so little mass that we don’t notice the teeny amounts of energy released when they annihilate.) Because matter and antimatter annihilate so readily, it’s hard to make a chunk of antimatter much bigger than an atom, though in theory you could have everything from antimatter molecules to antimatter planets and stars.
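The scale of that conversion is easy to check numerically. A minimal back-of-the-envelope sketch, using rounded standard constants (the roughly 1.022 MeV released in electron-positron annihilation is a well-established figure):

```python
# Energy released when an electron and a positron annihilate, via E = m c^2.
# Constants are rounded standard values.
M_ELECTRON_KG = 9.109e-31   # rest mass of the electron (and the positron)
C_M_PER_S = 2.998e8         # speed of light in vacuum
JOULES_PER_MEV = 1.602e-13  # conversion factor: 1 MeV in joules

def mass_energy_joules(mass_kg: float) -> float:
    """Energy equivalent of a given mass, E = m c^2."""
    return mass_kg * C_M_PER_S ** 2

# Both particles are converted entirely to radiation, so double one rest mass.
total_j = 2 * mass_energy_joules(M_ELECTRON_KG)
total_mev = total_j / JOULES_PER_MEV
print(f"{total_j:.3e} J  (~{total_mev:.3f} MeV)")
```

Tiny in absolute terms, which is why the positrons from a banana go unnoticed; scale the mass up to grams and the same formula yields bomb-level energies.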

But there’s a puzzle: If matter and antimatter were created in equal amounts at the time of the Big Bang, as theory suggests, shouldn’t they have annihilated, leaving a universe made up of pure energy? Why is there any matter left?

Physicists’ best guess is that some process in the early universe favored the production of matter over antimatter—but exactly what that process was remains a mystery, and the question of why we live in a matter-dominated universe is one of the most vexing problems in all of physics.

Crucially, physicists haven’t been able to think of any such process that would mesh with today’s leading theory of matter and energy, known as the Standard Model of particle physics. That leaves theorists seeking new ideas, some as-yet-unknown physics that goes beyond the Standard Model. This is where neutrinos come in.

A neutral answer

Neutrinos are tiny particles without any electric charge. (The name translates as “little neutral one.”) According to the Standard Model, they ought to be massless, like photons, but experiments beginning in the 1990s showed that they do in fact have a tiny mass. (They’re at least a million times lighter than electrons, the extreme lightweights among normal matter.) Since physicists already know that neutrinos violate the Standard Model by having mass, their hope is that learning more about these diminutive particles might yield insights into whatever lies beyond.

Neutrinos have been slow to yield their secrets, however, because they barely interact with other particles. About 60 billion neutrinos from the Sun pass through every square centimeter of your skin each second. If those neutrinos interacted with the atoms in our bodies, they would probably destroy us. Instead, they pass right through. “You most likely will not interact with a single neutrino in your lifetime,” says Pedro Machado, a physicist at Fermilab near Chicago. “It’s just so unlikely.”

Experiments, however, have shown that neutrinos “oscillate” as they travel, switching among three different identities—physicists call them “flavors”: electron neutrino, muon neutrino, and tau neutrino. Oscillation measurements have also revealed that different-flavored neutrinos have slightly different masses.

Neutrinos are known to oscillate, switching between three varieties or “flavors.” Exactly how they oscillate is governed by the laws of quantum mechanics, and the probability of finding that an electron neutrino has transformed into a muon neutrino, for example, varies as a function of the distance traveled. (The third flavor state, the tau neutrino, is very rare.) Credit: Knowable Magazine
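The distance-dependent probability the figure describes can be sketched with the standard two-flavor approximation, a textbook simplification of the full three-flavor case. The parameter values below are illustrative only, not the settings of any actual experiment:

```python
import math

# Two-flavor neutrino oscillation probability (textbook approximation):
#   P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with the mass-squared difference dm2 in eV^2, baseline L in km,
# and neutrino energy E in GeV.

def oscillation_probability(theta_rad: float, dm2_ev2: float,
                            l_km: float, e_gev: float) -> float:
    """Probability that a neutrino of flavor a is detected as flavor b."""
    amplitude = math.sin(2 * theta_rad) ** 2
    phase = 1.27 * dm2_ev2 * l_km / e_gev
    return amplitude * math.sin(phase) ** 2

# Illustrative parameters: maximal mixing, dm2 ~ 2.5e-3 eV^2, E = 2 GeV.
# The probability rises and falls with the distance traveled.
for l in (0, 400, 800, 1200):
    p = oscillation_probability(math.radians(45), 2.5e-3, l, 2.0)
    print(f"L = {l:4d} km  ->  P = {p:.3f}")
```

The oscillatory sin² dependence on L/E is what lets experiments like DUNE compare neutrino and antineutrino behavior over a fixed baseline.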

Neutrino oscillation is weird, but it may be weird in a useful way, because it might allow physicists to probe certain fundamental symmetries in nature—and these in turn may illuminate the most troubling of asymmetries, namely the universe’s matter-antimatter imbalance.

For neutrino researchers, a key symmetry is called charge-parity or CP symmetry. It’s actually a combination of two distinct symmetries: Changing a particle’s charge flips matter into antimatter (or vice versa), while changing a particle’s parity flips a particle into its mirror image (like turning a right-handed glove into a left-handed glove). So the CP-opposite version of a particle of ordinary matter is a mirror image of the corresponding antiparticle. But does this opposite particle behave exactly the same as the original one? If not, physicists say that CP symmetry is violated—a fancy way of saying that matter and antimatter behave slightly differently from one another. So any examples of CP symmetry violation in nature could help to explain the matter-antimatter imbalance.

In fact, CP violation has already been observed in some mesons, a type of subatomic particle typically made up of one quark and one antiquark, a surprising result first found in the 1960s. But it’s an extremely small effect, and it falls far short of being able to account for the universe’s matter-antimatter asymmetry.

In July 2025, scientists working at the Large Hadron Collider at CERN near Geneva reported clear evidence for a similar violation by one type of particle from a different family of subatomic particles known as baryons—but this newly observed CP violation is similarly believed to be much too small to account for the matter-antimatter imbalance.

Charge-parity or CP symmetry is a combination of two distinct symmetries: Changing a particle’s charge from positive to negative, for example, flips matter into antimatter (or vice versa), while changing a particle’s parity flips a particle into its mirror image (like turning a right-handed glove into a left-handed glove). Consider an electron: Flip its charge and you end up with a positron; flip its “handedness”—in particle physics, this is actually a quantum-mechanical property known as spin—and you get an electron with opposite spin. Flip both properties, and you get a positron that’s like a mirror image of the original electron. Whether this CP-flipped particle behaves the same way as the original electron is a key question: If it doesn’t, physicists say that CP symmetry is “violated.” Any examples of CP symmetry violation in nature could help to explain the matter-antimatter imbalance observed in the universe today. Credit: Knowable Magazine

Experiments on the horizon

So what about neutrinos? Do they violate CP symmetry—and if so, do they do it in a big enough way to explain why we live in a matter-dominated universe? This is precisely the question being addressed by a new generation of particle physics experiments. Most ambitious among them is the Deep Underground Neutrino Experiment (DUNE), which is now under construction in the United States; data collection could begin as early as 2029.

DUNE will employ the world’s most intense neutrino beam, which will fire both neutrinos and antineutrinos from Fermilab to the Sanford Underground Research Facility, located 800 miles away in South Dakota. (There’s no tunnel; the neutrinos and antineutrinos simply zip through the earth, for the most part hardly noticing that it’s there.) Detectors at each end of the beam will reveal how the particles oscillate as they traverse the distance between the two labs—and whether the behavior of the neutrinos differs from that of the antineutrinos.

DUNE won’t pin down the precise amount of neutrinos’ CP symmetry violation (if there is any), but it will set an upper limit on it. The larger the possible effect, the greater the discrepancy in the behavior of neutrinos versus antineutrinos, and the greater the likelihood that neutrinos could be responsible for the matter-antimatter asymmetry in the early universe.

The Deep Underground Neutrino Experiment (DUNE), now under construction, will see both neutrinos and antineutrinos fired from below Fermilab near Chicago to the Sanford Underground Research Facility some 800 miles away in South Dakota. Neutrinos can pass through earth unaltered, with no need of a tunnel. The ambitious experiment may reveal how the behavior of neutrinos differs from that of their antimatter counterparts, antineutrinos. Credit: Knowable Magazine

For Shirley Li, a physicist at the University of California, Irvine, the issue of neutrino CP violation is an urgent question, one that could point the way to a major rethink of particle physics. “If I could have one question answered by the end of my lifetime, I would want to know what that’s about,” she says.

Aside from being a major discovery in its own right, CP symmetry violation in neutrinos could challenge the Standard Model by pointing the way to other novel physics. For example, theorists say it would mean there could be two kinds of neutrinos—left-handed ones (the normal lightweight ones observed to date) and much heavier right-handed neutrinos, which are so far just a theoretical possibility. (The particles’ “handedness” refers to their quantum properties.)

These right-handed neutrinos could be as much as 10¹⁵ times heavier than protons, and they’d be unstable, decaying almost instantly after coming into existence. Although they’re not found in today’s universe, physicists suspect that right-handed neutrinos may have existed in the moments after the Big Bang—possibly decaying via a process that mimicked CP violation and favored the creation of matter over antimatter.

It’s even possible that neutrinos can act as their own antiparticles—that is, that neutrinos could turn into antineutrinos and vice versa. This scenario, which the discovery of right-handed neutrinos would support, would make neutrinos fundamentally different from more familiar particles like quarks and electrons. If antineutrinos can turn into neutrinos, that could help explain where the antimatter went during the universe’s earliest moments.

One way to test this idea is to look for an unusual type of radioactive decay — theorized but thus far never observed—known as “neutrinoless double-beta decay.” In regular double-beta decay, two neutrons in a nucleus simultaneously decay into protons, releasing two electrons and two antineutrinos in the process. But if neutrinos can act as their own antiparticles, then the two neutrinos could annihilate each other, leaving only the two electrons and a burst of energy.

A number of experiments are underway or planned to look for this decay process, including the KamLAND-Zen experiment, at the Kamioka neutrino detection facility in Japan; the nEXO experiment at the SNOLAB facility in Ontario, Canada; the NEXT experiment at the Canfranc Underground Laboratory in Spain; and the LEGEND experiment at the Gran Sasso laboratory in Italy. KamLAND-Zen, NEXT, and LEGEND are already up and running.

While these experiments differ in the details, they all employ the same general strategy: They use a giant vat of dense, radioactive material with arrays of detectors that look for the emission of unusually energetic electrons. (The electrons’ expected neutrino companions would be missing, with the energy they would have had instead carried by the electrons.)

While the neutrino remains one of the most mysterious of the known particles, it is slowly but steadily giving up its secrets. As it does so, it may crack the puzzle of our matter-dominated universe — a universe that happens to allow inquisitive creatures like us to flourish. The neutrinos that zip silently through your body every second are gradually revealing the universe in a new light.

“I think we’re entering a very exciting era,” says Turner.

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.

How a mysterious particle could explain the Universe’s missing antimatter Read More »

upcoming-deepseek-ai-model-failed-to-train-using-huawei’s-chips

Upcoming DeepSeek AI model failed to train using Huawei’s chips

DeepSeek is still working with Huawei to make the model compatible with Ascend for inference, the people said.

Founder Liang Wenfeng has said internally he is dissatisfied with R2’s progress and has been pushing to spend more time to build an advanced model that can sustain the company’s lead in the AI field, they said.

The R2 launch was also delayed because data labeling for the updated model took longer than expected, another person added. Chinese media reports have suggested that the model may be released in the coming weeks.

“Models are commodities that can be easily swapped out,” said Ritwik Gupta, an AI researcher at the University of California, Berkeley. “A lot of developers are using Alibaba’s Qwen3, which is powerful and flexible.”

Gupta noted that Qwen3 adopted DeepSeek’s core concepts, such as its training algorithm that makes the model capable of reasoning, but made them more efficient to use.

Gupta, who tracks Huawei’s AI ecosystem, said the company is facing “growing pains” in using Ascend for training, though he expects the Chinese national champion to adapt eventually.

“Just because we’re not seeing leading models trained on Huawei today doesn’t mean it won’t happen in the future. It’s a matter of time,” he said.

Nvidia, a chipmaker at the center of a geopolitical battle between Beijing and Washington, recently agreed to give the US government a cut of its revenues in China in order to resume sales of its H20 chips to the country.

“Developers will play a crucial role in building the winning AI ecosystem,” said Nvidia about Chinese companies using its chips. “Surrendering entire markets and developers would only hurt American economic and national security.”

DeepSeek and Huawei did not respond to a request for comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Upcoming DeepSeek AI model failed to train using Huawei’s chips Read More »

openai,-cofounder-sam-altman-to-take-on-neuralink-with-new-startup

OpenAI, cofounder Sam Altman to take on Neuralink with new startup

The company aims to raise $250 million from OpenAI and other investors, although the talks are at an early stage. Altman will not personally invest.

The new venture would be in direct competition with Neuralink, founded by Musk in 2016, which seeks to wire brains directly to computers.

Musk and Altman cofounded OpenAI, but Musk left the board in 2018 after clashing with Altman, and the two have since become fierce rivals in their pursuit of AI.

Musk launched his own AI start-up, xAI, in 2023 and has been attempting to block OpenAI’s conversion from a nonprofit in the courts. Musk donated much of the initial capital to get OpenAI off the ground.

Neuralink is one of a pack of so-called brain-computer interface companies; other start-ups, such as Precision Neuroscience and Synchron, have also emerged on the scene.

Neuralink earlier this year raised $650 million at a $9 billion valuation, and it is backed by investors including Sequoia Capital, Thrive Capital, and Vy Capital. Altman had previously invested in Neuralink.

Brain implants are a decades-old technology, but recent leaps forward in AI and in the electronic components used to collect brain signals have offered the prospect that they can become more practically useful.

Altman has backed a number of other companies in markets adjacent to ChatGPT-maker OpenAI, which is valued at $300 billion. In addition to cofounding World, he has also invested in the nuclear fission group Oklo and nuclear fusion project Helion.

OpenAI declined to comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

OpenAI, cofounder Sam Altman to take on Neuralink with new startup Read More »