Using pollen to make paper, sponges, and more

Softening the shell

To begin working with pollen, scientists can remove the sticky coating around the grains in a process called defatting. Stripping away these lipids and allergenic proteins is the first step in creating the empty capsules for drug delivery that Csaba seeks. Beyond that, however, pollen’s seemingly impenetrable shell—made up of the biopolymer sporopollenin—had long stumped researchers and limited its use.

A breakthrough came in 2020, when Cho and his team reported that incubating pollen in an alkaline solution of potassium hydroxide at 80° Celsius (176° Fahrenheit) could significantly alter the surface chemistry of pollen grains, allowing them to readily absorb and retain water.

The resulting pollen is as pliable as Play-Doh, says Shahrudin Ibrahim, a research fellow in Cho’s lab who helped to develop the technique. Before the treatment, pollen grains are more like marbles: hard, inert, and largely unreactive. After, the particles are so soft they stick together easily, allowing more complex structures to form. This opens up numerous applications, Ibrahim says, proudly holding up a vial of the yellow-brown slush in the lab.

When cast onto a flat mold and dried out, the microgel assembles into a paper or film, depending on the final thickness, that is strong yet flexible. It is also sensitive to external stimuli, including changes in pH and humidity. Exposure to the alkaline solution causes pollen’s constituent polymers to become more hydrophilic, or water-loving, so depending on the conditions, the gel will swell or shrink due to the absorption or expulsion of water, explains Ibrahim.

For technical applications, pollen grains are first stripped of their allergy-inducing sticky coating, in a process called defatting. Next, if treated with acid, they form hollow sporopollenin capsules that can be used to deliver drugs. If treated instead with an alkaline solution, the defatted pollen grains are transformed into a soft microgel that can be used to make thin films, paper, and sponges. Credit: Knowable Magazine

This winning combination of properties, the Singaporean researchers believe, makes pollen-based film a prospect for many future applications: smart actuators that allow devices to detect and respond to changes in their surroundings, wearable health trackers to monitor heart signals, and more. And because pollen is naturally UV-protective, there’s the possibility it could substitute for certain photonically active substrates in perovskite solar cells and other optoelectronic devices.

The West Texas measles outbreak has ended

A large measles outbreak in Texas that has affected 762 people has now ended, according to an announcement Monday by the Texas Department of State Health Services. The agency says it has been more than 42 days since a new case was reported in any of the counties that previously showed evidence of ongoing transmission.

The outbreak has contributed to the worst year for measles cases in the United States in more than 30 years. As of August 5, the most recent update from the Centers for Disease Control and Prevention, a total of 1,356 confirmed measles cases have been reported across the country this year. For comparison, there were just 285 measles cases in 2024.

The Texas outbreak began in January in a rural Mennonite community with low vaccination rates. More than two-thirds of the state’s reported cases were in children, and two children in Texas died of the virus. Both were unvaccinated and had no known underlying conditions. Over the course of the outbreak, a total of 99 people were hospitalized, representing 13 percent of cases.

Measles is a highly contagious respiratory illness that can temporarily weaken the immune system, leaving individuals vulnerable to secondary infections such as pneumonia. In rare cases, it can lead to swelling of the brain and long-term neurological damage, and it can cause pregnancy complications such as premature birth and low birth weight. The best way to prevent the disease is the measles, mumps, and rubella (MMR) vaccine. One dose of the vaccine is 93 percent effective against measles, while two doses are 97 percent effective.

How a mysterious particle could explain the Universe’s missing antimatter


New experiments focused on understanding the enigmatic neutrino may offer insights.

An artist’s composition of the Milky Way seen with a neutrino lens (blue). Credit: IceCube Collaboration/NSF/ESO

Everything we see around us, from the ground beneath our feet to the most remote galaxies, is made of matter. For scientists, that has long posed a problem: According to physicists’ best current theories, matter and its counterpart, antimatter, ought to have been created in equal amounts at the time of the Big Bang. But antimatter is vanishingly rare in the universe. So what happened?

Physicists don’t know the answer to that question yet, but many think the solution must involve some subtle difference in the way that matter and antimatter behave. And right now, the most promising path into that unexplored territory centers on new experiments involving the mysterious subatomic particle known as the neutrino.

“It’s not to say that neutrinos are definitely the explanation of the matter-antimatter asymmetry, but a very large class of models that can explain this asymmetry are connected to neutrinos,” says Jessica Turner, a theoretical physicist at Durham University in the United Kingdom.

Let’s back up for a moment: When physicists talk about matter, that’s just the ordinary stuff that the universe is made of—mainly protons and neutrons (which make up the nuclei of atoms), along with lighter particles like electrons. Although the term “antimatter” has a sci-fi ring to it, antimatter is not all that different from ordinary matter. Typically, the only difference is electric charge: For example, the positron—the first antimatter particle to be discovered—matches an electron in its mass but carries a positive rather than a negative charge. (Things are a bit more complicated with electrically neutral particles. For example, a photon is considered to be its own antiparticle, but an antineutron is distinct from a neutron in that it’s made up of antiquarks rather than ordinary quarks.)

Various antimatter particles can exist in nature; they occur in cosmic rays and in thunderclouds, and are produced by certain kinds of radioactive decay. (Because people—and bananas—contain a small amount of radioactive potassium, they emit minuscule amounts of antimatter in the form of positrons.)

Small amounts of antimatter have also been created by scientists in particle accelerators and other experiments, at great effort and expense—putting a damper on science fiction dreams of rockets propelled by antimatter or planet-destroying weapons energized by it.

When matter and antimatter meet, they annihilate, releasing energy in the form of radiation. Such encounters are governed by Einstein’s famous equation, E=mc²—energy equals mass times the square of the speed of light—which says you can convert a little bit of matter into a lot of energy, or vice versa. (The positrons emitted by bananas and bodies have so little mass that we don’t notice the teeny amounts of energy released when they annihilate.) Because matter and antimatter annihilate so readily, it’s hard to make a chunk of antimatter much bigger than an atom, though in theory you could have everything from antimatter molecules to antimatter planets and stars.
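
For a sense of scale, a quick back-of-the-envelope calculation: an electron and a positron each have a mass of about $9.11 \times 10^{-31}$ kilograms, so a single annihilation releases

$$E = 2 m_e c^2 \approx 2 \times (9.11 \times 10^{-31}\,\mathrm{kg}) \times (3.00 \times 10^{8}\,\mathrm{m/s})^2 \approx 1.6 \times 10^{-13}\,\mathrm{J} \approx 1.02\,\mathrm{MeV}.$$

That is a minuscule amount of energy in absolute terms, which is why the positrons from a banana go unnoticed. Relative to the mass destroyed, though, it is enormous: annihilating one gram of matter with one gram of antimatter would release roughly $1.8 \times 10^{14}$ joules, comparable to a sizable nuclear weapon.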

But there’s a puzzle: If matter and antimatter were created in equal amounts at the time of the Big Bang, as theory suggests, shouldn’t they have annihilated, leaving a universe made up of pure energy? Why is there any matter left?

Physicists’ best guess is that some process in the early universe favored the production of matter over antimatter—but exactly what that process was is a mystery, and the question of why we live in a matter-dominated universe is one of the most vexing problems in all of physics.

Crucially, physicists haven’t been able to think of any such process that would mesh with today’s leading theory of matter and energy, known as the Standard Model of particle physics. That leaves theorists seeking new ideas, some as-yet-unknown physics that goes beyond the Standard Model. This is where neutrinos come in.

A neutral answer

Neutrinos are tiny particles without any electric charge. (The name translates as “little neutral one.”) According to the Standard Model, they ought to be massless, like photons, but experiments beginning in the 1990s showed that they do in fact have a tiny mass. (They’re at least a million times lighter than electrons, the extreme lightweights among normal matter.) Since physicists already know that neutrinos violate the Standard Model by having mass, their hope is that learning more about these diminutive particles might yield insights into whatever lies beyond.

Neutrinos have been slow to yield their secrets, however, because they barely interact with other particles. About 60 billion neutrinos from the Sun pass through every square centimeter of your skin each second. If those neutrinos interacted with the atoms in our bodies, they would probably destroy us. Instead, they pass right through. “You most likely will not interact with a single neutrino in your lifetime,” says Pedro Machado, a physicist at Fermilab near Chicago. “It’s just so unlikely.”

Experiments, however, have shown that neutrinos “oscillate” as they travel, switching among three different identities—physicists call them “flavors”: electron neutrino, muon neutrino, and tau neutrino. Oscillation measurements have also revealed that different-flavored neutrinos have slightly different masses.

Neutrinos are known to oscillate, switching between three varieties or “flavors.” Exactly how they oscillate is governed by the laws of quantum mechanics, and the probability of finding that an electron neutrino has transformed into a muon neutrino, for example, varies as a function of the distance traveled. (The third flavor state, the tau neutrino, is very rare.) Credit: Knowable Magazine
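
In the simplified two-flavor picture, the standard quantum-mechanical result behind that distance dependence takes a compact form. Writing $\theta$ for the mixing angle, $\Delta m^2$ for the difference of the squared masses, $L$ for the distance traveled, and $E$ for the neutrino energy:

$$P(\nu_\mu \to \nu_e) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right) = \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\;L[\mathrm{km}]}{E[\mathrm{GeV}]}\right)$$

The oscillation exists only if $\Delta m^2 \neq 0$, which is why the observation of flavor switching was also the discovery that neutrinos have mass.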

Neutrino oscillation is weird, but it may be weird in a useful way, because it might allow physicists to probe certain fundamental symmetries in nature—and these in turn may illuminate the most troubling of asymmetries, namely the universe’s matter-antimatter imbalance.

For neutrino researchers, a key symmetry is called charge-parity or CP symmetry. It’s actually a combination of two distinct symmetries: Changing a particle’s charge flips matter into antimatter (or vice versa), while changing a particle’s parity flips a particle into its mirror image (like turning a right-handed glove into a left-handed glove). So the CP-opposite version of a particle of ordinary matter is a mirror image of the corresponding antiparticle. But does this opposite particle behave exactly the same as the original one? If not, physicists say that CP symmetry is violated—a fancy way of saying that matter and antimatter behave slightly differently from one another. So any examples of CP symmetry violation in nature could help to explain the matter-antimatter imbalance.

In fact, CP violation has already been observed in some mesons, a type of subatomic particle typically made up of one quark and one antiquark, a surprising result first found in the 1960s. But it’s an extremely small effect, and it falls far short of being able to account for the universe’s matter-antimatter asymmetry.

In July 2025, scientists working at the Large Hadron Collider at CERN near Geneva reported clear evidence for a similar violation by one type of particle from a different family of subatomic particles known as baryons—but this newly observed CP violation is similarly believed to be much too small to account for the matter-antimatter imbalance.

Charge-parity or CP symmetry is a combination of two distinct symmetries: Changing a particle’s charge from positive to negative, for example, flips matter into antimatter (or vice versa), while changing a particle’s parity flips a particle into its mirror image (like turning a right-handed glove into a left-handed glove). Consider an electron: Flip its charge and you end up with a positron; flip its “handedness”—in particle physics, this is actually a quantum-mechanical property known as spin—and you get an electron with opposite spin. Flip both properties, and you get a positron that’s like a mirror image of the original electron. Whether this CP-flipped particle behaves the same way as the original electron is a key question: If it doesn’t, physicists say that CP symmetry is “violated.” Any examples of CP symmetry violation in nature could help to explain the matter-antimatter imbalance observed in the universe today. Credit: Knowable Magazine

Experiments on the horizon

So what about neutrinos? Do they violate CP symmetry—and if so, do they do it in a big enough way to explain why we live in a matter-dominated universe? This is precisely the question being addressed by a new generation of particle physics experiments. Most ambitious among them is the Deep Underground Neutrino Experiment (DUNE), which is now under construction in the United States; data collection could begin as early as 2029.

DUNE will employ the world’s most intense neutrino beam, which will fire both neutrinos and antineutrinos from Fermilab to the Sanford Underground Research Facility, located 800 miles away in South Dakota. (There’s no tunnel; the neutrinos and antineutrinos simply zip through the earth, for the most part hardly noticing that it’s there.) Detectors at each end of the beam will reveal how the particles oscillate as they traverse the distance between the two labs—and whether the behavior of the neutrinos differs from that of the antineutrinos.

DUNE won’t pin down the precise amount of neutrinos’ CP symmetry violation (if there is any), but it will set an upper limit on it. The larger the possible effect, the greater the discrepancy in the behavior of neutrinos versus antineutrinos, and the greater the likelihood that neutrinos could be responsible for the matter-antimatter asymmetry in the early universe.

The Deep Underground Neutrino Experiment (DUNE), now under construction, will see both neutrinos and antineutrinos fired from below Fermilab near Chicago to the Sanford Underground Research Facility some 800 miles away in South Dakota. Neutrinos can pass through earth unaltered, with no need of a tunnel. The ambitious experiment may reveal how the behavior of neutrinos differs from that of their antimatter counterparts, antineutrinos. Credit: Knowable Magazine

For Shirley Li, a physicist at the University of California, Irvine, the issue of neutrino CP violation is an urgent question, one that could point the way to a major rethink of particle physics. “If I could have one question answered by the end of my lifetime, I would want to know what that’s about,” she says.

Aside from being a major discovery in its own right, CP symmetry violation in neutrinos could challenge the Standard Model by pointing the way to other novel physics. For example, theorists say it would mean there could be two kinds of neutrinos—left-handed ones (the normal lightweight ones observed to date) and much heavier right-handed neutrinos, which are so far just a theoretical possibility. (The particles’ “handedness” refers to their quantum properties.)

These right-handed neutrinos could be as much as 10¹⁵ times heavier than protons, and they’d be unstable, decaying almost instantly after coming into existence. Although they’re not found in today’s universe, physicists suspect that right-handed neutrinos may have existed in the moments after the Big Bang—possibly decaying via a process that mimicked CP violation and favored the creation of matter over antimatter.

It’s even possible that neutrinos can act as their own antiparticles—that is, that neutrinos could turn into antineutrinos and vice versa. This scenario, which the discovery of right-handed neutrinos would support, would make neutrinos fundamentally different from more familiar particles like quarks and electrons. If antineutrinos can turn into neutrinos, that could help explain where the antimatter went during the universe’s earliest moments.

One way to test this idea is to look for an unusual type of radioactive decay—theorized but thus far never observed—known as “neutrinoless double-beta decay.” In regular double-beta decay, two neutrons in a nucleus simultaneously decay into protons, releasing two electrons and two antineutrinos in the process. But if neutrinos can act as their own antiparticles, then the two antineutrinos could annihilate each other, leaving only the two electrons and a burst of energy.
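
Written as nuclear reactions for a nucleus with mass number $A$ and atomic number $Z$, the two variants look like this:

$$\text{ordinary double-beta decay:}\quad (A, Z) \to (A, Z+2) + 2e^- + 2\bar{\nu}_e$$

$$\text{neutrinoless double-beta decay:}\quad (A, Z) \to (A, Z+2) + 2e^-$$

In the neutrinoless mode, two more leptons come out than went in, a violation of lepton-number conservation that the Standard Model forbids, so even a single confirmed observation would be decisive evidence that neutrinos are their own antiparticles.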

A number of experiments are underway or planned to look for this decay process, including the KamLAND-Zen experiment, at the Kamioka neutrino detection facility in Japan; the nEXO experiment at the SNOLAB facility in Ontario, Canada; the NEXT experiment at the Canfranc Underground Laboratory in Spain; and the LEGEND experiment at the Gran Sasso laboratory in Italy. KamLAND-Zen, NEXT, and LEGEND are already up and running.

While these experiments differ in the details, they all employ the same general strategy: They use a giant vat of dense, radioactive material with arrays of detectors that look for the emission of unusually energetic electrons. (The electrons’ expected neutrino companions would be missing, with the energy they would have had instead carried by the electrons.)

While the neutrino remains one of the most mysterious of the known particles, it is slowly but steadily giving up its secrets. As it does so, it may crack the puzzle of our matter-dominated universe—a universe that happens to allow inquisitive creatures like us to flourish. The neutrinos that zip silently through your body every second are gradually revealing the universe in a new light.

“I think we’re entering a very exciting era,” says Turner.

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.

Upcoming DeepSeek AI model failed to train using Huawei’s chips

DeepSeek is still working with Huawei to make the model compatible with Ascend for inference, the people said.

Founder Liang Wenfeng has said internally he is dissatisfied with R2’s progress and has been pushing to spend more time to build an advanced model that can sustain the company’s lead in the AI field, they said.

The R2 launch was also delayed because of longer-than-expected data labeling for its updated model, another person added. Chinese media reports have suggested that the model may be released in the coming weeks.

“Models are commodities that can be easily swapped out,” said Ritwik Gupta, an AI researcher at the University of California, Berkeley. “A lot of developers are using Alibaba’s Qwen3, which is powerful and flexible.”

Gupta noted that Qwen3 adopted DeepSeek’s core concepts, such as its training algorithm that makes the model capable of reasoning, but made them more efficient to use.

Gupta, who tracks Huawei’s AI ecosystem, said the company is facing “growing pains” in using Ascend for training, though he expects the Chinese national champion to adapt eventually.

“Just because we’re not seeing leading models trained on Huawei today doesn’t mean it won’t happen in the future. It’s a matter of time,” he said.

Nvidia, a chipmaker at the center of a geopolitical battle between Beijing and Washington, recently agreed to give the US government a cut of its revenues in China in order to resume sales of its H20 chips to the country.

“Developers will play a crucial role in building the winning AI ecosystem,” said Nvidia about Chinese companies using its chips. “Surrendering entire markets and developers would only hurt American economic and national security.”

DeepSeek and Huawei did not respond to a request for comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

OpenAI, cofounder Sam Altman to take on Neuralink with new startup

The company aims to raise $250 million from OpenAI and other investors, although the talks are at an early stage. Altman will not personally invest.

The new venture would be in direct competition with Neuralink, founded by Musk in 2016, which seeks to wire brains directly to computers.

Musk and Altman cofounded OpenAI, but Musk left the board in 2018 after clashing with Altman, and the two have since become fierce rivals in their pursuit of AI.

Musk launched his own AI start-up, xAI, in 2023 and has been attempting to block OpenAI’s conversion from a nonprofit in the courts. Musk donated much of the initial capital to get OpenAI off the ground.

Neuralink is one of a pack of so-called brain-computer interface companies; other start-ups, such as Precision Neuroscience and Synchron, have also emerged on the scene.

Neuralink earlier this year raised $650 million at a $9 billion valuation, and it is backed by investors including Sequoia Capital, Thrive Capital, and Vy Capital. Altman had previously invested in Neuralink.

Brain implants are a decades-old technology, but recent leaps forward in AI and in the electronic components used to collect brain signals have offered the prospect that they can become more practically useful.

Altman has backed a number of other companies in markets adjacent to ChatGPT-maker OpenAI, which is valued at $300 billion. In addition to cofounding World, he has also invested in the nuclear fission group Oklo and nuclear fusion project Helion.

OpenAI declined to comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

China tells Alibaba, ByteDance to justify purchases of Nvidia AI chips

Beijing is demanding tech companies including Alibaba and ByteDance justify their orders of Nvidia’s H20 artificial intelligence chips, complicating the US chipmaker’s business in China after striking an export arrangement with the Trump administration.

The tech companies have been asked by regulators such as the Ministry of Industry and Information Technology (MIIT) to explain why they need to order Nvidia’s H20 chips instead of using domestic alternatives, said three people familiar with the situation.

Some tech companies, which were the main buyers of Nvidia’s H20 chips before their sale in China was restricted, were planning to downsize their orders as a result of the questions from regulators, said two of the people.

“It’s not banned but has kind of become a politically incorrect thing to do,” said one Chinese data center operator about purchasing Nvidia’s H20 chips.

Alibaba, ByteDance, and MIIT did not immediately respond to a request for comment.

Chinese regulators have expressed growing disapproval of companies using Nvidia’s chips for any government or security-related projects. Bloomberg reported on Tuesday that Chinese authorities had sent notices to a range of companies discouraging the use of the H20 chips, particularly for government-related work.

Experiment will attempt to counter climate change by altering ocean


The Gulf of Maine will be the site of safety and effectiveness testing.

Woods Hole researchers, Adam Subhas (left) and Chris Murray, conducted a series of lab experiments earlier this year to test the impact of an alkaline substance, known as sodium hydroxide, on copepods in the Gulf of Maine. Credit: Daniel Hentz/Woods Hole Oceanographic Institution

Later this summer, a fluorescent reddish-pink spiral will bloom across the Wilkinson Basin in the Gulf of Maine, about 40 miles northeast of Cape Cod. Scientists from the Woods Hole Oceanographic Institution will release the nontoxic water tracer dye behind their research vessel, where it will unfurl into a half-mile wide temporary plume, bright enough to catch the attention of passing boats and even satellites.

As it spreads, the researchers will track its movement to monitor a tightly controlled, federally approved experiment testing whether the ocean can be engineered to absorb more carbon, and in turn, help combat the climate crisis.

As the world struggles to stay below the 1.5° Celsius global warming threshold—a goal set out in the Paris Agreement to avoid the most severe impacts of climate change—experts agree that reducing greenhouse gas emissions won’t be enough to avoid overshooting this target. The latest Intergovernmental Panel on Climate Change report, published in 2023, emphasizes the urgent need to actively remove carbon from the atmosphere, too.

“If we really want to have a shot at mitigating the worst effects of climate change, carbon removal needs to start scaling to the point where it can supplement large-scale emissions reductions,” said Adam Subhas, an associate scientist in marine chemistry and geochemistry at the Woods Hole Oceanographic Institution, who will oversee the week-long experiment.

The test is part of the LOC-NESS project—short for Locking away Ocean Carbon in the Northeast Shelf and Slope—which Subhas has been leading since 2023. The ongoing research initiative is evaluating the effectiveness and environmental impact of a marine carbon dioxide removal approach called ocean alkalinity enhancement (OAE).

This method of marine carbon dioxide removal involves adding alkaline substances to the ocean to boost its natural ability to neutralize acids produced by greenhouse gases. It’s promising, Subhas said, because it has the potential to lock away carbon permanently.

“Ocean alkalinity enhancement does have the potential to reach sort of gigatons per year of carbon removal, which is the scale at which you would need to supplement emissions reductions,” Subhas said. “Once the alkalinity is dissolved in seawater, it reacts with carbon dioxide and forms bicarbonate—essentially dissolved baking soda. That bicarbonate is one of the most stable forms of carbon in the ocean, and it can stay locked away for tens of thousands, even hundreds of thousands of years.”
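
In simplified form, the chemistry Subhas describes is an acid-base reaction. Taking a hydroxide such as the sodium hydroxide this project uses as the alkalinity source (real seawater carbonate chemistry involves further equilibria):

$$\mathrm{NaOH \to Na^+ + OH^-}, \qquad \mathrm{OH^- + CO_2 \to HCO_3^-}$$

The net effect is that each mole of added hydroxide can pull roughly one mole of CO2 out of the air-sea system and park it as dissolved bicarbonate.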

But it will be a long time before this could happen at the magnitude needed to mitigate climate change.

According to Wil Burns, co-director of the Institute for Responsible Carbon Removal at American University, between 6 and 10 gigatons of carbon need to be removed from the atmosphere annually by 2050 in order to meet the Paris Agreement climate target. “It’s a titanic task,” he said.

Most marine carbon dioxide removal initiatives, including those involving OAE, are still in a nascent stage.

“We’re really far from having any of these technologies be mature,” said Lisa Levin, an oceanographer and professor at the Scripps Institution of Oceanography at the University of California San Diego, who spoke on a panel at the United Nations Ocean Conference in June about the potential environmental risks of mining and carbon dioxide removal on deep-sea ecosystems. “We’re looking at a decade until any serious, large-scale marine carbon removal is going to be able to happen—or more.”

“In the meantime, everybody acknowledges that what we have to do is to reduce emissions, right, and not rely on taking carbon out of the atmosphere,” she said.

Marine carbon dioxide removal

So far, most carbon removal efforts have centered on land-based strategies, such as planting trees, restoring soils, and building machines that capture carbon dioxide directly from the air. Increasingly, researchers are exploring whether the oceans might help.

“Looking at the oceans makes a lot of sense when it comes to carbon removal, because the oceans sequester 70 times more CO2 than terrestrial sources,” Burns said. What if they could hold more?

That question is drawing growing attention, not only from scientists. In recent years, a wave of private companies have started piloting various methods of removing carbon from the oceans.

“It’s really the private sector that’s pushing the scaling of this very quickly,” Subhas said. In the US and Canada, he said, there are at least four companies piloting varied ocean alkalinity enhancement techniques.

Last year, Ebb Carbon, a California-based startup focused on marine carbon dioxide removal, signed a deal with Microsoft to remove up to 350,000 metric tons of CO2 over the next decade using an ocean alkalinity enhancement process that splits seawater into acidic and alkaline streams. The alkaline stream is then returned to the sea where it reacts with CO2 and stores it as bicarbonate, enabling the ocean to absorb more carbon dioxide from the atmosphere. In return, Microsoft will purchase carbon removal credits from the startup.

Another company called Vesta, which has headquarters in San Francisco, is using an approach called Coastal Carbon Capture. This involves adding finely ground olivine—a naturally occurring olive-green mineral—to sandy beaches. From there, ocean tides and waves carry it into the sea. Olivine reacts quickly with seawater in a process known as enhanced weathering, increasing ocean alkalinity. The company piloted one of its projects in Duck, North Carolina, last year, where it estimated that approximately 5,000 metric tons of carbon dioxide would be removed through coastal carbon capture after accounting for project emissions, according to its website.
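
For the magnesium-rich end-member of olivine (forsterite), the overall weathering reaction commonly cited for this approach is:

$$\mathrm{Mg_2SiO_4 + 4\,CO_2 + 4\,H_2O \to 2\,Mg^{2+} + 4\,HCO_3^- + H_4SiO_4}$$

On paper, each mole of dissolved olivine can bind up to four moles of CO2 as bicarbonate, though the realized ratio is lower once seawater chemistry feedbacks are accounted for.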

But these efforts are not without risk, AU’s Burns said. “We have to proceed in an extremely precautionary manner,” he said.

Some scientists are concerned that OAE initiatives that involve olivine, which contains heavy metals like nickel and chromium, may harm marine life, he said. Another concern is that the olivine could cloud certain ocean areas and block light from penetrating to deeper depths. If too much alkalinity is introduced too fast in concentrated areas, he said, some animals might not be able to adjust.

Other marine carbon dioxide removal projects use methods besides OAE. Some involve adding iron to the ocean to stimulate growth in microscopic plants called phytoplankton, which absorb carbon dioxide through photosynthesis. Others include the cultivation of large-scale farms of kelp and seaweed, which also absorb carbon dioxide through photosynthesis. The marine plants can then be sunk in the deep ocean to store the carbon they absorbed.

In 2023, researchers from Woods Hole Oceanographic Institution conducted their first OAE-related field experiment from the 90-foot research vessel R/V Connecticut south of Massachusetts. As part of this first experiment, nontoxic water tracer dye was released into the ocean. Researchers tracked its movement through the water for 72 hours to model the dispersion of a plume of alkalinity over time. Credit: Woods Hole Oceanographic Institution

One technique that has not yet been tried, but may be piloted in the future, according to the science-based conservation nonprofit Ocean Visions, would employ new technology to accelerate the ocean’s natural process of transferring surface water and carbon to the deep ocean. That’s called artificial downwelling. In a reverse process—artificial upwelling—cooler, nutrient-rich waters from the deep ocean would be pumped to the surface to spur phytoplankton growth.

So far, UC San Diego’s Levin said she is not convinced that these trials will lead to impactful carbon removal.

“I do not think the ocean is ever going to be a really large part of that solution,” she said. However, she added, “It might be part of the storage solution. Right now, people are looking at injecting carbon dioxide that’s removed from industry activities on land and transporting it to the ocean and injecting it into basalt.”

Levin said she’s also worried that we don’t know enough yet about the consequences of altering natural ocean processes.

“I am concerned about how many field trials would be required to actually understand what would happen, and whether we could truly understand the environmental risk of a fully scaled-up operation,” she said.

The experiment

Most marine carbon dioxide removal projects that have kicked off already are significantly larger in scale than the LOC-NESS experiment, which Subhas estimates will remove around 50 tons of CO2.

But, he emphasized, the goal of this project is not to compete in size or scale. He said the aim is to provide independent academic research that can help guide and inform the future of this industry and ensure it does not have negative repercussions on the marine environment.

There is some concern, he said, that commercial entities may pursue large-scale OAE initiatives to capitalize on the growing voluntary carbon market without first conducting adequate testing for safety and efficacy. Unlike those initiatives, there is no profit to be made from LOC-NESS. No carbon credits will be sold, Subhas said.

The project is funded by a collection of government and philanthropic sources, including the National Oceanic and Atmospheric Administration and the Carbon to Sea Initiative, a nonprofit that brings funders and scientists together to support marine carbon dioxide removal research and technology.

“We really feel like it’s necessary for the scientific community to be delivering transparent, trusted, and rigorous science to evaluate these things as these activities are currently happening and scaling in the ocean by the private sector,” Subhas said.

The LOC-NESS field trial in Wilkinson Basin will be the first “academic only” OAE experiment conducted from a ship in US waters. It is also the first of its kind to receive a permit from the Environmental Protection Agency under the Marine Protection, Research, and Sanctuaries Act.

“There’s no research in the past or planned that gets even close to providing a learning opportunity that this research is providing for OAE in the pelagic environment,” said Carbon to Sea Initiative’s Antonius Gagern, referring to the open sea experiment.

The permit was granted in April after a year of consultations between the EPA and other federal agencies.

During the process’ public comment periods, commenters expressed concerns about the potential impact on marine life, including the critically endangered North Atlantic right whales, small crustaceans that they eat called copepods, and larvae for the commercially important squid and mackerel fisheries. In a written response to some of these comments, the EPA stated that the small-scale project “demonstrates scientific rigor” and is “not expected to significantly affect human health, the marine environment, or other uses of the ocean.”

Subhas and his interdisciplinary team of chemists, biologists, engineers, and physicists from Woods Hole have spent the last few years planning this experiment and conducting a series of trials at their lab on Cape Cod to ensure they can safely execute and effectively monitor the results of the open-water test they will conduct this summer in the Gulf of Maine.

They specifically tested the effects of sodium hydroxide—an alkaline substance also known as lye or caustic soda—on marine microbes, phytoplankton, and copepods, a crucial food source for many marine species in the region in addition to the right whales. “We chose sodium hydroxide because it’s incredibly pure,” Subhas said. It’s widely used in the US to reduce acidity in drinking water.

It also helps counter ocean acidification, according to Subhas. “It’s like Tums for the ocean,” he said.

Ocean acidification occurs when the ocean absorbs excess carbon dioxide, causing its pH to drop. This makes it harder for corals, krill, and shellfish like oysters and clams to develop their hard calcium carbonate shells or skeletons.
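
Acidification is the same carbonate chemistry running in the other direction. In simplified form:

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}$$

Absorbed CO2 adds hydrogen ions, lowering pH and eating into the carbonate that calcifying organisms need; added alkalinity consumes hydrogen ions and nudges the system back the other way.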

This month, the team plans to release 50 tons of sodium hydroxide into a designated area of the Wilkinson Basin from the back of one of two research vessels participating in the LOC-NESS operation.

The basin is an ideal test site, according to Subhas, because there is little presence of phytoplankton, zooplankton, commercial fish larvae, and endangered species, including some whales, during this season. Still, as a precautionary measure, Woods Hole has contracted a protected species observer to keep a lookout for marine species and mitigate potential harm if any are spotted. That person will be on board as the vessel travels to and from the field trial site, including while the team releases the sodium hydroxide into the ocean.

The alkaline substance will be dispersed over four to 12 hours off the back of one of the research vessels, along with the nontoxic fluorescent red water tracer dye called rhodamine. The dye will help track the location and spread of the sodium hydroxide once released into the ocean, and the vessel’s wake will help mix the solution in with the ocean water.

After about an hour, Subhas said, it will form into a “pinkish” patch of water that can be picked up on satellites. “We’re going to be taking pictures from space and looking at how this patch sort of evolves, dilutes, and stretches and disperses over time.”

For a week after that, scientists aboard the vessels will take rotating shifts to collect data around the clock. They will deploy drones and analyze over 20 types of samples from the research vessel to monitor how the surrounding waters and marine life respond to the experiment. They’ll track changes in ocean chemistry, nutrient levels, plankton populations, and water clarity, while also measuring acidity and dissolved CO2.

In March, the team did a large-scale dry run of the dispersal at an open-air testing facility on a naval base in New Jersey. According to Subhas, the trial demonstrated the team’s ability to safely and effectively deliver alkalinity to surface seawater.

“The next step is being able to measure the carbon uptake from seawater—from the atmosphere into seawater,” he said. That is a slower process. He said he expects to have some preliminary results on carbon uptake, as well as environmental impacts, early next year.

This story originally appeared on Inside Climate News.

NASA plans to build a nuclear reactor on the Moon—a space lawyer explains why

These sought-after regions are scientifically vital and geopolitically sensitive, as multiple countries want to build bases or conduct research there. Building infrastructure in these areas would cement a country’s ability to access the resources there and potentially exclude others from doing the same.

Critics may worry about radiation risks. Even if designed for peaceful use and contained properly, reactors introduce new environmental and operational hazards, particularly in a dangerous setting such as space. But the UN guidelines do outline rigorous safety protocols, and following them could potentially mitigate these concerns.

Why nuclear? Because solar has limits

The Moon has little atmosphere and experiences 14-day stretches of darkness. In some shadowed craters, where ice is likely to be found, sunlight never reaches the surface at all. These issues make solar energy unreliable, if not impossible, in some of the most critical regions.

A small lunar reactor could operate continuously for a decade or more, powering habitats, rovers, 3D printers, and life-support systems. Nuclear power could be the linchpin for long-term human activity. And it’s not just about the Moon—developing this capability is essential for missions to Mars, where solar power is even more constrained.

The UN Committee on the Peaceful Uses of Outer Space sets guidelines to govern how countries act in outer space. United States Mission to International Organizations in Vienna. Credit: CC BY-NC-ND

A call for governance, not alarm

The United States has an opportunity to lead not just in technology but in governance. If it commits to sharing its plans publicly, following Article IX of the Outer Space Treaty and reaffirming a commitment to peaceful use and international participation, it will encourage other countries to do the same.

The future of the Moon won’t be determined by who plants the most flags. It will be determined by who builds what, and how. Nuclear power may be essential for that future. Building transparently and in line with international guidelines would allow countries to more safely realize that future.

A reactor on the Moon isn’t a territorial claim or a declaration of war. But it is infrastructure. And infrastructure will be how countries display power—of all kinds—in the next era of space exploration.

Michelle L.D. Hanlon, Professor of Air and Space Law, University of Mississippi. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Encryption made for police and military radios may be easily cracked


An encryption algorithm can have weaknesses that could allow an attacker to listen in.

Two years ago, researchers in the Netherlands discovered an intentional backdoor in an encryption algorithm baked into radios used by critical infrastructure—as well as police, intelligence agencies, and military forces around the world—that made any communication secured with the algorithm vulnerable to eavesdropping.

When the researchers publicly disclosed the issue in 2023, the European Telecommunications Standards Institute (ETSI), which developed the algorithm, advised anyone using it for sensitive communication to deploy an end-to-end encryption solution on top of the flawed algorithm to bolster the security of their communications.

But now the same researchers have found that at least one implementation of the end-to-end encryption solution endorsed by ETSI has a similar issue that makes it equally vulnerable to eavesdropping. The encryption algorithm used for the device they examined starts with a 128-bit key, but this gets compressed to 56 bits before it encrypts traffic, making it easier to crack. It’s not clear who is using this implementation of the end-to-end encryption algorithm, nor if anyone using devices with the end-to-end encryption is aware of the security vulnerability in them.

The end-to-end encryption the researchers examined, which is expensive to deploy, is most commonly used in radios for law enforcement agencies, special forces, and covert military and intelligence teams that are involved in national security work and therefore need an extra layer of security. But ETSI’s endorsement of the algorithm two years ago to mitigate flaws found in its lower-level encryption algorithm suggests it may be used more widely now than at the time.

In 2023, Carlo Meijer, Wouter Bokslag, and Jos Wetzels of security firm Midnight Blue, based in the Netherlands, discovered vulnerabilities in encryption algorithms that are part of a European radio standard created by ETSI called TETRA (Terrestrial Trunked Radio), which has been baked into radio systems made by Motorola, Damm, Sepura, and others since the ’90s. The flaws remained unknown publicly until their disclosure, because ETSI refused for decades to let anyone examine the proprietary algorithms. The end-to-end encryption the researchers examined recently is designed to run on top of TETRA encryption algorithms.

The researchers found the issue with the end-to-end encryption (E2EE) only after extracting and reverse-engineering the E2EE algorithm used in a radio made by Sepura. The researchers plan to present their findings today at the Black Hat security conference in Las Vegas.

ETSI, when contacted about the issue, noted that the end-to-end encryption used with TETRA-based radios is not part of the ETSI standard, nor was it created by the organization. Instead it was produced by The Critical Communications Association’s (TCCA) security and fraud prevention group (SFPG). But ETSI and TCCA work closely with one another, and the two organizations include many of the same people. Brian Murgatroyd, former chair of the technical body at ETSI responsible for the TETRA standard as well as the TCCA group that developed the E2EE solution, wrote in an email on behalf of ETSI and the TCCA that end-to-end encryption was not included in the ETSI standard “because at the time it was considered that E2EE would only be used by government groups where national security concerns were involved, and these groups often have special security needs.”

For this reason, Murgatroyd noted that purchasers of TETRA-based radios are free to deploy other solutions for end-to-end encryption on their radios, but he acknowledges that the one produced by the TCCA and endorsed by ETSI “is widely used as far as we can tell.”

Although TETRA-based radio devices are not used by police and military in the US, the majority of police forces around the world do use them. These include police forces in Belgium and Scandinavian countries, as well as Eastern European countries like Serbia, Moldova, Bulgaria, and Macedonia, and in the Middle East in Iran, Iraq, Lebanon, and Syria. The Ministries of Defense in Bulgaria, Kazakhstan, and Syria also use them, as do the Polish military counterintelligence agency, the Finnish defense forces, and Lebanon and Saudi Arabia’s intelligence services. It’s not clear, however, how many of these also deploy end-to-end decryption with their radios.

The TETRA standard includes four encryption algorithms—TEA1, TEA2, TEA3, and TEA4—that can be used by radio manufacturers in different products, depending on the intended customer and usage. The algorithms have different levels of security based on whether the radios will be sold in or outside Europe. TEA2, for example, is restricted for use in radios used by police, emergency services, military, and intelligence agencies in Europe. TEA3 is available for police and emergency services radios used outside Europe but only in countries deemed “friendly” to the EU. Only TEA1 is available for radios used by public safety agencies, police agencies, and militaries in countries deemed not friendly to Europe, such as Iran. But it’s also used in critical infrastructure in the US and other countries for machine-to-machine communication in industrial control settings such as pipelines, railways, and electric grids.

All four TETRA encryption algorithms use 80-bit keys to secure communication. But the Dutch researchers revealed in 2023 that TEA1 has a feature that causes its key to get reduced to just 32 bits, which allowed the researchers to crack it in less than a minute.
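
To see why these key reductions matter, here is a minimal sketch of the brute-force arithmetic. The guess rate of one trillion keys per second is an assumption chosen purely for illustration; real attack cost depends on hardware and on how cheaply a single guess can be tested:

```python
# Worst-case brute-force time for the key sizes discussed in the article.
# The guess rate below is an illustrative assumption, not a measured figure.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_exhaust(key_bits: int, guesses_per_second: float = 1e12) -> float:
    """Return the worst-case years needed to try every key of `key_bits` bits."""
    return 2**key_bits / guesses_per_second / SECONDS_PER_YEAR

for bits in (32, 56, 80, 128):
    print(f"{bits:>3}-bit keyspace: {years_to_exhaust(bits):.3e} years")
```

At that assumed rate, a 32-bit keyspace falls in a few milliseconds and a 56-bit keyspace in well under a day, while exhausting an 80-bit keyspace would take tens of thousands of years and a 128-bit keyspace far longer than the age of the universe.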

In the case of the E2EE, the researchers found that the implementation they examined starts with a key that is more secure than ones used in the TETRA algorithms, but it gets reduced to 56 bits, which would potentially let someone decrypt voice and data communications. They also found a second vulnerability that would let someone send fraudulent messages or replay legitimate ones to spread misinformation or confusion to personnel using the radios.

The ability to inject voice traffic and replay messages affects all users of the TCCA end-to-end encryption scheme, according to the researchers. They say this is the result of flaws in the TCCA E2EE protocol design rather than a particular implementation. They also say that “law enforcement end users” have confirmed to them that this flaw is in radios produced by vendors other than Sepura.

But the researchers say only a subset of end-to-end encryption users are likely affected by the reduced-key vulnerability because it depends on how the encryption was implemented in radios sold to various countries.

ETSI’s Murgatroyd said in 2023 that the TEA1 key was reduced to meet export controls for encryption sold to customers outside Europe. He said when the algorithm was created, a key with 32 bits of entropy was considered secure for most uses. Advances in computing power make it less secure now, so when the Dutch researchers exposed the reduced key two years ago, ETSI recommended that customers using TEA1 deploy TCCA’s end-to-end encryption solution on top of it.

But Murgatroyd said the end-to-end encryption algorithm designed by TCCA is different. It doesn’t specify the key length the radios should use because governments using the end-to-end encryption have their own “specific and often proprietary security rules” for the devices they use. Therefore they are able to customize the TCCA encryption algorithm in their devices by working with their radio supplier to select the “encryption algorithm, key management and so on” that is right for them—but only to a degree.

“The choice of encryption algorithm and key is made between supplier and customer organisation, and ETSI has no input to this selection—nor knowledge of which algorithms and key lengths are in use in any system,” he said. But he added that radio manufacturers and customers “will always have to abide by export control regulations.”

The researchers say they cannot verify that the TCCA E2EE doesn’t specify a key length because the TCCA documentation describing the solution is protected by a nondisclosure agreement and provided only to radio vendors. But they note that the E2EE system calls out an “algorithm identifier” number, which specifies the particular algorithm being used for the end-to-end encryption. These identifiers are not vendor-specific, the researchers say, which suggests the identifiers refer to different key variants produced by TCCA—meaning TCCA provides specifications for algorithms that use a 128-bit key or a 56-bit key, and radio vendors can configure their devices to use either of these variants, depending on the export controls in place for the purchasing country.

Whether users know their radios could have this vulnerability is unclear. The researchers found a confidential 2006 Sepura product bulletin that someone leaked online, which mentions that “the length of the traffic key … is subject to export control regulations and hence the [encryption system in the device] will be factory configured to support 128, 64, or 56 bit key lengths.” But it’s not clear what Sepura customers receive or if other manufacturers whose radios use a reduced key disclose to customers if their radios use a reduced-key algorithm.

“Some manufacturers have this in brochures; others only mention this in internal communications, and others don’t mention it at all,” says Wetzels. He says they did extensive open-source research to examine vendor documentation and “found no clear sign of weakening being communicated to end users. So while … there are ‘some’ mentions of the algorithm being weakened, it is not fully transparent at all.”

Sepura did not respond to an inquiry from WIRED.

But Murgatroyd says that because government customers who have opted to use TCCA’s E2EE solution need to know the security of their devices, they are likely to be aware if their systems are using a reduced key.

“As end-to-end encryption is primarily used for government communications, we would expect that the relevant government National Security agencies are fully aware of the capabilities of their end-to-end encryption systems and can advise their users appropriately,” Murgatroyd wrote in his email.

Wetzels is skeptical of this, however. “We consider it highly unlikely non-Western governments are willing to spend literally millions of dollars if they know they’re only getting 56 bits of security,” he says.

This story originally appeared at WIRED.com.

National Academies to fast-track a new climate assessment

The nation’s premier group of scientific advisers announced Thursday that it will conduct an independent, fast-track review of the latest climate science. It will do so with an eye to weighing in on the Trump administration’s planned repeal of the government’s 2009 determination that greenhouse gas emissions harm human health and the environment.

The move by the National Academies of Sciences, Engineering, and Medicine to self-fund the study is a departure from their typical practice of responding to requests by government agencies or Congress for advice. The Academies intend to publicly release it in September, in time to inform the Environmental Protection Agency’s decision on the so-called “endangerment finding,” they said in a prepared statement.

“It is critical that federal policymaking is informed by the best available scientific evidence,” said Marcia McNutt, president of the National Academy of Sciences. “Decades of climate research and data have yielded expanded understanding of how greenhouse gases affect the climate. We are undertaking this fresh examination of the latest climate science in order to provide the most up-to-date assessment to policymakers and the public.”

The Academies are private, nonprofit institutions that operate under an 1863 congressional charter, signed by President Abraham Lincoln, directing them to provide independent, objective analysis and advice to inform public policy decisions.

The Trump administration’s move to rescind the endangerment finding, announced last month, would eliminate the legal underpinning of the most important actions the federal government has taken on climate change—regulation of carbon pollution from motor vehicles and power plants under the Clean Air Act. Since assuming his role, EPA Administrator Lee Zeldin has made clear he intends to repeal the climate rules that were put in place under the Biden administration, but his job will be far easier with the elimination of the endangerment finding.

The EPA based its proposal mainly on a narrow interpretation of the agency’s legal authority, but the agency also cited uncertainties in the science, pointing to a report published the same day by the Department of Energy that was authored by a hand-picked quintet of well-known skeptics of the mainstream consensus on climate change. The administration has given a short window of opportunity—30 days—for the public to respond to its endangerment finding proposal and to the DOE report on climate science.

The EPA did not immediately respond to a request for comment on the announcement by the National Academies. Critics of the Trump administration’s approach applauded the decision by the scientific panel.

“I think the National Academies have identified a very fundamental need that is not being met, which is the need for independent, disinterested expert advice on what the science is telling us,” said Bob Sussman, who served as deputy administrator of the EPA in the Clinton administration and was a senior adviser in the agency during the Obama administration.

Earlier Thursday, before the National Academies announcement, Sussman published a blog post on the Environmental Law Institute website calling for a “blue-ribbon review” of the science around the endangerment finding. Sussman noted the review of the state of climate science that the National Academies conducted in 2001 at the request of President George W. Bush’s administration. Since then, the Academies have conducted numerous studies on aspects of climate change, including the development of a “climate-ready workforce,” how to power AI sustainably, and emerging technologies for removing carbon from the atmosphere.

The National Academies announced in 2023 that they were developing a rapid response capacity to address the many emerging scientific policy issues the nation was facing. The first project they worked on was an assessment of the state of science around diagnostics for avian influenza.

Andrew Dessler, director of the Texas Center for Extreme Weather at Texas A&M University, said the new controversy that the Trump administration had stirred around climate science was a fitting subject for a fast-track effort by the National Academies.

“The National Academies [were] established exactly to do things like this—to answer questions of scientific importance for the government,” he said. “This is what the DOE should have done all along, rather than hire five people who represent a tiny minority of the scientific community and have views that virtually nobody else agrees with.”

Dessler is leading an effort to coordinate a response from the scientific community to the DOE report, which would also be submitted to the EPA. He said that he had heard from about 70 academics eager to participate after putting out a call on the social media network Bluesky. He said that work will continue because it appears to have a slightly different focus than the National Academies’ announced review, which does not mention the DOE report but instead concentrates on the scientific evidence of harms from greenhouse gas emissions that has emerged since 2009, the year the EPA adopted the endangerment finding.

This story originally appeared on Inside Climate News.

President Trump says Intel’s new CEO “must resign immediately”

Intel and the White House did not immediately respond to a request for comment on Trump’s post. Intel shares dropped 3 percent in pre-market trading in New York.

Tan was appointed as Intel CEO in March after the Silicon Valley company’s board ousted his predecessor, Pat Gelsinger, in December.

Intel is the only US-headquartered company capable of producing advanced semiconductors, though it has so far largely missed out on the current boom for artificial intelligence chips. It has been awarded billions of dollars in US government subsidies and loans to support its chip manufacturing business, which has fallen far behind its rival Taiwan Semiconductor Manufacturing Company.

However, amid a radical cost-cutting program, Tan warned last month that Intel might be forced to abandon development of its next-generation manufacturing technology if it were unable to secure a “significant external customer.” Such a move would hand a virtual monopoly of leading-edge chipmaking to TSMC.

“Intel is required to be a responsible steward of American taxpayer dollars and to comply with applicable security regulations,” Cotton wrote in Tuesday’s letter to Intel’s board chair, Frank Yeary. “Mr Tan’s associations raise questions about Intel’s ability to fulfill these obligations.”

Additional reporting by Demetri Sevastopulo.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

States take the lead in AI regulation as federal government steers clear

AI in health care

In the first half of 2025, 34 states introduced over 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers’ use of AI, and clinicians’ use of AI.

Bills about transparency define requirements for the information that AI system developers and the organizations that deploy the systems must disclose.

Consumer protection bills aim to keep AI systems from unfairly discriminating against some people and ensure that users of the systems have a way to contest decisions made using the technology.

Bills covering insurers provide oversight of the payers’ use of AI to make decisions about health care approvals and payments. And bills about clinical uses of AI regulate use of the technology in diagnosing and treating patients.

Facial recognition and surveillance

In the US, a long-standing legal doctrine that applies to privacy protection issues, including facial surveillance, is to protect individual autonomy against interference from the government. In this context, facial recognition technologies pose significant privacy challenges as well as risks from potential biases.

Facial recognition software, commonly used in predictive policing and national security, has exhibited biases against people of color and consequently is often considered a threat to civil liberties. A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software poses significant challenges for Black people and other historically disadvantaged minorities. Facial recognition software was less likely to correctly identify darker faces.

Bias also creeps into the data used to train these algorithms, for example, when the teams that guide the development of such facial recognition software lack diversity.

By the end of 2024, 15 states in the US had enacted laws to limit the potential harms from facial recognition. Elements of these state-level regulations include requirements that vendors publish bias test reports and disclose their data management practices, as well as mandates for human review in the use of these technologies.
