Author name: Shannon Garcia


OTC nasal spray seemed to cut COVID infections by 67% in mid-sized trial

COVID context

Like all trials, this one has limitations. As mentioned, the number of infections here is small—the impressive efficacy numbers could potentially vanish in a larger trial with more infections. And while the trial had a high-quality design, it was undertaken in just one location in Germany and mostly involved healthy white women between the ages of 20 and 46, so the findings are not generalizable. The study was also funded by a pharmaceutical company that makes an azelastine nasal spray (though not the one that is sold over the counter in the US).

Still, with the previous studies, the trial offers some hope that this accessible nasal spray could be used as a viral prophylactic for respiratory seasons in the future. And the results land at a time when access to COVID-19 vaccines—which have firmly proven to be safe and highly effective—has been severely restricted in the US by health secretary and anti-vaccine activist Robert F. Kennedy Jr.

As it stands now, it appears that only people ages 65 and over, and those at higher risk of COVID-19, will have access to the shots this year, though some aspects of that access are murky, including how people will prove they’re at high risk. For healthy children, teens, and adults under 65, there may be no access or extremely limited access. That includes groups that medical experts recommend get vaccinated, namely healthy pregnant people and children ages 6 months to 23 months, both of which experts consider at high risk from COVID-19 but federal guidance under Kennedy does not. Experts also recommend access for healthy people who have contact with vulnerable people, such as cancer doctors, people who live with immunocompromised family members, and people who work in nursing homes.

With limited vaccine access and the normal slew of respiratory viruses on the horizon, a simple nasal spray is an appealing addition to the defenses. The main side effects are fairly minor, including a bitter taste in the mouth, nosebleeds, and tiredness.



Delete, Delete, Delete: How FCC Republicans are killing rules faster than ever


FCC speeds up rule-cutting, giving the public as little as 10 days to file objections.

FCC Chairman Brendan Carr testifies before the House Appropriations Subcommittee on Financial Services and General Government on May 21, 2025 in Washington, DC. Credit: Getty Images | John McDonnell

The Federal Communications Commission’s Republican chairman is eliminating regulations at breakneck speed by using a process that cuts dozens of rules at a time while giving the public only 10 or 20 days to review each proposal and submit objections.

Chairman Brendan Carr started his “Delete, Delete, Delete” rule-cutting initiative in March and later announced he’d be using the Direct Final Rule (DFR) mechanism to eliminate regulations without a full public-comment period. Direct Final Rule is just one of several mechanisms the FCC is using in the Delete, Delete, Delete initiative. But despite the seeming obscurity of regulations deleted under Direct Final Rule so far, many observers are concerned that the process could easily be abused to eliminate more significant rules that protect consumers.

On July 24, the FCC removed what it called “11 outdated and useless rule provisions” related to telegraphs, rabbit-ear broadcast receivers, and phone booths. The FCC said the 11 provisions consist of “39 regulatory burdens, 7,194 words, and 16 pages.”

The FCC eliminated these rules without the “prior notice and comment” period typically used to comply with the US Administrative Procedure Act (APA), with the FCC finding that it had “good cause” to skip that step. The FCC said it would allow comment for 10 days and that rule eliminations would take effect automatically after the 10-day period unless the FCC concluded that it received “significant adverse comments.”

On August 7, the FCC again used Direct Final Rule to eliminate 98 rules and requirements imposed on broadcasters. This time, the FCC allowed 20 days for comment. But it maintained its stance that the rules would be deleted automatically at the end of the period if no “significant” comments were received.

By contrast, FCC rulemakings usually allow 30 days for initial comments and another 15 days for reply comments. The FCC then considers the comments, responds to the major issues raised, and drafts a final proposal that is put up for a commission vote. This process, which takes months and gives both the public and commissioners more opportunity to consider the changes, can apply both to the creation of new rules and the elimination of existing ones.

FCC’s lone Democrat warns of “Trojan horse”

Telecom companies want the FCC to eliminate rules quickly. As we’ve previously written, AT&T submitted comments to the Delete, Delete, Delete docket urging the agency to eliminate rules that can result in financial penalties “without the delay imposed by notice-and-comment proceeding.”

Carr’s use of Direct Final Rule has drawn criticism from advocacy groups, local governments that could be affected by rule changes, and the FCC’s only Democratic commissioner. Anna Gomez, the lone FCC Democrat, told Ars in a phone interview that the rapid rule-cutting method “could be a Trojan horse because what we did, or what the commission did, is it adopted a process without public comment to eliminate any rule it finds to be outdated and, crucially, unwarranted. We don’t define what either of those terms mean, which therefore could lead to a situation that’s ripe for abuse.”

Gomez said she’d “be concerned if we eliminated rules that are meant to protect or inform consumers, or to promote competition, such as the broadband labels. This commission seems to have entirely lost its focus on consumers.”

Gomez told us that she doesn’t think a 10-day comment period is ever appropriate and that Carr seems to be trying “to meet some kind of arbitrary rule reduction quota.” If the rules being eliminated are truly obsolete, “then what’s the rush?” she asked. “If we don’t give sufficient time for public comment, then what happens when we make a mistake? What happens when we eliminate rules and it turns out, in fact, that these rules were important to keep? That’s why we give the public due process to comment on when we adopt rules and when we eliminate rules.”

Gomez hasn’t objected to the specific rules deleted under this process so far, but she spoke out against Carr’s approach both times the Direct Final Rule mechanism was used. “I told the chairman that I could support initiating a proceeding to look at how a Direct Final Rule process could be used going forward and including a Notice of Proposed Rulemaking proposing to eliminate the rules the draft order purports to eliminate today. That offer was declined,” she said in her dissenting statement in the July vote.

Gomez said that rules originally adopted under a notice-and-comment process should not be eliminated “without seeking public comment on appropriate processes and guardrails.” She added that the “order does not limit the Direct Final Rule process to elimination of rules that are objectively obsolete with a clear definition of how that will be applied, asserting instead authority to remove rules that are ‘outdated or unwarranted.'”

Local governments object

Carr argued that the Administrative Procedure Act “gives the commission the authority to fast-track the elimination of rules that inarguably fail to serve the public interest. Using this authority, the Commission can forgo the usual prior notice and public comment period before repealing the rules for these bygone regulations.”

Carr justified the deletions by saying that “outdated and unnecessary regulations from Washington often derail efforts to build high-speed networks and infrastructure across the country.” It’s not clear why the specific rule deletions were needed to accelerate broadband deployment, though. As Carr said, the FCC’s first use of Direct Final Rule targeted regulations for “telegraph services, rabbit-ear broadcast receivers, and telephone booths—technologies that were considered outdated decades ago.”

Carr’s interpretation of the Administrative Procedure Act is wrong, said an August 6 filing submitted by local governments in Maryland, Massachusetts, the District of Columbia, Oregon, Virginia, California, New York, and Texas. Direct Final Rule “is intended for extremely simple, non-substantive decisions,” and the FCC process “is insufficient to ensure that future Commission decisions will fall within the good cause exception of the Administrative Procedure Act,” the filing said.

Local governments argued that “the new procedure is itself a substantive decision” and should be subject to a full notice-and-comment rulemaking. “The procedure adopted by the Commission makes it almost inevitable that the Commission will adopt rule changes outside of any APA exceptions,” the filing said.

The FCC could face court challenges. Gerard Lavery Lederer, a lawyer for the local government coalition, told Ars, “we fully anticipate that Chairman Carr and the FCC’s general counsel will take our concerns seriously.” But he also said local governments are worried about the FCC adopting industry proposals that “violate local government rights as preserved by Congress in the [Communications] Act” or that have “5th Amendment takings implications and/or 10th Amendment overreach issues.”

Is that tech really “obsolete”?

At least some rules targeted for deletion, like regulations on equipment used by radio and TV broadcast stations, may seem too arcane to care about. But a coalition of 22 public interest, civil rights, labor, and digital rights groups argued in a July 17 letter to Carr that some of the rule deletions could harm vulnerable populations and that the shortened comment period wasn’t long enough to determine the impact.

“For example, the Commission has targeted rules relating to calling cards and telephone booths in the draft Order as ‘obsolete,'” the letter said. “However, calling cards and pay phones remain important technologies for rural areas, immigrant communities, the unhoused, and others without reliable access to modern communications services. The impact on these communities is not clear and will not likely be clear in the short time provided for comment.”

The letter also said the FCC’s new procedure “would effectively eliminate any hope for timely judicial review of elimination of a rule on delegated authority.” Actions taken via delegated authority are handled by FCC bureaus without a vote of the commission.

So far, Carr has held commission votes for his Direct Final Rule actions rather than letting FCC bureaus issue orders themselves. But in the July order, the FCC said its bureaus and offices have previously adopted or repealed rules without notice and comment and “reaffirm[ed] that all Bureaus and Offices may continue to take such actions in situations that are exempt from the APA’s notice-and-comment requirements.”

“This is about pushing boundaries”

The advocacy groups’ letter said that delegating authority to bureaus “makes judicial review virtually impossible, even though the order goes into effect immediately.” Parties impacted by actions made on delegated authority can’t go straight to the courts and must instead “file an application for review with the Commission as a prerequisite to any petition for judicial review,” the letter said. The groups argued that “a Chairman that does not wish to permit judicial review of elimination of a rule through DFR may order a bureau to remove the rule, then simply refuse to take action on the application for review.”

The letter was signed by Public Knowledge; Asian Americans Advancing Justice-AAJC; the Benton Institute for Broadband & Society; the Center for Digital Democracy; Common Sense Media; the Communications Workers of America; the Electronic Privacy Information Center; HTTP; LGBT Tech; the Media Access Project; MediaJustice; the Multicultural Media, Telecom and Internet Council; the National Action Network; NBJC; the National Council of Negro Women; the National Digital Inclusion Alliance; the National Hispanic Media Coalition; the National Urban League; New America’s Open Technology Institute (OTI); The Leadership Conference on Civil and Human Rights; the United Church of Christ Media Justice Ministry; and UnidosUS.

Harold Feld, senior VP of consumer advocacy group Public Knowledge, told Ars that the FCC “has a long record of thinking that things are obsolete and then discovering when they run an actual proceeding that there are people still using these things.” Feld is worried that the Direct Final Rule process could be used to eliminate consumer protections that apply to old phone networks when they are replaced by either fiber or wireless service.

“I certainly think that this is about pushing boundaries,” Feld said. When there’s a full notice-and-comment period, the FCC has to “actually address every argument made” before eliminating a rule. When the FCC provides less explanation of a decision, that “makes it much harder to challenge on appeal,” he said.

“Once you have this tool that lets you just get rid of rules without the need to do a proceeding, without the need to address the comments that are raised in that proceeding… it’s easy to see how this ramps up and how hard it is for people to stay constantly alert to look for an announcement where they will then only have 10 days to respond once it gets published,” he said.

What is a “significant” comment?

The FCC says its use of Direct Final Rule is guided by December 2024 recommendations from the Administrative Conference of the United States (ACUS), a government agency. But the FCC didn’t implement Direct Final Rule in the exact way recommended by the ACUS.

The ACUS said its guidance “encourages agencies to use direct final rulemaking, interim final rulemaking, and alternative methods of public engagement to ensure robust public participation even when they rely properly on the good cause exemption.” But the ACUS recommended taking public comment for at least 30 days, while the FCC has used 10- and 20-day periods.

The ACUS also said that agencies should only move ahead with rule deletions “if no significant adverse comments are received.” If such comments are received, the agency “can either withdraw the rule or publish a regular proposed rule that is open for public comment,” the recommendation said.

The FCC said that if it receives comments, “we will evaluate whether they are significant adverse comments that warrant further procedures before changing the rules.” The letter from the 22 advocacy groups expressed worry about the leeway the FCC is giving itself in defining whether a comment is adverse and significant:

Although ACUS recommends that the agency revert to standard notice-and-comment rulemaking in the event of a single adverse comment, the draft Order requires multiple adverse comments—at which point the bureau/Commission will consider whether to shift to notice-and-comment rulemaking. If the bureau/Commission decides that adverse comments are not ‘substantive,’ it will explain its determination in a public notice that will not be filed in the Federal Register. The Commission states that it will be guided, but not bound, by the definition of ‘adverse comment’ recommended by ACUS.

Criticism from many corners

TechFreedom, a libertarian-leaning think tank, said it supports Carr’s goals in the “Delete, Delete, Delete” initiative but objected to the Direct Final Rule process. TechFreedom wrote in July comments that “deleting outdated regulations via a Direct Final Rule is unprecedented at the FCC.”

“No such process exists under current FCC rules,” the group said, urging the agency to seek public comment on the process. “If the Commission wishes to establish a new method by which it can eliminate existing regulations without undertaking a full rulemaking proceeding, it should open a docket specific to that subject and seek public comment,” the filing said.

TechFreedom said it is especially important for the FCC to “seek comment as to when the direct final rule procedures should be invoked… What is ‘routine,’ ‘insignificant,’ or ‘inconsequential’ and who is to decide—the Commissioners or the Bureau chiefs?”

The American Library Association and other groups wrote on August 14 that neither 10 nor 20 days is long enough for public comment. Moreover, the groups said the two Direct Final Rule actions so far “offer minimal explanation for why the rules are being removed. There is only one sentence describing elimination of many rules and each rule removal is described in a footnote with a parenthetical about the change. It is not enough.”

The Utility Reform Network offered similar objections about the process and said that the FCC declaring technologies to be “obsolete” and markets “outdated” without a detailed explanation “suggests the Commission’s view that these rules are not minor or technical changes but support a larger deregulatory effort that should itself be subject to notice-and-comment rulemaking.”

The National Consumer Law Center and other groups said that “rushing regulatory changes as proposed is likely illegal in many instances, counterproductive, and bad policy,” and that “changes to regulations should be effectuated only through careful, thoughtful, and considered processes.”

We contacted Chairman Carr’s office and did not receive a response.

FCC delegated key decisions to bureaus

Gomez told Ars that Direct Final Rule could serve a purpose “with the right procedures and guardrails in place.” For example, she said the quick rule deletions can be justified for eliminating rules that have become obsolete because of a court reversal or Congressional actions.

“I would argue that we cannot, under the Administrative Procedure Act and the Constitution, simply eliminate rules because we’ve made a judgment call that they are unwarranted,” she said. “That does not meet the good cause exemption to notice-and-comment requirements.”

Gomez also opposes FCC bureaus making significant decisions without a commission vote, which effectively gives Carr more power over the agency’s operations. For example, T-Mobile’s purchase of US Cellular’s wireless operations and Verizon’s purchase of Frontier were approved by the FCC at the Bureau level.

In another instance cited by Gomez, the FCC Media Bureau waived a requirement for broadcast licensees to file their biennial ownership reports for 18 months. “The waiver order, which was done at the bureau level on delegated authority, simply said ‘we find good cause to waive these rules.’ There was no analysis whatsoever,” Gomez said.

Gomez also pointed out that the Carr FCC’s Wireline Competition Bureau delayed implementation of certain price caps on prison phone services. The various bureau-level decisions are a “stretching of the guardrails that we have internally for when things should be done on delegated authority, and when they should be voted by the commission,” Gomez said. “I’m concerned that [Direct Final Rule] is just the next iteration of the same issue.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



Earth models can predict the planet’s future but not their own


One of the world’s foremost climate models now faces funding threats.

Credit: Jonathan Kitchen/Getty Images


In the 1960s, meteorologist Edward Lorenz was running weather simulations on an early computer system when he realized that a small rounding difference led to extremely divergent weather predictions. He later called this idea the butterfly effect to communicate that small changes in initial conditions, like a butterfly flapping its wings in Nepal, could produce wildly different outcomes, like rain in New York.
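A deliberately tiny numerical sketch (not Lorenz’s original setup, just an illustration of the same idea) makes the point concrete: two runs of the classic Lorenz system that start a millionth apart track each other briefly and then diverge.

```python
# Illustrative sketch only: two Lorenz-system runs whose starting points
# differ by one part in a million, integrated with a simple Euler step.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one forward-Euler time step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

a = (1.0, 1.0, 1.0)        # reference run
b = (1.000001, 1.0, 1.0)   # same run with a rounding-sized perturbation

for step in range(3001):
    if step % 500 == 0:
        separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}   separation = {separation:.6f}")
    a = lorenz_step(*a)
    b = lorenz_step(*b)
```

Early printouts show a separation of about 0.000001; by the end of the run the two “forecasts” typically bear no resemblance to each other.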

But better understanding those initial conditions and how the biological world couples with the atmospheric one can provide better predictions about the future of the planet—from where umbrellas may be most needed in a given season to where electricity needs might sap the grid.

Today, computers are much more powerful than when Lorenz was working, and scientists use a special kind of simulation that accounts for physics, chemistry, biology, and water cycles to try to grasp the past and predict the future. These simulations, called Earth system models, or ESMs, attempt to consider the planet as a system made up of components that nudge and shove each other. Scientists first developed physical climate models in the 1960s and 1970s, and became better at integrating atmospheric and ocean models in subsequent years. As both environmental knowledge and computing power increased, they began to sprinkle in the other variables, leading to current-day ESMs.

“It’s coupling together usually an atmosphere model, an ocean model, a sea ice model, land model, together to get a full picture of a physical system,” said David Lawrence, a senior scientist at the National Center for Atmospheric Research’s Climate and Global Dynamics Laboratory, which he noted was recently changed to the CGD Laboratory to remove the word climate. The models also move beyond the planet’s physical components, including chemistry and biology.
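To give a feel for what “coupling” means in code, here is a toy sketch, with made-up numbers and nothing like the structure of E3SM or CESM themselves: each component advances one time step, then hands its state or fluxes to the others through a simple coupler loop.

```python
# Toy coupler loop: an "atmosphere" and an "ocean" exchange a heat flux each
# time step. Real ESMs couple many components on 3D grids; this only shows
# the control flow, with invented numbers.

class ToyAtmosphere:
    def __init__(self):
        self.temp = 288.0  # global-mean air temperature in kelvin (made up)

    def step(self, ocean_heat_flux):
        self.temp += 0.001 * ocean_heat_flux  # warm if the ocean sends heat
        return self.temp

class ToyOcean:
    def __init__(self):
        self.sst = 290.0   # sea surface temperature in kelvin (made up)

    def step(self, air_temp):
        flux = self.sst - air_temp   # positive flux: ocean heats the atmosphere
        self.sst -= 0.01 * flux      # ocean loses the heat it gives up
        return flux

atmosphere, ocean = ToyAtmosphere(), ToyOcean()
flux = 0.0
for day in range(365):
    air_temp = atmosphere.step(flux)  # atmosphere advances with last flux
    flux = ocean.step(air_temp)       # ocean advances and returns a new flux
print(round(atmosphere.temp, 3), round(ocean.sst, 3))
```

The real models swap the toy arithmetic for physics, chemistry, and biology on global grids, but the basic pattern of components nudging each other each time step is the same.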

In doing so, ESMs can find surprising conclusions. In 2023, for instance, the Energy Exascale Earth System Model, or E3SM, which was built by the Department of Energy, found that in the simulation, the shapes of cavities in Antarctic ice significantly affect tides many miles away, along the North American coast. That hemisphere-separated connection is just one example of how including an unexpected variable can affect a real-world outcome, and just one of many examples to emerge from E3SM.

E3SM is one of the world’s premier Earth system models, one DOE has worked on for more than a decade, led by Lawrence Livermore National Laboratory in California. But as part of budget and programmatic cuts being proposed under the administration of President Donald Trump, E3SM and Earth system research are under threat: The model’s website has been scrubbed of some information, and proposed federal budgets have terminated its future use for climate-related activities—one of its core functions—though it’s unclear how exactly that will play out. Outside researchers could, of course, use the model to study any research questions they desire, provided they could get funding.

E3SM is much finer-grained than most such models, providing more tailored and accurate results over a given region. It’s used to predict extreme events, like floods, and unlike most other models, to understand how the climate interacts with the power system—like how that extreme weather may tax the grid or cause it to falter. Both kinds of studies matter to humans living their lives, in addition to weather wonks.

DOE has already announced about $100 million in funding between 2018 and 2022, according to publicly available statements Undark located, to enhance and improve the model. That sum doesn’t include the resources that would have gone into its initial development. Those more recent investments may now be in question. “There’s nothing definitive,” said Lawrence. But the agency’s proposed budget would decrease both funding and capability.

Meanwhile, experts say that funding cuts could mean modeling abilities migrate overseas, some science may never be realized, and expertise could be lost.

With that toss-off of talent, said Andrew Dessler, a professor of atmospheric sciences at Texas A&M University, countries like China may catch up to the US. “It would have been very hard for them to have a more respected scientific organization or scientific system than the US did,” Dessler said. “Our research universities are really the envy of the world, and our government labs are the envy of the world.”

But they won’t be, he said, if the country loses the expertise of those who work in them.

E3SM scientists want to understand how Earth changes over time and how much conditions vary within long-term projections—like, say, how average temperature may creep up over time, but extremely low temperatures blast Colorado nevertheless. Eventually, these scientists hope to incorporate enough chemistry, physics, and biology to create a “digital twin” of the planet—modeling Earth in a way true to its real form.

That’s a lofty goal, especially since reaching even the current, less twin-like stage took scientists more than 10 years of software development and tweaking. “The models are very big in terms of how much code there is,” said Lawrence, the earth system scientist at NCAR. (Through a spokesperson, Lawrence Livermore National Laboratory, whose scientists lead the model’s development, declined to comment for this story. “We aren’t able to offer interviews about E3SM at this time,” lab spokesperson Jeremy Thomas wrote in an email; he did not respond to an emailed question about why.)

Lawrence, though, knows this, as head of a similar project, called the Community Earth System Model, an early version of which served as a basis for E3SM.

Around 30 years ago, scientists at NCAR began working on the community model, building on an existing foundation at the agency. In building the community model, they collaborated with DOE researchers, and the agency co-sponsored the model. Later, though, DOE decided to pursue slightly different research priorities, according to Lawrence.

One of those priorities, which they started pursuing in 2014 with the official launch of the project, involved taking advantage of powerful computers. The agency, in addition to studying climate and energy, is also in charge of nuclear weapons. It possesses some of the world’s most powerful supercomputers to simulate those weapons’ inner workings and do science on the side.

DOE, true to its name, also wanted to focus on energy issues. Understanding the planet’s weather and water machinations is critical for, say, knowing how to cool power stations, or when temperatures might tax the grid. NCAR scientists were less focused on energy and didn’t have the same computational bite, according to Lawrence.

And so the two groups split. After around a decade of development, E3SM scientists achieved their main goal in 2023: a terrestrial simulation built for an exascale supercomputer (“exascale” means the supercomputer can do a quintillion calculations per second—millions of times faster than a laptop). After a review planned for later this year, the project is slated to begin its fourth iteration in 2026.
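The “millions of times faster” comparison holds up as rough arithmetic; the laptop figure below is an order-of-magnitude assumption on our part, not a number from the article:

\[
\frac{10^{18}\ \text{operations/s (exascale)}}{10^{11}\text{–}10^{12}\ \text{operations/s (typical laptop)}} \approx 10^{6}\text{–}10^{7}.
\]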

E3SM has been useful to DOE researchers but also to independent ones, who use the model to answer their own burning questions. Environmental researcher Yi Yao, for instance, used E3SM to understand how irrigation affects not just the planet but the people on it. “It’s very important to know that the human activities are altering the system, and it may cause some catastrophic consequences,” said Yao, a postdoc at ETH Zurich who, along with co-authors, published his study’s findings in Nature Communications.

Irrigation, he found, contributes to “moist heat”—essentially, humidity, natural and human-caused. “Farmers who were working in the field, their health—their life even—can be endangered by the moist heat,” he said, not something their employers generally forecast for when irrigating and planning operations. Irrigation, in fact, has been proposed as a strategy for managing heat, by cooling surface temperature, something his study shows wouldn’t be effective.

Importantly, Yao’s work compared results from a variety of ESMs. That’s common practice in the field, and part of why having multiple models is important. “Obviously the physics of the world, the biology of the world, the chemistry of the world, there’s just one version of it,” explained Lawrence. “But how you represent that is so complex that there is no one answer.” Interpolating between different answers helps scientists learn more than they might from a single model alone.

Other scientists have recently used E3SM to find that rising average temperatures can turn farmlands into carbon creators instead of carbon sinks, that intense rains push nutrients into the Gulf of Mexico, and that Pacific hurricanes that first speed west but then turn tail northward decrease the number of forest fires in the American southwest.

But beyond big climatic questions, Earth system models like E3SM are also useful on a more practical level. That’s especially true as scientists work to make them more reliable over time, “so you can really use them for making all sorts of decisions, whether it’s what you’re going to do for your summer vacation to how are you going to deal with sea-level rise in your region,” said Lawrence.

How useful and available American ESMs will be in the coming years, though, is a question of money and its disappearance. Overall, climate research at DOE has been in the crosshairs of the Trump administration. In the skinny budget request for the department’s Office of Science, the administration noted that “the Budget reduces funding for climate change and Green New Scam research,” referencing the proposed Green New Deal, with a cut of more than $1 billion to the DOE’s Office of Science.

According to the DOE’s recent budget request, though, E3SM will continue to exist, but seemingly without one of its primary raisons d’être. “Any Energy Exascale Earth System Model (E3SM) activities involving climate are terminated,” reads the 2025 budget request, although it is unclear how a climate model can skirt around the climate.

“I do not know to what extent we can say that a topic has nothing to do with climate,” said Yao in an email. “Considering that atmosphere is one important component of the Earth System, it would be very difficult to fully exclude climate.” He did note, though, that some studies are not dedicated to the impacts of climate change but, say, to ecological applications or hydrology. “I do not think it is appropriate to call them having nothing to do with climate but in these cases, they are not used for climate predictions,” he wrote.

The document earmarks “investments on further refinement of the science serving administration priorities,” and details technology that will be used to advance the model, like AI and more powerful computers. It doesn’t specify what goals that AI might serve, beyond enabling higher resolution.

An example of a high-resolution E3SM earth system model simulating the strongest hurricanes with surface winds exceeding 150 mph. This simulation shows how the surface temperature of the ocean evolves as a hurricane moves across the Atlantic and how the resultant cold wake affects the intensity of the next hurricane. Credit: LLNL

In 2026, the proposed budget decreases DOE funding for Earth and environmental system modeling from around $110 million to $30 million. “Funding will be consolidated under this subprogram to focus on supporting the administration’s highest priority research,” the document notes. It does not specify what those priorities are.

Meanwhile, the National Science Foundation’s budget request notes that its funding of NCAR, which oversees the lab Lawrence works for, will “curtail but continue to support research to refine weather and Earth system models and to better understand the evolution of wildland fires.” The federal government terminated a grant supporting an update to the model, although much of the work was already completed.

And the National Oceanic and Atmospheric Administration proposed a funding decrease of around 25 percent, with many of the cuts related to climate change. For many experts in the field, the future of this research can feel unpredictable, like the weather itself.

These cuts have scientists worried globally. In Europe, where Yao is based, what will become of the American ESMs is of great concern. “This is the topic of every lunch table here,” he said.

“It’s quite sad,” he added, “because the USA has always been a leader in the field.”

But it’s hard for US scientists to lead if they can’t describe their work, as some government guidance now forbids certain key terms. Indeed, according to a May report from a local newspaper, an internal publication from Lawrence Livermore noted that the laboratory “has been directed to reword or remove specific words and phrases from all external-facing media, web pages and public-facing communications.” Those terms included “climate change.” When asked by email about the report, Jeremy Thomas, a public information officer for Lawrence Livermore wrote, “We can’t comment on The Independent’s reporting.”

In the view of Dessler, the Texas-based professor, these cuts aren’t just climate-change denial on a scientific basis. “There’s a push to get rid of science that can be used to regulate,” he said—whether that has to do with pesticides or carbon.

But even if the models are curtailed in the US, options may exist to keep them sailing—by, for instance, duplicating their capabilities elsewhere. That has happened on the data side before: In the previous Trump administration, people feared the government would delete climate data, so people like John Baez, an emeritus mathematician at the University of California, Riverside who is now working at the University of Edinburgh, backed it up. In the current administration, others have leaped into action, creating archives like the Safeguarding Research & Culture project, which has collected a variety of datasets and publications—from satellite observations of coral reefs to space telescope observations of distant planets—and made sure they’re public and available.

Scientists could theoretically do something similar for ESMs. “You can reestablish that model,” Baez said. “So if some European government decides to take on responsibility for this exascale model, I can imagine that being done.” However, noted Lawrence, to be useful, a model needs to be accompanied by staff with the relevant scientific and technical expertise to run it.

To think that other countries could gather all those ingredients at once might be optimistic. “It’s not like this is the only responsibility that’s something being dropped in the lap of other countries,” said Baez, “and whether they will have the funds and the energy to pursue all of these, it’s actually unlikely.”

Dessler said that if E3SM disappears, or isn’t supported, people could use CESM, which has the same technological origins. Beyond that, said Dessler, other ESMs exist. And they’re still plenty advanced even if they’re not exascale.

To Dessler, the potential obsolescence of any given model is not the issue. “I think the much bigger problem is they’re just going to zero out the work being done at DOE on climate,” he said.

And that zeroing includes people. “What’s really chilling, I think, is the loss of human capital,” he said.

“You cannot generate a scientist out of thin air,” he continued. “It takes years to produce a scientist, and to produce a senior scientist takes decades. And so if you don’t have any senior scientists, you’re screwed for a very long time.”

To understand how that changing variable will affect the planet would likely require a model even more powerful than an ESM.

“I think that’s really the story,” Dessler said.

This article was originally published on Undark. Read the original article.



Starship’s heat shield appears to have performed quite well in test

One of the more curious aspects of the 10th flight of SpaceX’s Starship rocket on Tuesday was the striking orange discoloration of the second stage. This could be observed on video taken from a buoy near the landing site as the vehicle made a soft landing in the Indian Ocean.

This color—so different from the silvery skin and black tiles that cover Starship’s upper stage—led to all sorts of speculation. Had heating damaged the stainless steel skin? Had the vehicle’s tiles been shucked off, leaving behind some sort of orange adhesive material? Was this actually NASA’s Space Launch System in disguise?

The answer to this question was rather important, as SpaceX founder Elon Musk had said before this flight that gathering data about the performance of this heat shield was the most important aspect of the mission.

We got some answers on Thursday. During the afternoon, the company posted some new high-resolution photos, taken by a drone in the vicinity of the landing location. They offered a clear view of the Starship vehicle with its heat shield intact, albeit with a rust-colored tint.

Musk provided some clarity on this discoloration on Thursday evening, writing on the social media site X, “Worth noting that the heat shield tiles almost entirely stayed attached, so the latest upgrades are looking good! The red color is from some metallic test tiles that oxidized and the white is from insulation of areas where we deliberately removed tiles.”

The new images and information from Musk suggest that SpaceX is making progress on developing a heat shield for Starship. This really is the key technology to make an upper stage rapidly reusable—NASA’s space shuttle orbiters were reusable but required a standing army to refurbish the vehicle between flights. To unlock Starship’s potential, SpaceX wants to be able to refly Starships within 24 hours.



CDC slashed food safety surveillance, now tracks only 2 of 8 top infections

In July, the Centers for Disease Control and Prevention dramatically, but quietly, scaled back a food safety surveillance system, cutting active tracking from eight top foodborne infections down to just two, according to a report by NBC News.

The Foodborne Diseases Active Surveillance Network (FoodNet)—a network of surveillance sites that spans 10 states and covers about 54 million Americans (16 percent of the US population)—previously included active monitoring for infections from eight pathogens: Campylobacter, Cyclospora, Listeria, Salmonella, Shiga toxin-producing E. coli (STEC), Shigella, Vibrio, and Yersinia.

Now the network is only monitoring for STEC and Salmonella.

A list of talking points the CDC sent the Connecticut health department (which is part of FoodNet) suggested that a lack of funding is behind the scaleback. “Funding has not kept pace with the resources required to maintain the continuation of FoodNet surveillance for all eight pathogens,” the CDC document said, according to NBC. The Trump administration has made brutal cuts to federal agencies, including the CDC, which has lost hundreds of employees this year.

A CDC spokesperson told the outlet that “Although FoodNet will narrow its focus to Salmonella and STEC, it will maintain both its infrastructure and the quality it has come to represent. Narrowing FoodNet’s reporting requirements and associated activities will allow FoodNet staff to prioritize core activities.”



Are They Starting To Take Our Jobs?

Is generative AI making it harder for young people to find jobs?

My answer is:

  1. Yes, definitely, in terms of finding and getting hired for any given job that exists. That’s getting harder. AI is most definitely screwing up that process.

  2. Yes, probably, in terms of employment in automation-impacted sectors. It always seemed odd to think otherwise, and this week’s new study has strong evidence here.

  3. Maybe, overall, in terms of the jobs available (excluding search and matching effects from #1), because AI should be increasing employment in areas not being automated yet, and that effect can be small and still dominate.

The claims go back and forth on the employment effects of AI. As Derek Thompson points out, if you go by articles in the popular press, we’ve gone from ‘possibly’ to ‘definitely yes’ to ‘almost certainly no’ to what Derek describes as this week’s ‘plausibly yes,’ which others are treating as stronger than that.

Derek Thompson: To be honest with you, I considered this debate well and truly settled. No, I’d come to think, AI is probably not wrecking employment for young people. But now, I’m thinking about changing my mind again.

It’s weird to pull an ‘I told you all so’ when what you said was ‘I am confused and you all are overconfident,’ but yeah, basically. The idea that this was ‘well and truly settled’ always seemed absurd to me even considering present effects; none of these arguments should have filled anyone with confidence, and neither should the new one. And this is AI, so even if it definitively wasn’t happening now, who knows where we would be six months later.

People changing their minds a lot reflects, as Derek notes, the way discovery, evaluation, discourse and science are supposed to work, except for the overconfidence.

Most recently before this week we had claims that what looks like effects of AI automation are delayed impacts from Covid, various interest rate changes, existing overhiring or other non-AI market trends.

The new hotness is this new Stanford study from Brynjolfsson, Chandar and Chen:

This paper examines changes in the labor market for occupations exposed to generative artificial intelligence using high-frequency administrative data from the largest payroll software provider in the United States.

We present six facts that characterize these shifts. We find that since the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment even after controlling for firm-level shocks.

In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow.

We also find that adjustments occur primarily through employment rather than compensation. Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor. Our results are robust to alternative explanations, such as excluding technology-related firms and excluding occupations amenable to remote work.

Effects acting through employment rather than compensation makes sense since the different fields are competing against each other for labor and wages are sticky downwards even across workers.

Bharat Chandar (author): We observe millions of workers each month. We use this to cut the data finely by age and occ.

What do we find?

Stories about young SW developers struggling to find work borne out in data

Employment for 22-25 y/o developers ⬇️ ~20% from peak in 2022. Older ages show steady rise.

This isn’t just about software. See a similar pattern for customer service reps, another job highly exposed to AI. For both roles, the decline is sharpest for the 22-25 age group, with older, more experienced workers less affected.

In contrast, jobs less exposed to AI, like health aides, show the opposite trend. These jobs, which require in-person physical tasks, have seen the fastest employment growth among youngest workers.

Overall, job market for entry-level workers has been stagnant since late 2022, while market for experienced workers remains robust. Stagnation for young workers driven by declines in AI-exposed jobs. Of course, lots of changes in the economy, so this is not all caused by AI.

Note the y-axis scale on the graphs, but that does seem like a definitive result. It seems far too fast and targeted to be the result of non-AI factors.

John Burn-Murdoch: Very important paper, for two reasons:

  1. Key finding: employment *is* falling in early-career roles exposed to LLM automation

  2. Shows that administrative data (millions of payroll records) is much better than survey data for questions requiring precision (occupation x age)

There’s always that battle between ‘our findings are robust to various things’ and ‘your findings go away when you account for this particular thing in this way,’ and different findings appear to contradict.

I don’t know for sure who is right, but I was convinced by their explanation of why they have better data sources and thus they’re right and the FT study was wrong, in terms of there being relative entry-level employment effects that vary based on the amount of automation in each sector.

Areas with automation from AI saw job losses at entry level, whereas areas with AI amplification saw job gains, but we should expect more full automation over time.

There’s the additional twist that a 13 percent decline in employment for the AI-exposed early-career jobs does not mean work is harder to find. Everyone agrees AI will automate away some jobs. The bull case for employment is not that those jobs don’t go away. It is that those jobs are replaced by other jobs. So the 13% could be an 11% decline in some areas and a 2% increase in other larger areas, where they cancel out. AI is driving substantial economic growth already which should create jobs. We can’t tell.
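A back-of-the-envelope sketch shows how these numbers can coexist; the sector split below is an assumption for illustration, not a figure from the Stanford paper:

```python
# Illustrative arithmetic only: a ~13% *relative* decline for AI-exposed
# entry-level jobs alongside roughly flat entry-level employment overall.

exposed_change = -0.11    # assumed: exposed occupations shrink 11%
unexposed_change = 0.02   # assumed: less-exposed occupations grow 2%

relative_decline = 1 - (1 + exposed_change) / (1 + unexposed_change)
print(f"relative decline for exposed jobs: {relative_decline:.1%}")  # ~12.7%

exposed_share = 0.2       # assumed: exposed jobs are 20% of entry-level work
total_change = exposed_share * exposed_change + (1 - exposed_share) * unexposed_change
print(f"change in total entry-level employment: {total_change:+.1%}")  # ~-0.6%
```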

There is one place I am very confident AI is making things harder. That is the many ways it is making it harder to find and get hired for what jobs do exist. Automated job applications are flooding and breaking the job application market, most of all in software but across the board. Matching is by all reports getting harder rather than easier, although if you are ahead of the curve on AI use here you presumably have an edge.

Predictions are hard, especially about the future, but I would as strongly as always disagree with this advice from Derek Thompson:

Derek Thompson: Someone once asked me recently if I had any advice on how to predict the future when I wrote about social and technological trends. Sure, I said. My advice is that predicting the future is impossible, so the best thing you can do is try to describe the present accurately.

Since most people live in the past, hanging onto stale narratives and outdated models, people who pay attention to what’s happening as it happens will appear to others like they’re predicting the future when all they’re doing is describing the present.

Predicting the future is hard in some ways, but that is no reason to throw up one’s hands and pretend to know nothing. We can especially know big things based on broad trends, destinations are often clearer than the road towards them. And in the age of AI, while predicting the present puts you ahead of many, we can know for certain many ways the future will not look like the present.

The most important and in some ways easiest things we can say involve what would happen with powerful or transformational AI, and that is really important, the only truly important thing, but in this particular context that’s not important right now.

If by the future we do mean the effect on jobs, and we presume that the world is not otherwise transformed so much we have far bigger problems, we can indeed still say many things. At minimum, we know many jobs will be amplified or augmented, and many more jobs will be fully automated or rendered irrelevant, even if we have high uncertainty about which ones in what order how fast.

We know that there will be some number of new jobs created by this process, especially if we have time to adjust, but that as AI ‘automates the new jobs as well’ this will get harder and eventually break. And we know that there is a lot of slack for an increasingly wealthy civilization to hire people for quite a lot of what I call ‘shadow jobs,’ which are jobs that would already exist except labor and capital currently choose better opportunities, again if those jobs too are not yet automated. Eventually we should expect unemployment.

Getting more speculative and less confident, earlier than that, it makes sense to expect unemployment for those lacking a necessary threshold of skill as technology advances, even if AI wasn’t a direct substitute for your intelligence. Notice that the employment charts above start at age 22. They used to start at age 18, and before that even younger, or they would have if we had charts back then.




Chris Roberts hopes Squadron 42 will be “almost as big” as GTA VI next year

The long and winding road

It’s hard to remember now, but Star Citizen‘s then-impressive $6.3 million Kickstarter campaign came just a few months before Grand Theft Auto V first launched on the PlayStation 3 and Xbox 360 (remember those?). But development on Rockstar’s long-awaited sequel didn’t start in earnest until 2020, publisher Take Two says, around the time Star Citizen developer Roberts Space Industries was settling a contentious lawsuit over game engine rights and rolling out a new development roadmap for the game.

A graph visualizing the growing crowdfunding for Star Citizen from 2012 (top) through 2022 (bottom). Credit: Reddit / Rainbowles

Of course, the development of Grand Theft Auto VI has happened completely behind closed doors, with developer Rockstar and publisher Take Two only occasionally offering tiny drops of information to a desperate press and fan base. By contrast, Roberts Space Industries has issued regular, incredibly detailed information dumps on the drawn-out development progress for Star Citizen and Squadron 42, even when that kind of openness has contributed to the public appearance of internal dysfunction.

The massive, ongoing crowdfunding that powers the open development structure “allows us to do things without imposing the framework of a typical video game studio,” Roberts told La Presse. “The players who fund us expect the best game, period. We don’t have to streamline, cut jobs, or change our business model.”

That pre-launch development cycle must eventually end, of course, and the La Presse report suggests that the full 1.0 release of Star Citizen is “now promised” for “2027 or 2028.” While we’d love to believe that, the history of Star Citizen development thus far (and the lack of any provided sourcing for the claim) makes us more than a little skeptical.



The first stars may not have been as uniformly massive as we thought


Collapsing gas clouds in the early universe may have formed lower-mass stars as well.

Stars form in the universe from massive clouds of gas. Credit: European Southern Observatory, CC BY-SA

For decades, astronomers have wondered what the very first stars in the universe were like. These stars formed new chemical elements, which enriched the universe and allowed the next generations of stars to form the first planets.

The first stars were initially composed of pure hydrogen and helium, and they were massive—hundreds to thousands of times the mass of the Sun and millions of times more luminous. Their short lives ended in enormous explosions called supernovae, so they had neither the time nor the raw materials to form planets, and they should no longer exist for astronomers to observe.

At least that’s what we thought.

Two studies published in the first half of 2025 suggest that collapsing gas clouds in the early universe may have formed lower-mass stars as well. One study uses a new astrophysical computer simulation that models turbulence within the cloud, causing fragmentation into smaller, star-forming clumps. The other study—an independent laboratory experiment—demonstrates how molecular hydrogen, a molecule essential for star formation, may have formed earlier and in larger abundances. The process involves a catalyst that may surprise chemistry teachers.

As an astronomer who studies star and planet formation and their dependence on chemical processes, I am excited at the possibility that chemistry in the first 50 million to 100 million years after the Big Bang may have been more active than we expected.

These findings suggest that the second generation of stars—the oldest stars we can currently observe and possibly the hosts of the first planets—may have formed earlier than astronomers thought.

Primordial star formation

Video illustration of the star and planet formation process. Credit: Space Telescope Science Institute.

Stars form when massive clouds of hydrogen many light-years across collapse under their own gravity. The collapse continues until a luminous sphere surrounds a dense core that is hot enough to sustain nuclear fusion.

Nuclear fusion happens when two or more atoms gain enough energy to fuse together. This process creates a new element and releases an incredible amount of energy, which heats the stellar core. In the first stars, hydrogen atoms fused together to create helium.
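The article doesn’t write out the reaction, but the net effect of hydrogen burning, summing over the intermediate steps of the proton-proton chain, can be stated in one line:

\[
4\,{}^{1}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He} + 2e^{+} + 2\nu_{e} + \text{energy (about 26.7 MeV per helium nucleus)}.
\]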

The new star shines because its surface is hot, but the energy fueling that luminosity percolates up from its core. The luminosity of a star is its total energy output in the form of light. The star’s brightness is the small fraction of that luminosity that we directly observe.
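The article doesn’t give the relation, but the standard inverse-square law is what separates the two quantities: for a star of luminosity L at distance d, the flux (brightness) we measure is

\[
F = \frac{L}{4\pi d^{2}},
\]

so the same luminosity looks fainter the farther away the star is.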

This process where stars form heavier elements by nuclear fusion is called stellar nucleosynthesis. It continues in stars after they form as their physical properties slowly change. The more massive stars can produce heavier elements such as carbon, oxygen, and nitrogen, all the way up to iron, in a sequence of fusion reactions that end in a supernova explosion.

Supernovae can create even heavier elements, completing the periodic table of elements. Lower-mass stars like the Sun, with their cooler cores, can sustain fusion only up to carbon. As they exhaust the hydrogen and helium in their cores, nuclear fusion stops, and the stars slowly evaporate.

The remnant of a high-mass star supernova explosion imaged by the Chandra X-ray Observatory, left, and the remnant of a low-mass star evaporating in a blue bubble, right. Credit: CC BY 4.0

High-mass stars have high pressure and temperature in their cores, so they burn bright and use up their gaseous fuel quickly. They last only a few million years, whereas low-mass stars—those less than two times the Sun’s mass—evolve much more slowly, with lifetimes of billions or even trillions of years.
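As a rough rule of thumb (an approximation not given in the article), a star’s main-sequence lifetime falls steeply with mass because luminosity rises roughly as the 3.5 power of mass:

\[
t_{\text{MS}} \;\approx\; 10\ \text{billion years} \times \left(\frac{M}{M_{\odot}}\right)^{-2.5},
\]

so a 30-solar-mass star burns out in a few million years, while a star of half the Sun’s mass lasts far longer than the current age of the universe.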

If the earliest stars were all high-mass stars, then they would have exploded long ago. But if low-mass stars also formed in the early universe, they may still exist for us to observe.

Chemistry that cools clouds

The first star-forming gas clouds, called protostellar clouds, were warm—roughly room temperature. Warm gas has internal pressure that pushes outward against the inward force of gravity trying to collapse the cloud. A hot air balloon stays inflated by the same principle. If the flame heating the air at the base of the balloon stops, the air inside cools, and the balloon begins to collapse.

Stars form when clouds of dust collapse inward and condense around a small, bright, dense core. Credit: NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

Only the most massive protostellar clouds with the most gravity could overcome the thermal pressure and eventually collapse. In this scenario, the first stars were all massive.

The only way to form the lower-mass stars we see today is for the protostellar clouds to cool. Gas in space cools by radiation, which transforms thermal energy into light that carries the energy out of the cloud. Hydrogen and helium atoms are not efficient radiators below several thousand degrees, but molecular hydrogen, H₂, is great at cooling gas at low temperatures.

When energized, H₂ emits infrared light, which cools the gas and lowers the internal pressure. That process would make gravitational collapse more likely in lower-mass clouds.

For decades, astronomers have reasoned that a low abundance of H₂ early on resulted in hotter clouds whose internal pressure was too high for them to easily collapse into stars. They concluded that only clouds with enormous masses, and therefore stronger gravity, would collapse, producing only massive stars.
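The standard way to quantify this reasoning is the Jeans mass, the minimum cloud mass that gravity can collapse against thermal pressure (a textbook estimate, with illustrative temperatures that are not taken from either new study):

\[ M_{J} \propto \frac{T^{3/2}}{\sqrt{\rho}}, \]

where \(T\) is the gas temperature and \(\rho\) its density. At fixed density, cooling a cloud from about 1,000 K to 200 K lowers that threshold by a factor of \((1000/200)^{3/2} \approx 11\), so better molecular coolants directly translate into smaller collapsing clouds and lower-mass stars.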

Helium hydride

In a July 2025 journal article, physicist Florian Grussie and collaborators at the Max Planck Institute for Nuclear Physics demonstrated that the first molecule to form in the universe, helium hydride, HeH⁺, could have been more abundant in the early universe than previously thought. They used a computer model and conducted a laboratory experiment to verify this result.

Helium hydride? In high school science you probably learned that helium is a noble gas, meaning it does not react with other atoms to form molecules or chemical compounds. As it turns out, it does—but only under the extremely sparse and dark conditions of the early universe, before the first stars formed.

HeH⁺ reacts with hydrogen deuteride—HD, which is one normal hydrogen atom bonded to a heavier deuterium atom—to form H₂. In the process, HeH⁺ also acts as a coolant, carrying thermal energy out of the gas as light. So a higher abundance of both molecular coolants early on may have allowed smaller clouds to cool faster and collapse to form lower-mass stars.

Gas flow also affects stellar initial masses

In another study, published in July 2025, astrophysicist Ke-Jung Chen and a research group he led at the Academia Sinica Institute of Astronomy and Astrophysics used a detailed computer simulation to model how gas in the early universe may have flowed.

The team’s model demonstrated that turbulence, or irregular motion, in giant collapsing gas clouds can form lower-mass cloud fragments from which lower-mass stars condense.

The study concluded that turbulence may have allowed these early gas clouds to form stars ranging from roughly the Sun’s mass up to about 40 times more massive.

The galaxy NGC 1140 is small and contains large amounts of primordial gas with far fewer elements heavier than hydrogen and helium than are present in our Sun. This composition makes it similar to the intensely star-forming galaxies found in the early universe. These early universe galaxies were the building blocks for large galaxies such as the Milky Way. Credit: ESA/Hubble & NASA, CC BY-ND

The two new studies both predict that the first population of stars could have included low-mass stars. Now, it is up to us observational astronomers to find them.

This is no easy task. Low-mass stars have low luminosities, so they are extremely faint. Several observational studies have recently reported possible detections, but none are yet confirmed with high confidence. If they are out there, though, we will find them eventually.

Luke Keller is a professor of physics and astronomy at Ithaca College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.

The first stars may not have been as uniformly massive as we thought Read More »

corsair’s-pc-dockable-screen-helped-me-monitor-my-pc-components-and-news-feeds

Corsair’s PC-dockable screen helped me monitor my PC components and news feeds


Corsair’s Xeneon Edge is the best at what it does but is software-dependent.

Corsair’s Xeneon Edge touchscreen monitor. Credit: Scharon Harding

Finding a cheap secondary PC monitor is pretty easy. But if you want one that looks good, is built well, and is easily customizable, you won’t find those qualities in a budget screen from a no-name brand on Amazon. Instead, Corsair’s Xeneon Edge is a premium alternative that almost justifies its $250 price tag.

Corsair first announced the Xeneon Edge at the CES trade show in January. It’s a 5-point capacitive touchscreen that can live on your desk and serve as a secondary computer monitor. If you’re feeling fun, you can download Corsair’s iCUE software to use customizable widgets for displaying things like CPU temperature and usage, the time and date, and media playing. More adventurous users can attach the screen to their desktop PC’s fan mounts or side panel.

I used Corsair’s monitor for a couple of weeks. From its build to its image quality and software, the monitor is exemplary for a screen of this kind. The flagship widgets feature needs some work, but I couldn’t ask for much more from a secondary, PC-mountable display.

PC-mountable monitor

The monitor is set to 50 percent brightness, which was sufficient in my sunny office. Maxing out brightness washed out the display’s colors. Credit: Scharon Harding

PC builders may be intrigued by the Xeneon Edge’s ability to attach to any 360 mm fan mount. There are four corner machine screws on the back of the monitor to attach the screen to a fan mount. Corsair also sells “Frame Series” PC cases that support attaching the monitor onto the side panel. You can see a video of the different PC mounting options here.

If you don’t have a desktop or want to pair Corsair’s screen with a laptop, the screen comes with a tiny plastic stand that adheres to the monitor’s four corners via the display’s 14 integrated magnets. This minimalist solution meant I could use my Xeneon Edge within minutes of opening it.

The included stand (top) and the monitor’s backside (bottom). Credit: Scharon Harding

Yet another option is to use the Xeneon Edge’s two standard female 1/4″-20 mounts to connect the monitor to a stand, giving it more height and, depending on the arm, the ability to rotate.

Widget drawbacks

While cheaper monitors similar to the Xeneon Edge are out there, they’re always just missing the mark. This $160 (as of this writing) option, for example, specifically names Corsair compatibility in its keyword-stuffed product name. Some of these rivals—which often have similar specs, like size and resolution—also emphasize their ability to display information from the connected system, such as CPU and GPU temperature. However, I haven’t seen these cheaper screens come with dedicated software that simplifies configuring what the monitor displays, while ensuring its image looks clean, sophisticated, and easily digestible.

That rival’s product images, for example, show a screen with a lot of information (potentially too much) about the connected PC’s CPU, GPU, RAM, and storage, accompanied by Dragon Ball Super anime graphics. But in order to get that on the display, you’d need to download and customize Aida64 and Wallpaper Engine, per the listing. iCUE is a simpler alternative and will require less time to set up.

To use widgets on the Xeneon Edge, iCUE must be running. Whenever I stopped the app from running in the background, the widgets disappeared, and the Xeneon Edge worked as a widget-free secondary monitor; once I reopened iCUE, my widget layouts were accessible again. (Corsair’s manual notes that “Monitor settings are saved directly on the device and will remain consistent, even when iCUE is not running,” but that covers the monitor’s settings, not its widgets.) This limitation could mean that you’ll never want to use Corsair’s widgets. For some people, particularly those building PCs and buying dedicated screens for monitoring PC components, requiring iCUE to run is counterproductive.

If peripheral companies like Corsair and Razer have broken you down to the point where you don’t mind proprietary software using computing resources in perpetuity, you’ll be happy with iCUE’s simple, sensible UI for tweaking things like the size and color of widgets.

But I thought there’d be more widgets—namely calendar and weather ones, as Corsair teased in January promotional images for the Xeneon Edge.

A promotional image of the touchscreen from January shows calendar and weather widgets.

I asked Corsair about this, and a company spokesperson said that the weather and calendar widgets will be available in Q1 2026. Wanting more and improved widgets is a good reason to hold off on buying this monitor, which just came out today (and it could get cheaper in the future, too).

A screenshot of Corsair iCUE configuring the Xeneon Edge. I’d like to see timer and alarm widgets added to the companion app. Credit: Scharon Harding/Corsair

Occasionally I had trouble navigating websites within the monitor’s URL widget. It was fine for leaving my favorite website up, for example. But the widget sometimes cut off certain areas, such as menu bars, on other websites. When I used the widget to display the website for an RSS feed reader, I sometimes got logged out when exiting iCUE. When I reopened iCUE, the widget wouldn’t let me type in order to log back in unless I had iCUE up on my other screen. Scrolling through the Ars Technica website looked choppy, too. Notably, iCUE emphasizes that “some websites do not permit their content to be displayed in an iFrame.”

The Ars Technica website within Corsair’s URL widget. Credit: Scharon Harding

Corsair’s rep told me that the URL widget uses a “customized flavor of Chromium.” Of course, the widget doesn’t offer nearly the same functionality as a standard browser. You can’t store bookmarks or enter new URLs within the widget, for example.

If the monitor is using widgets, you can’t use it like a regular monitor, so you can’t drag or view windows on it. This was limiting and prevented me from displaying widgets and other apps fit for a secondary screen, like Slack, simultaneously. As of this writing, the only dedicated chat widget is for Twitch Chat.

Corsair’s rep told me that the company is currently “working on more features and widgets, so things should open up pretty soon.” He pointed to upcoming widgets for Discord, stocks, a virtual keyboard and mouse, and SimHub, plus a widget builder.

I think most users will end up choosing between having the display typically run widgets or serving as a monitor. For Team Widget, there’s a handy feature where you can swipe left or right on the screen to quickly toggle different widget layouts that you’ve saved.

As good as it gets, with room for improvement

Corsair’s Xeneon Edge isn’t the only 14.5-inch touchscreen monitor out there, but it certainly has an edge over its nondescript rivals. The Xeneon Edge is more expensive than most of its competition. But during my testing with the display, I never felt like I was looking at something cheap. The IPS panel appeared bright, colorful, and legible, even in bright rooms and when displaying smaller text (very small text was still readable, but I’d prefer to read small lettering on something sharper).

Many will completely forgo Corsair’s widgets. They’ll miss out on some of what makes the Xeneon Edge expensive, but the display’s mounting options, solid build, and image quality, along with Corsair’s reputation, help it make sense over cheaper 14.5-inch touchscreens. Corsair gives the monitor a two-year limited warranty.

Some might consider the software burdensome, but if you choose to use it, the app is modern and effective without making you jump through hoops to do things like adjust the monitor’s brightness and contrast, manage sensor logging, or set an image as the screen’s background.

More widgets would help this monitor come closer to earning the $250 MSRP. But if you’re looking for a small, premium touchscreen to add to your desk—or mount to your PC—the Xeneon Edge is top of the line.

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Corsair’s PC-dockable screen helped me monitor my PC components and news feeds Read More »

4chan-refuses-to-pay-uk-online-safety-act-fines,-asks-trump-admin-to-intervene

4chan refuses to pay UK Online Safety Act fines, asks Trump admin to intervene

4chan’s law firms, Byrne & Storm and Coleman Law, said in a statement on August 15 that “4chan is a United States company, incorporated in Delaware, with no establishment, assets, or operations in the United Kingdom. Any attempt to impose or enforce a penalty against 4chan will be resisted in US federal court. American businesses do not surrender their First Amendment rights because a foreign bureaucrat sends them an e-mail.”

4chan seeks Trump admin’s help

4chan’s lawyers added that US “authorities have been briefed on this matter… We call on the Trump administration to invoke all diplomatic and legal levers available to the United States to protect American companies from extraterritorial censorship mandates.”

The US Federal Trade Commission appears to have a similar concern. FTC Chairman Andrew Ferguson yesterday sent letters to over a dozen social media and technology companies warning them that “censoring Americans to comply with a foreign power’s laws, demands, or expected demands” may violate US law.

Ferguson’s letters directly referenced the UK Online Safety Act. The letters were sent to Akamai, Alphabet, Amazon, Apple, Cloudflare, Discord, GoDaddy, Meta, Microsoft, Signal, Snap, Slack, and X.

“The letters noted that companies might feel pressured to censor and weaken data security protections for Americans in response to the laws, demands, or expected demands of foreign powers,” the FTC said. “These laws include the European Union’s Digital Services Act and the United Kingdom’s Online Safety Act, which incentivize tech companies to censor worldwide speech, and the UK’s Investigatory Powers Act, which can require companies to weaken their encryption measures to enable UK law enforcement to access data stored by users.”

Wikipedia is meanwhile fighting a court battle against a UK Online Safety Act provision that could force it to verify the identity of Wikipedia users. The Wikimedia Foundation said the potential requirement would be burdensome to users and “could expose users to data breaches, stalking, vexatious lawsuits or even imprisonment by authoritarian regimes.”

Separately, the Trump administration said this week that the UK dropped its demand that Apple create a backdoor for government security officials to access encrypted data. The UK made the demand under its Investigatory Powers Act.

4chan refuses to pay UK Online Safety Act fines, asks Trump admin to intervene Read More »

for-some-people,-music-doesn’t-connect-with-any-of-the-brain’s-reward-circuits

For some people, music doesn’t connect with any of the brain’s reward circuits

“I was talking with my colleagues at a conference 10 years ago and I just casually said that everyone loves music,” recalls Josep Marco Pallarés, a neuroscientist at the University of Barcelona. But it was a statement he started to question almost immediately, given there were clinical cases in psychiatry where patients reported deriving absolutely no pleasure from listening to any kind of tunes.

So, Pallarés and his team spent the past 10 years researching the neural mechanisms behind a condition they called specific musical anhedonia: the inability to enjoy music.

The wiring behind joy

When we like something, it is usually a joint effect of circuits in our brain responsible for perception—be it perception of taste, touch, or sound—and reward circuits that give us a shot of dopamine in response to nice things we experience. For a long time, scientists attributed a lack of pleasure from things most people find enjoyable to malfunctions in one or more of those circuits.

You can’t enjoy music when the parts of the brain that process auditory stimuli don’t work properly, since you can’t hear it in the way that you would if the system were intact. You also can’t enjoy music when the reward circuit refuses to release that dopamine, even if you can hear it loud and clear. Pallarés, though, thought this traditional idea lacked a bit of explanatory power.

“When your reward circuit doesn’t work, you don’t experience enjoyment from anything, not just music,” Pallarés says. “But some people have no hearing impairments and can enjoy everything else—winning money, for example. The only thing they can’t enjoy is music.”

For some people, music doesn’t connect with any of the brain’s reward circuits Read More »

deeply-divided-supreme-court-lets-nih-grant-terminations-continue

Deeply divided Supreme Court lets NIH grant terminations continue

The dissents

The primary dissent was written by Chief Justice Roberts, and joined in part by the three Democratic appointees, Jackson, Kagan, and Sotomayor. It is a grand total of one paragraph and can be distilled down to a single sentence: “If the District Court had jurisdiction to vacate the directives, it also had jurisdiction to vacate the ‘Resulting Grant Terminations.’”

Jackson, however, chose to write a separate and far more detailed argument against the decision, mostly focusing on the fact that it’s not simply a matter of abstract law; it has real-world consequences.

She notes that existing law prevents plaintiffs from suing in the Court of Federal Claims while the facts are under dispute in other courts (something acknowledged by Barrett). That would mean that, as here, any plaintiffs would have to have the policy declared illegal first in the District Court, and only after that was fully resolved could they turn to the Federal Claims Court to try to restore their grants. That’s a process that could take years. In the meantime, the scientists would be out of funding, with dire consequences.

Yearslong studies will lose validity. Animal subjects will be euthanized. Life-saving medication trials will be abandoned. Countless researchers will lose their jobs. And community health clinics will close.

Jackson also had little interest in hearing that the government would be harmed by paying out the grants in the meantime. “For the Government, the incremental expenditure of money is at stake,” she wrote. “For the plaintiffs and the public, scientific progress itself hangs in the balance along with the lives that progress saves.”

With this decision, of course, it no longer hangs in the balance. There’s a possibility that the District Court’s ruling that the government’s policy was arbitrary and capricious will ultimately prevail; it’s not clear, because Barrett says she hasn’t even seen the government make arguments there, and Roberts only wrote regarding the venue issues. In the meantime, even with the policy stayed, it’s unlikely that anyone will focus grant proposals on the disfavored subjects, given that the policy might be reinstated at any moment.

And even if that ruling is upheld, it will likely take years to get there, and only then could a separate case be started to restore the funding. Any labs that had been using those grants will have long since moved on, and the people working on those projects scattered.

Deeply divided Supreme Court lets NIH grant terminations continue Read More »