Author name: Shannon Garcia


Starship’s heat shield appears to have performed quite well in test

One of the more curious aspects of the 10th flight of SpaceX’s Starship rocket on Tuesday was the striking orange discoloration of the second stage. This could be observed on video taken from a buoy near the landing site as the vehicle made a soft landing in the Indian Ocean.

This color—so different from the silvery skin and black tiles that cover Starship’s upper stage—led to all sorts of speculation. Had heating damaged the stainless steel skin? Had the vehicle’s tiles been shucked off, leaving behind some sort of orange adhesive material? Was this actually NASA’s Space Launch System in disguise?

The answer to this question was rather important, as SpaceX founder Elon Musk had said before this flight that gathering data about the performance of this heat shield was the most important aspect of the mission.

We got some answers on Thursday. During the afternoon, the company posted some new high-resolution photos, taken by a drone in the vicinity of the landing location. They offered a clear view of the Starship vehicle with its heat shield intact, albeit with a rust-colored tint.

Musk provided some clarity on this discoloration on Thursday evening, writing on the social media site X, “Worth noting that the heat shield tiles almost entirely stayed attached, so the latest upgrades are looking good! The red color is from some metallic test tiles that oxidized and the white is from insulation of areas where we deliberately removed tiles.”

The new images and information from Musk suggest that SpaceX is making progress on developing a heat shield for Starship. This really is the key technology to make an upper stage rapidly reusable—NASA’s space shuttle orbiters were reusable but required a standing army to refurbish the vehicle between flights. To unlock Starship’s potential, SpaceX wants to be able to refly Starships within 24 hours.



CDC slashed food safety surveillance, now tracks only 2 of 8 top infections

In July, the Centers for Disease Control and Prevention dramatically, but quietly, scaled back a food safety surveillance system, cutting active tracking from eight top foodborne infections down to just two, according to a report by NBC News.

The Foodborne Diseases Active Surveillance Network (FoodNet)—a network of surveillance sites that spans 10 states and covers about 54 million Americans (16 percent of the US population)—previously included active monitoring for eight infections from pathogens. Those include Campylobacter, Cyclospora, Listeria, Salmonella, Shiga toxin-producing E. coli (STEC), Shigella, Vibrio, and Yersinia.

Now the network is only monitoring for STEC and Salmonella.

A list of talking points the CDC sent the Connecticut health department (which is part of FoodNet) suggested that a lack of funding is behind the scaleback. “Funding has not kept pace with the resources required to maintain the continuation of FoodNet surveillance for all eight pathogens,” the CDC document said, according to NBC. The Trump administration has made brutal cuts to federal agencies, including the CDC, which has lost hundreds of employees this year.

A CDC spokesperson told the outlet that “Although FoodNet will narrow its focus to Salmonella and STEC, it will maintain both its infrastructure and the quality it has come to represent. Narrowing FoodNet’s reporting requirements and associated activities will allow FoodNet staff to prioritize core activities.”



Are They Starting To Take Our Jobs?

Is generative AI making it harder for young people to find jobs?

My answer is:

  1. Yes, definitely, in terms of finding and getting hired for any given job that exists. That’s getting harder. AI is most definitely screwing up that process.

  2. Yes, probably, in terms of employment in automation-impacted sectors. It always seemed odd to think otherwise, and this week’s new study has strong evidence here.

  3. Maybe, overall, in terms of the jobs available (excluding search and matching effects from #1), because AI should be increasing employment in areas not being automated yet, and that effect can be small and still dominate.

The claims go back and forth on the employment effects of AI. As Derek Thompson points out, if you go by articles in the popular press, we’ve gone from ‘possibly’ to ‘definitely yes’ to ‘almost certainly no’ to what Derek describes as this week’s ‘plausibly yes,’ which others are treating as stronger than that.

Derek Thompson: To be honest with you, I considered this debate well and truly settled. No, I’d come to think, AI is probably not wrecking employment for young people. But now, I’m thinking about changing my mind again.

It’s weird to pull an ‘I told you all so’ when what you said was ‘I am confused and you all are overconfident’ but yeah, basically. The idea that this was ‘well and truly settled’ always seemed absurd to me even considering present effects, none of these arguments should have filled anyone with confidence and neither should the new one, and this is AI so even if it definitively wasn’t happening now who knows where we would be six months later.

People changing their minds a lot reflects, as Derek notes, the way discovery, evaluation, discourse and science are supposed to work, except for the overconfidence.

Most recently before this week we had claims that what looks like effects of AI automation are delayed impacts from Covid, various interest rate changes, existing overhiring or other non-AI market trends.

The new hotness is this new Stanford study from Brynjolfsson, Chandar and Chen:

This paper examines changes in the labor market for occupations exposed to generative artificial intelligence using high-frequency administrative data from the largest payroll software provider in the United States.

We present six facts that characterize these shifts. We find that since the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment even after controlling for firm-level shocks.

In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow.

We also find that adjustments occur primarily through employment rather than compensation. Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor. Our results are robust to alternative explanations, such as excluding technology-related firms and excluding occupations amenable to remote work.

Effects acting through employment rather than compensation makes sense since the different fields are competing against each other for labor and wages are sticky downwards even across workers.

Bharat Chandar (author): We observe millions of workers each month. Using this, we cut the data finely by age and occupation.

What do we find?

Stories about young software developers struggling to find work are borne out in the data.

Employment for 22-25 y/o developers ⬇️ ~20% from peak in 2022. Older ages show steady rise.

This isn’t just about software. See a similar pattern for customer service reps, another job highly exposed to AI. For both roles, the decline is sharpest for the 22-25 age group, with older, more experienced workers less affected.

In contrast, jobs less exposed to AI, like health aides, show the opposite trend. These jobs, which require in-person physical tasks, have seen the fastest employment growth among youngest workers.

Overall, job market for entry-level workers has been stagnant since late 2022, while market for experienced workers remains robust. Stagnation for young workers driven by declines in AI-exposed jobs. Of course, lots of changes in the economy, so this is not all caused by AI.

Note the y-axis scale on the graphs, but that does seem like a definitive result. It seems far too fast and targeted to be the result of non-AI factors.

John Burn-Murdoch: Very important paper, for two reasons:

  1. Key finding: employment *is* falling in early-career roles exposed to LLM automation

  2. Shows that administrative data (millions of payroll records) is much better than survey data for questions requiring precision (occupation x age)

There’s always that battle between ‘our findings are robust to various things’ and ‘your findings go away when you account for this particular thing in this way,’ and different findings appear to contradict one another.

I don’t know for sure who is right, but I was convinced by their explanation of why they have better data sources and thus they’re right and the FT study was wrong, in terms of there being relative entry-level employment effects that vary based on the amount of automation in each sector.

Areas with automation from AI saw job losses at entry level, whereas areas with AI amplification saw job gains, but we should expect more full automation over time.

There’s the additional twist that a 13 percent decline in employment for the AI-exposed early-career jobs does not mean work is harder to find. Everyone agrees AI will automate away some jobs. The bull case for employment is not that those jobs don’t go away. It is that those jobs are replaced by other jobs. So the 13% could be an 11% decline in some areas and a 2% increase in other larger areas, where they cancel out. AI is driving substantial economic growth already which should create jobs. We can’t tell.
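The decomposition point here is simple composition arithmetic, and a toy sketch makes it concrete. All numbers below are made up for illustration; they are not the study’s actual sector sizes or rates:

```python
# Toy arithmetic (made-up numbers): a steep decline in AI-exposed jobs
# can coexist with flat or rising total employment if other sectors grow.

# Hypothetical entry-level employment by sector (thousands of workers).
before = {"ai_exposed": 200, "other": 800}

# Hypothetical changes: exposed jobs fall 13%, other jobs grow 4%.
after = {
    "ai_exposed": before["ai_exposed"] * (1 - 0.13),
    "other": before["other"] * (1 + 0.04),
}

net_change = sum(after.values()) - sum(before.values())
pct_change = net_change / sum(before.values()) * 100

print(f"Exposed sector: {after['ai_exposed'] - before['ai_exposed']:+.0f}k")
print(f"Other sectors:  {after['other'] - before['other']:+.0f}k")
print(f"Net:            {net_change:+.0f}k ({pct_change:+.1f}%)")
```

With these particular numbers the net comes out slightly positive; whether it does in reality depends entirely on the relative sizes and growth rates of the sectors, which is why the aggregate employment data cannot settle the question on its own.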

There is one place I am very confident AI is making things harder. That is the many ways it is making it harder to find and get hired for what jobs do exist. Automated job applications are flooding and breaking the job application market, most of all in software but across the board. Matching is by all reports getting harder rather than easier, although if you are ahead of the curve on AI use here you presumably have an edge.

Predictions are hard, especially about the future, but I would as strongly as always disagree with this advice from Derek Thompson:

Derek Thompson: Someone once asked me recently if I had any advice on how to predict the future when I wrote about social and technological trends. Sure, I said. My advice is that predicting the future is impossible, so the best thing you can do is try to describe the present accurately.

Since most people live in the past, hanging onto stale narratives and outdated models, people who pay attention to what’s happening as it happens will appear to others like they’re predicting the future when all they’re doing is describing the present.

Predicting the future is hard in some ways, but that is no reason to throw up one’s hands and pretend to know nothing. We can especially know big things based on broad trends; destinations are often clearer than the roads toward them. And in the age of AI, while predicting the present puts you ahead of many, we can know for certain many ways the future will not look like the present.

The most important and in some ways easiest things we can say involve what would happen with powerful or transformational AI, and that is really important, the only truly important thing, but in this particular context that’s not important right now.

If by the future we do mean the effect on jobs, and we presume that the world is not otherwise transformed so much we have far bigger problems, we can indeed still say many things. At minimum, we know many jobs will be amplified or augmented, and many more jobs will be fully automated or rendered irrelevant, even if we have high uncertainty about which ones in what order how fast.

We know that there will be some number of new jobs created by this process, especially if we have time to adjust, but that as AI ‘automates the new jobs as well’ this will get harder and eventually break. And we know that there is a lot of slack for an increasingly wealthy civilization to hire people for quite a lot of what I call ‘shadow jobs,’ which are jobs that would already exist except labor and capital currently choose better opportunities, again if those jobs too are not yet automated. Eventually we should expect unemployment.

Getting more speculative and less confident, earlier than that, it makes sense to expect unemployment for those lacking a necessary threshold of skill as technology advances, even if AI wasn’t a direct substitute for your intelligence. Notice that the employment charts above start at age 22. They used to start at age 18, and before that even younger, or they would have if we had charts back then.




Chris Roberts hopes Squadron 42 will be “almost as big” as GTA VI next year

The long and winding road

It’s hard to remember now, but Star Citizen‘s then-impressive $6.3 million Kickstarter campaign came just a few months before Grand Theft Auto V first launched on the PlayStation 3 and Xbox 360 (remember those?). But development on Rockstar’s long-awaited sequel didn’t start in earnest until 2020, publisher Take Two says, around the time Star Citizen developer Roberts Space Industries was settling a contentious lawsuit over game engine rights and rolling out a new development roadmap for the game.

A graph visualizing the growing crowdfunding for Star Citizen from 2012 (top) through 2022 (bottom). Credit: Reddit / Rainbowles

Of course, the development of Grand Theft Auto VI has happened completely behind closed doors, with developer Rockstar and publisher Take Two only occasionally offering tiny drops of information to a desperate press and fan base. By contrast, Roberts Space Industries has issued regular, incredibly detailed information dumps on the drawn-out development progress for Star Citizen and Squadron 42, even when that kind of openness has contributed to the public appearance of internal dysfunction.

The massive, ongoing crowdfunding that powers the open development structure “allows us to do things without imposing the framework of a typical video game studio,” Roberts told La Presse. “The players who fund us expect the best game, period. We don’t have to streamline, cut jobs, or change our business model.”

That pre-launch development cycle must eventually end, of course, and the La Presse report suggests that the full 1.0 release of Star Citizen is “now promised” for “2027 or 2028.” While we’d love to believe that, the history of Star Citizen development thus far (and the lack of any provided sourcing for the claim) makes us more than a little skeptical.



The first stars may not have been as uniformly massive as we thought


Collapsing gas clouds in the early universe may have formed lower-mass stars as well.

Stars form in the universe from massive clouds of gas. Credit: European Southern Observatory, CC BY-SA

For decades, astronomers have wondered what the very first stars in the universe were like. These stars formed new chemical elements, which enriched the universe and allowed the next generations of stars to form the first planets.

The first stars were initially composed of pure hydrogen and helium, and they were massive—hundreds to thousands of times the mass of the Sun and millions of times more luminous. Their short lives ended in enormous explosions called supernovae, so they had neither the time nor the raw materials to form planets, and they should no longer exist for astronomers to observe.

At least that’s what we thought.

Two studies published in the first half of 2025 suggest that collapsing gas clouds in the early universe may have formed lower-mass stars as well. One study uses a new astrophysical computer simulation that models turbulence within the cloud, causing fragmentation into smaller, star-forming clumps. The other study—an independent laboratory experiment—demonstrates how molecular hydrogen, a molecule essential for star formation, may have formed earlier and in larger abundances. The process involves a catalyst that may surprise chemistry teachers.

As an astronomer who studies star and planet formation and their dependence on chemical processes, I am excited at the possibility that chemistry in the first 50 million to 100 million years after the Big Bang may have been more active than we expected.

These findings suggest that the second generation of stars—the oldest stars we can currently observe and possibly the hosts of the first planets—may have formed earlier than astronomers thought.

Primordial star formation

Video illustration of the star and planet formation process. Credit: Space Telescope Science Institute.

Stars form when massive clouds of hydrogen many light-years across collapse under their own gravity. The collapse continues until a luminous sphere surrounds a dense core that is hot enough to sustain nuclear fusion.

Nuclear fusion happens when two or more atoms gain enough energy to fuse together. This process creates a new element and releases an incredible amount of energy, which heats the stellar core. In the first stars, hydrogen atoms fused together to create helium.

The new star shines because its surface is hot, but the energy fueling that luminosity percolates up from its core. The luminosity of a star is its total energy output in the form of light. The star’s brightness is the small fraction of that luminosity that we directly observe.

This process where stars form heavier elements by nuclear fusion is called stellar nucleosynthesis. It continues in stars after they form as their physical properties slowly change. The more massive stars can produce heavier elements such as carbon, oxygen, and nitrogen, all the way up to iron, in a sequence of fusion reactions that end in a supernova explosion.

Supernovae can create even heavier elements, completing the periodic table of elements. Lower-mass stars like the Sun, with their cooler cores, can sustain fusion only up to carbon. As they exhaust the hydrogen and helium in their cores, nuclear fusion stops, and the stars slowly evaporate.

The remnant of a high-mass star supernova explosion imaged by the Chandra X-ray Observatory, left, and the remnant of a low-mass star evaporating in a blue bubble, right. Credit: CC BY 4.0

High-mass stars have high pressure and temperature in their cores, so they burn bright and use up their gaseous fuel quickly. They last only a few million years, whereas low-mass stars—those less than two times the Sun’s mass—evolve much more slowly, with lifetimes of billions or even trillions of years.

If the earliest stars were all high-mass stars, then they would have exploded long ago. But if low-mass stars also formed in the early universe, they may still exist for us to observe.

Chemistry that cools clouds

The first star-forming gas clouds, called protostellar clouds, were warm—roughly room temperature. Warm gas has internal pressure that pushes outward against the inward force of gravity trying to collapse the cloud. A hot air balloon stays inflated by the same principle. If the flame heating the air at the base of the balloon stops, the air inside cools, and the balloon begins to collapse.

Stars form when clouds of dust collapse inward and condense around a small, bright, dense core. Credit: NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

Only the most massive protostellar clouds with the most gravity could overcome the thermal pressure and eventually collapse. In this scenario, the first stars were all massive.

The only way to form the lower-mass stars we see today is for the protostellar clouds to cool. Gas in space cools by radiation, which transforms thermal energy into light that carries the energy out of the cloud. Hydrogen and helium atoms are not efficient radiators below several thousand degrees, but molecular hydrogen, H₂, is great at cooling gas at low temperatures.

When energized, H₂ emits infrared light, which cools the gas and lowers the internal pressure. That process would make gravitational collapse more likely in lower-mass clouds.

For decades, astronomers have reasoned that a low abundance of H₂ early on resulted in hotter clouds whose internal pressure was too high for them to easily collapse into stars. They concluded that only clouds with enormous masses, and therefore stronger gravity, would collapse, leaving only massive stars.
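How much cooling matters can be sketched with the textbook Jeans-mass scaling, which is standard astrophysics background rather than something the article invokes by name: at fixed density, the minimum mass a cloud needs in order to collapse grows roughly as temperature to the 3/2 power. The temperatures below are purely illustrative:

```python
# Rough sketch using the textbook Jeans-mass scaling (M_J ~ T^1.5 at
# fixed density); standard background, not a result from the article.

def collapse_threshold_drop(t_hot_k: float, t_cold_k: float) -> float:
    """Factor by which the minimum collapsing cloud mass falls when
    the gas cools from t_hot_k to t_cold_k (density held fixed)."""
    return (t_hot_k / t_cold_k) ** 1.5

# Illustrative temperatures: an uncooled primordial cloud (~1,000 K)
# vs. one cooled by H2 line emission (~200 K).
print(collapse_threshold_drop(1000, 200))  # mass threshold ~11x lower
```

The sketch shows why even modest H₂ cooling opens the door to much less massive clouds, and hence to lower-mass stars.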

Helium hydride

In a July 2025 journal article, physicist Florian Grussie and collaborators at the Max Planck Institute for Nuclear Physics demonstrated that the first molecule to form in the universe, helium hydride, HeH⁺, could have been more abundant in the early universe than previously thought. They used a computer model and conducted a laboratory experiment to verify this result.

Helium hydride? In high school science you probably learned that helium is a noble gas, meaning it does not react with other atoms to form molecules or chemical compounds. As it turns out, it does—but only under the extremely sparse and dark conditions of the early universe, before the first stars formed.

HeH⁺ reacts with hydrogen deuteride—HD, which is one normal hydrogen atom bonded to a heavier deuterium atom—to form H₂. In the process, HeH⁺ also acts as a coolant and releases heat in the form of light. So the high abundance of both molecular coolants earlier on may have allowed smaller clouds to cool faster and collapse to form lower-mass stars.

Gas flow also affects stellar initial masses

In another study, published in July 2025, astrophysicist Ke-Jung Chen led a research group at the Academia Sinica Institute of Astronomy and Astrophysics using a detailed computer simulation that modeled how gas in the early universe may have flowed.

The team’s model demonstrated that turbulence, or irregular motion, in giant collapsing gas clouds can form lower-mass cloud fragments from which lower-mass stars condense.

The study concluded that turbulence may have allowed these early gas clouds to form stars ranging from roughly the Sun’s mass up to about 40 times more massive.

The galaxy NGC 1140 is small and contains large amounts of primordial gas with far fewer elements heavier than hydrogen and helium than are present in our Sun. This composition makes it similar to the intensely star-forming galaxies found in the early universe. These early universe galaxies were the building blocks for large galaxies such as the Milky Way. Credit: ESA/Hubble & NASA, CC BY-ND

The two new studies both predict that the first population of stars could have included low-mass stars. Now, it is up to us observational astronomers to find them.

This is no easy task. Low-mass stars have low luminosities, so they are extremely faint. Several observational studies have recently reported possible detections, but none are yet confirmed with high confidence. If they are out there, though, we will find them eventually.

Luke Keller is a professor of physics and astronomy at Ithaca College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.



Corsair’s PC-dockable screen helped me monitor my PC components and news feeds


Corsair’s Xeneon Edge is the best at what it does but is software-dependent.

Corsair Xeneon Edge

Corsair’s Xeneon Edge touchscreen monitor. Credit: Scharon Harding


Finding a cheap secondary PC monitor is pretty easy. But if you want one that looks good, is built well, and is easily customizable, you won’t find those qualities in a budget screen from a no-name brand on Amazon. Instead, Corsair’s Xeneon Edge is a premium alternative that almost justifies its $250 price tag.

Corsair first announced the Xeneon Edge at the CES trade show in January. It’s a 5-point capacitive touchscreen that can live on your desk and serve as a secondary computer monitor. If you’re feeling fun, you can download Corsair’s iCUE software to use customizable widgets for displaying things like CPU temperature and usage, the time and date, and media playing. More adventurous users can attach the screen onto their desktop PC’s fan mounts or side panel.

I used Corsair’s monitor for a couple of weeks. From its build to its image quality and software, the monitor is exemplary for a screen of this kind. The flagship widgets feature needs some work, but I couldn’t ask for much more from a secondary, PC-mountable display.

PC-mountable monitor

Corsair Xeneon Edge

The monitor is set to 50 percent brightness, which was sufficient in my sunny office. Maxing out brightness washed out the display’s colors.

Credit: Scharon Harding


PC builders may be intrigued by the Xeneon Edge’s ability to attach to any 360 mm fan mount. There are four corner machine screws on the back of the monitor to attach the screen to a fan mount. Corsair also sells “Frame Series” PC cases that support attaching the monitor onto the side panel. You can see a video of the different PC mounting options here.

If you don’t have a desktop or want to pair Corsair’s screen with a laptop, the screen comes with a tiny plastic stand that adheres to the monitor’s four corners via the display’s 14 integrated magnets. This minimalist solution meant I could use my Xeneon Edge within minutes of opening it.

Corsair Xeneon Edge's backside and stand

The included stand (top) and the monitor’s backside (bottom).

Credit: Scharon Harding


Yet another option is to use the Xeneon Edge’s two standard female 1/4″-20 mounts to connect the monitor to a stand, giving it more height and, depending on the arm, the ability to rotate.

Widget drawbacks

While cheaper monitors similar to the Xeneon Edge are out there, they’re always just missing the mark. This $160 (as of this writing) option, for example, specifically names Corsair compatibility in its keyword-stuffed product name. Some of these rivals—which often have similar specs, like size and resolution—also emphasize their ability to display information from the connected system, such as CPU and GPU temperature. However, I haven’t seen these cheaper screens come with dedicated software that simplifies configuring what the monitor displays, while ensuring its image looks clean, sophisticated, and easily digestible.

This monitor’s product images, for example, show a screen with a lot of information (potentially too much) about the connected PC’s CPU, GPU, RAM, and storage, accompanied by Dragon Ball Super anime graphics. But in order to get that on the display, you’d need to download and customize Aida64 and Wallpaper Engine, per the listing. iCUE is a simpler alternative and will require less time to set up.

To use widgets on the Xeneon Edge, iCUE must be running. Whenever I stopped the app from running in the background, the widgets disappeared, and the Xeneon Edge worked as a widget-free secondary monitor. Corsair’s manual reads: “Monitor settings are saved directly on the device and will remain consistent, even when iCUE is not running,” and indeed, once I reopened iCUE, my widget layouts were accessible again. Still, this limitation could mean that you’ll never want to use Corsair’s widgets. For some people, particularly those building PCs and buying dedicated screens for monitoring PC components, requiring iCUE to run is counterproductive.

If peripheral companies like Corsair and Razer have broken you down to where you don’t mind proprietary software using computing resources in perpetuity, you’ll be happy with iCUE’s simple, sensible UI for tweaking things like the size and color of widgets.

But I thought there’d be more widgets—namely calendar and weather ones, as Corsair teased in January promotional images for the Xeneon Edge.

A promotional image of the touchscreen from January shows calendar and weather widgets.

I asked Corsair about this, and a company spokesperson said that the weather and calendar widgets will be available in Q1 2026. Wanting more and improved widgets is a good reason to hold off on buying this monitor, which just came out today (it could also get cheaper in the future).

A screenshot of Corsair iCUE configuring the Xeneon Edge.

I’d like to see timer and alarm widgets added to the companion app.

Credit: Scharon Harding/Corsair


Occasionally I had trouble navigating websites within the monitor’s URL widget. It was fine for leaving my favorite website up, for example. But the widget sometimes cut off certain areas, such as menu bars, on other websites. When I used the widget to display the website for an RSS feed reader, I sometimes got logged out when exiting iCUE. When I reopened iCUE, the widget wouldn’t let me type to log back in unless I had iCUE up on my other screen. Scrolling through the Ars Technica website looked choppy, too. Notably, iCUE emphasizes that “some websites do not permit their content to be displayed in an iFrame.”

Corsair Xeneon Edge

The Ars Technica website within Corsair’s URL widget.

Credit: Scharon Harding


Corsair’s rep told me that the URL widget uses a “customized flavor of Chromium.” Of course, the widget doesn’t offer nearly the same functionality as a standard browser. You can’t store bookmarks or enter new URLs within the widget, for example.

If the monitor is using widgets, you can’t use it like a regular monitor, so you can’t drag or view windows on it. This was limiting and prevented me from displaying widgets and other apps fit for a secondary screen, like Slack, simultaneously. As of my writing, the only dedicated chat widget is for Twitch Chat.

Corsair’s rep told me that the company is currently “working on more features and widgets, so things should open up pretty soon.” He pointed to upcoming widgets for Discord, stocks, a virtual keyboard and mouse, and SimHub, plus a widget builder.

I think most users will end up choosing between having the display typically run widgets or serving as a monitor. For Team Widget, there’s a handy feature where you can swipe left or right on the screen to quickly toggle different widget layouts that you’ve saved.

As good as it gets, with room for improvement

Corsair’s Xeneon Edge isn’t the only 14.5-inch touchscreen monitor out there, but it certainly has an edge over its nondescript rivals. The Xeneon Edge is more expensive than most of its competition. But during my testing with the display, I never felt like I was looking at something cheap. The IPS panel appeared bright, colorful, and legible, even in bright rooms and when displaying smaller text (very small text was still readable, but I’d prefer to read small lettering on something sharper).

Many will completely forgo Corsair’s widgets. They’ll miss out on some of what makes the Xeneon Edge expensive, but the display’s mounting options, solid build, and image quality, along with Corsair’s reputation, help it make sense over cheaper 14.5-inch touchscreens. Corsair gives the monitor a two-year limited warranty.

Some might consider the software burdensome, but if you choose to use it, the app is modern and effective without making you jump through hoops to do things like adjust the monitor’s brightness, contrast, or sensor logging or set an image as the screen’s background.

More widgets would help this monitor come closer to earning the $250 MSRP. But if you’re looking for a small, premium touchscreen to add to your desk—or mount to your PC—the Xeneon Edge is top of the line.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Corsair’s PC-dockable screen helped me monitor my PC components and news feeds


4chan refuses to pay UK Online Safety Act fines, asks Trump admin to intervene

4chan’s law firms, Byrne & Storm and Coleman Law, said in a statement on August 15 that “4chan is a United States company, incorporated in Delaware, with no establishment, assets, or operations in the United Kingdom. Any attempt to impose or enforce a penalty against 4chan will be resisted in US federal court. American businesses do not surrender their First Amendment rights because a foreign bureaucrat sends them an e-mail.”

4chan seeks Trump admin’s help

4chan’s lawyers added that US “authorities have been briefed on this matter… We call on the Trump administration to invoke all diplomatic and legal levers available to the United States to protect American companies from extraterritorial censorship mandates.”

The US Federal Trade Commission appears to have a similar concern. FTC Chairman Andrew Ferguson yesterday sent letters to over a dozen social media and technology companies warning them that “censoring Americans to comply with a foreign power’s laws, demands, or expected demands” may violate US law.

Ferguson’s letters directly referenced the UK Online Safety Act. The letters were sent to Akamai, Alphabet, Amazon, Apple, Cloudflare, Discord, GoDaddy, Meta, Microsoft, Signal, Snap, Slack, and X.

“The letters noted that companies might feel pressured to censor and weaken data security protections for Americans in response to the laws, demands, or expected demands of foreign powers,” the FTC said. “These laws include the European Union’s Digital Services Act and the United Kingdom’s Online Safety Act, which incentivize tech companies to censor worldwide speech, and the UK’s Investigatory Powers Act, which can require companies to weaken their encryption measures to enable UK law enforcement to access data stored by users.”

Wikipedia is meanwhile fighting a court battle against a UK Online Safety Act provision that could force it to verify the identity of Wikipedia users. The Wikimedia Foundation said the potential requirement would be burdensome to users and “could expose users to data breaches, stalking, vexatious lawsuits or even imprisonment by authoritarian regimes.”

Separately, the Trump administration said this week that the UK dropped its demand that Apple create a backdoor for government security officials to access encrypted data. The UK made the demand under its Investigatory Powers Act.



For some people, music doesn’t connect with any of the brain’s reward circuits

“I was talking with my colleagues at a conference 10 years ago and I just casually said that everyone loves music,” recalls Josep Marco Pallarés, a neuroscientist at the University of Barcelona. But it was a statement he started to question almost immediately, given there were clinical cases in psychiatry where patients reported deriving absolutely no pleasure from listening to any kind of tunes.

So, Pallarés and his team spent the past 10 years researching the neural mechanisms behind a condition they called specific musical anhedonia: the inability to enjoy music.

The wiring behind joy

When we like something, it is usually a joint effect of circuits in our brain responsible for perception—be it perception of taste, touch, or sound—and reward circuits that give us a shot of dopamine in response to nice things we experience. For a long time, scientists attributed a lack of pleasure from things most people find enjoyable to malfunctions in one or more of those circuits.

You can’t enjoy music when the parts of the brain that process auditory stimuli don’t work properly, since you can’t hear it in the way that you would if the system were intact. You also can’t enjoy music when the reward circuit refuses to release that dopamine, even if you can hear it loud and clear. Pallarés, though, thought this traditional idea lacked a bit of explanatory power.

“When your reward circuit doesn’t work, you don’t experience enjoyment from anything, not just music,” Pallarés says. “But some people have no hearing impairments and can enjoy everything else—winning money, for example. The only thing they can’t enjoy is music.”



Deeply divided Supreme Court lets NIH grant terminations continue

The dissents

The primary dissent was written by Chief Justice Roberts, and joined in part by the three Democratic appointees, Jackson, Kagan, and Sotomayor. It is a grand total of one paragraph and can be distilled down to a single sentence: “If the District Court had jurisdiction to vacate the directives, it also had jurisdiction to vacate the ‘Resulting Grant Terminations.’”

Jackson, however, chose to write a separate and far more detailed argument against the decision, mostly focusing on the fact that it’s not simply a matter of abstract law; it has real-world consequences.

She notes that existing law prevents plaintiffs from suing in the Court of Federal Claims while the facts are under dispute in other courts (something acknowledged by Barrett). That would mean that, as here, any plaintiffs would have to have the policy declared illegal first in the District Court, and only after that was fully resolved could they turn to the Federal Claims Court to try to restore their grants. That’s a process that could take years. In the meantime, the scientists would be out of funding, with dire consequences.

Yearslong studies will lose validity. Animal subjects will be euthanized. Life-saving medication trials will be abandoned. Countless researchers will lose their jobs. And community health clinics will close.

Jackson also had little interest in hearing that the government would be harmed by paying out the grants in the meantime. “For the Government, the incremental expenditure of money is at stake,” she wrote. “For the plaintiffs and the public, scientific progress itself hangs in the balance along with the lives that progress saves.”

With this decision, of course, it no longer hangs in the balance. There’s a possibility that the District Court’s ruling that the government’s policy was arbitrary and capricious will ultimately prevail; it’s not clear, because Barrett says she hasn’t even seen the government make arguments there, and Roberts only wrote regarding the venue issues. In the meantime, even with the policy stayed, it’s unlikely that anyone will focus grant proposals on the disfavored subjects, given that the policy might be reinstated at any moment.

And even if that ruling is upheld, it will likely take years to get there, and only then could a separate case be started to restore the funding. Any labs that had been using those grants will have long since moved on, and the people working on those projects scattered.



Is the AI bubble about to pop? Sam Altman is prepared either way.

Still, the coincidence of Altman’s statement and the MIT report reportedly spooked tech stock investors earlier in the week. Investors have been watching AI valuations climb to extraordinary heights: Palantir trades at 280 times forward earnings, while during the dot-com peak, ratios of 30 to 40 times earnings marked bubble territory.

The apparent contradiction in Altman’s overall message is notable. This isn’t how you’d expect a tech executive to talk when they believe their industry faces imminent collapse. While warning about a bubble, he’s simultaneously seeking a valuation that would make OpenAI worth more than Walmart or ExxonMobil—companies with actual profits. OpenAI hit $1 billion in monthly revenue in July but is reportedly heading toward a $5 billion annual loss. So what’s going on here?

Looking at Altman’s statements over time reveals a potential multi-level strategy. He likes to talk big. In February 2024, he reportedly sought an audacious $5 trillion–7 trillion for AI chip fabrication—larger than the entire semiconductor industry—effectively normalizing astronomical numbers in AI discussions.

By August 2025, while warning of a bubble where someone will lose a “phenomenal amount of money,” he casually mentioned that OpenAI would “spend trillions on datacenter construction” and serve “billions daily.” This creates urgency while potentially insulating OpenAI from criticism—acknowledging the bubble exists while positioning his company’s infrastructure spending as different and necessary. When economists raised concerns, Altman dismissed them by saying, “Let us do our thing,” framing trillion-dollar investments as inevitable for human progress while making OpenAI’s $500 billion valuation seem almost small by comparison.

This dual messaging—catastrophic warnings paired with trillion-dollar ambitions—might seem contradictory, but it makes more sense when you consider the unique structure of today’s AI market, which is absolutely flush with cash.

A different kind of bubble

The current AI investment cycle differs from previous technology bubbles. Unlike dot-com era startups that burned through venture capital with no path to profitability, the largest AI investors—Microsoft, Google, Meta, and Amazon—generate hundreds of billions of dollars in annual profits from their core businesses.



Neolithic people took gruesome trophies from invading tribes

A local Neolithic community in northeastern France may have clashed with foreign invaders, cutting off limbs as war trophies and otherwise brutalizing their prisoners of war, according to a new paper published in the journal Science Advances. The findings challenge conventional interpretations of prehistoric violence as being indiscriminate or committed for pragmatic reasons.

Neolithic Europe was no stranger to collective violence of many forms, such as the odd execution and massacres of small communities, as well as armed conflicts. For instance, we recently reported on an analysis of human remains from 11 individuals recovered from El Mirador Cave in Spain, showing evidence of cannibalism—likely the result of a violent episode between competing Late Neolithic herding communities about 5,700 years ago. Microscopy analysis revealed telltale slice marks, scrape marks, and chop marks, as well as evidence of cremation, peeling, fractures, and human tooth marks.

This indicates the victims were skinned, the flesh removed, the bodies disarticulated, and then cooked and eaten. Isotope analysis indicated the individuals were local and were probably eaten over the course of just a few days. There have been similar Neolithic massacres in Germany and Spain, but the El Mirador remains provide evidence of a rare systematic consumption of victims.

Per the authors of this latest study, during the late Middle Neolithic, the Upper Rhine Valley was the likely site of both armed conflict and rapid cultural upheaval, as groups from the Paris Basin infiltrated the region between 4295 and 4165 BCE. In addition to fortifications and evidence of large aggregated settlements, many skeletal remains from this period show signs of violence.

Friends or foes?

Overhead views of late Middle Neolithic violence-related human mass deposits of the Alsace region, France

Overhead views of late Middle Neolithic violence-related human mass deposits in Pit 124 of the Alsace region, France. Credit: Philippe Lefranc, INRAP

Archaeologist Teresa Fernandez-Crespo of Spain’s Valladolid University and co-authors focused their analysis on human remains excavated from two circular pits at the Achenheim and Bergheim sites in Alsace in northeastern France. Fernandez-Crespo et al. examined the bones and found that many of the remains showed signs of unhealed trauma—such as skull fractures—as well as the use of excessive violence (overkill), not to mention quite a few severed left upper limbs. Other skeletons did not show signs of trauma and appeared to have been given a traditional burial.



At the top of the market, EV hypercars are a disappearing breed


Seven-figure EV hypercars are struggling to make an emotional connection with buyers.

Monterey Car Week is an annual celebration of automotive culture at the extremes: extreme performance, extreme rarity, and extreme value. Cars offering more than 1,000 hp (746 kW) are de rigueur, “unique” models are everywhere you look, and machines costing well into seven figures are entry-level.

A few years ago, many of the new cars debuting during Car Week focused on outright speed and performance above all else, relying on electric powertrains to deliver physics-defying acceleration and ballistic speed. Lately, there’s been a shift back toward the fundamentals of driver engagement, emotional design, and purity of feel.

Internal combustion is again at the fore. One of the main reasons is a renewed interest in what was old—so long as that old thing is actually new.

They’re called restomods, classic cars brought up to date with modern drivability but keeping the original feel. LA-based Singer Vehicle Design is the Porsche-based poster child for this movement, but San Marino-based Eccentrica earned plenty of attention in Monterey for its reimagining of one of the ultimate icons of the ’90s, the Lamborghini Diablo.

This is Eccentrica’s restomod of the Lamborghini Diablo. Tim Stevens

The company’s latest creation, Titano, promises “Raw ’90s soul meet[ing] purposeful modern craft.”

Maurizio Reggiani, former Lamborghini CTO and now advisor to Eccentrica, told me that feel is far more important than outright performance in this segment. “We want the people sitting in Eccentrica to really perceive the street, perceive the acceleration, perceive the braking, perceive the steering,” he said.

Commoditization

“The power to have 1,000 hp is easy. I don’t want to say it is a commodity, but more or less,” Reggiani continued.

Eccentrica’s Titano makes 550 hp (410 kW). The machine Bugatti unveiled, the new Brouillard, nearly tripled that number, offering 1,578 hp (1,177 kW) from an 8.3-liter W16 engine paired with a hybrid system. It’s a one-off, a completely bespoke design created at the request of one very lucky, very well-heeled buyer, part of the company’s new Programme Solitaire.

That’s an impressive figure, but Frank Heyl, Bugatti’s director of design, told me the real focus is on creating something timeless. Bugatti has been making cars for 101 years, and today’s astonishing power figures won’t matter in 2126. Instead, Heyl said to focus on the interior. “If you look at the Tourbillon instrument cluster, it’s a titanium housing with real sapphire glass. The bearings are made from ruby stones with aluminum needles,” he said. “People will have a fascination with that in 100 years’ time. I’m sure about that.”

This is the Bugatti Brouillard. Bugatti

For its part, modern Lamborghini seems much happier to focus on the best of the modern era, taking advantage of EV-derived technology paired with an internal combustion engine tasked with providing both power and adrenaline.

Lamborghini unveiled the Fenomeno, a “few-off” version of the Revuelto offered to just 29 buyers. Lamborghini’s current CTO, Rouven Mohr, told me this wasn’t just a reskinning. The company’s engineers redid the car’s tech stack, including its battery pack, adopting lessons learned from the latest EVs. “Completely new battery hardware. New cell chemistry, new cell type,” he said. “So we double the energy content in the same space.”

It’s similar to what’s in the Temerario, which features a hybrid system paired with a high-strung V8. “This huge effort that we did to have a 10,000-rpm engine is, at the end of the day, engineering overkill,” he said. “It’s a pure investment in the emotional side.”

Lamborghini designer Mitja Borkert said this kind of hybrid tech can actually make the cars more likeable. “Our cars are polarizing; they are creating reactions,” he said, admitting those reactions are sometimes negative. “But if you drive a Revuelto in electric mode, the people can enjoy the design better because it’s unexpected that this spaceship is coming around the corner.”

When it comes to exterior design, Karma is one brand that has always stood out. But its cars, extended-range EVs with onboard generators, have historically struggled to perfect the needed mix of emotionality and electrification. A fix is on the way, CEO Marques McCammon told me. The company’s Amaris coupe, coming next year for roughly $200,000, generates 708 hp (528 kW) from a pair of electric motors, plus a new onboard engine designed to thrill, not just recharge a battery.

“I’ve got side exhaust. It’s real. There’s no synthetic sound. When you hit the throttle, you’re gonna hear a blow-off valve on the turbo, and you’re gonna hear exhaust coming out of the side pipes that we’ve tuned,” he said. “You can have it all.”

You need to hear it

For many, authentic sound is key to the experience. Eccentrica’s Reggiani told me that the synthesized noises emitted by cars like Hyundai’s Ioniq 5 N are not a solution. Reggiani said an EV can never provide a truly emotional experience with sound “because you need to do something fake.”

But Iliya and Nikita Bridan, who run Oilstainlab, might have devised a solution with their $1.8 million HF-11: a cooling fan for the electric motor run through a ducted exhaust.

That fan exhaust is being tuned and tweaked to create an evocative sound, a process that Nikita Bridan says is no less authentic than tuning the exhaust of a car with an internal combustion engine. Indeed, with many modern sports cars featuring digitally generated pops and crackles in Sport mode, the HF-11’s acoustic effect might be even more authentic.

That’s just part of what Bridan says should be a compelling package, even for anti-EV zealots. “What we’re promising is basically a 2,000 pound, six-speed manual EV with an exhaust. I think that’s interesting enough for people to maybe abandon combustion,” he said.

And the HF-11 has another trick up its sleeve: an air-cooled, flat-six engine (à la classic Porsches), which owners can swap in if they’re feeling old-school. It’s a unique solution to the challenges of shifting consumer demand. So far, about 30 percent of the buyers of the HF-11 are exclusively interested in the electric powertrain. Thirty percent want only internal combustion, while the rest want both.

The Czinger 21C doesn’t have a swappable powertrain, but it mixes electric and internal combustion to deliver outright performance. Very extreme performance, as it were, with the 1,250-hp (932-kW), $2 million (and up) hybrid hypercar taking an extended, 1,000-mile road trip on the way to Monterey, setting five separate track records along the way.

That car’s hallmark is the intricate 3D-printed structure beneath the skin, but despite the space-age tech, CEO Lukas Czinger told me that emotionality is key.

A green Czinger 21C

The Czinger 21C features tandem seating. Credit: Czinger

Buyer motivation

“Why would you buy a $3 million car? Well, you’re buying it because you appreciate the brand and the engineering level, and there’s new technology in it, right?” Czinger said. “But the product ultimately needs to be thrilling to drive.”

Czinger said the combination of a hybrid system and an 11,000-rpm twin-turbo V8 offers “the best of both worlds” and that an eventual 21C successor will “definitely have a combustion engine.”

For Automobili Pininfarina, an all-electric powertrain was not a concern for its first car, the $2.5-million, 1,900-hp (1,417-kW) Battista. That’s despite some initial skepticism that, CEO Paolo Dellachà said, evaporates as soon as a potential buyer gets behind the wheel.

But most didn’t need convincing. “All of our clients do have eight-cylinder, 12-cylinder, or even 16-cylinder engines,” he said. “This is just something additional to their collection. So it’s not one or the other to them. Eventually, it’s both.”

The Automobili Pininfarina Battista.

All-electric hypercars like the Battista are a hard sell in 2025. Credit: Automobili Pininfarina

Residuals matter

It’s easy to think that the buyers of these cars simply have bottomless discretionary funds, and many do. But unproven long-term value is a key reason why these battery-powered projectiles seem a little less common than they used to be.

“At the moment, no one has proven yet that the electric super sports car is holding the financial index,” Lamborghini CTO Mohr said. “And the people who are usually investing in this, buying this kind of car, usually they have the money because they are quite financially oriented. They don’t want to destroy their investment.”

In other words, it’s all fun and games until someone loses money. If electric hypercars can’t prove their value in the long run, they don’t have a chance.

This is something that Automobili Pininfarina CEO Dellachà is certainly watching, but he doesn’t seem concerned. “It’s very difficult to say right now because none of our clients yet have sold their car,” he said. “And this is something that, by the way, makes us very proud, because they love the car, they love driving it, or they love keeping it in their collection.”

That said, he’s not yet committing to an EV drivetrain for an eventual Battista successor. “Maybe next time we might combine electrification with a combustion engine. We will see. It will be an interesting time to come.”
