

Man’s ghastly festering ulcer stumps doctors—until they cut out a wedge of flesh


The man made a full recovery, but this tale is not for the faint of heart.

If you were looking for some motivation to follow your doctor’s advice or remember to take your medicine, look no further than this grisly tale.

A 64-year-old man went to the emergency department of Brigham and Women’s Hospital in Boston with a painful festering ulcer spreading on his left, very swollen ankle. It was a gruesome sight; the open sore was about 8 by 5 centimeters (about 3 by 2 inches) and was rimmed by black, ashen, and dark purple tissue. Inside, it oozed with streaks and fringes of yellow pus around pink and red inflamed flesh. It was 2 cm deep (nearly an inch). And it smelled.

The man told doctors it had all started two years prior, when dark, itchy lesions appeared in the area on his ankle—the doctors noted that there were multiple patches of these lesions on both his legs. But about five months before his visit to the emergency department, one of the lesions on his left ankle had progressed to an ulcer. It was circular, red, tender, and deep. He sought treatment and was prescribed antibiotics, which he took. But they didn’t help.

You can view pictures of the ulcer and its progression here, but be warned, it is graphic. (Panel A shows the ulcer five months prior to the emergency department visit. Panel B shows the ulcer one month prior. Panel C shows the wound on the day of presentation at the emergency department. Panel D shows the area three months after hospital discharge.)

Gory riddle

The ulcer grew. In fact, it seemed as though his leg was caving in as the flesh around it began rotting away. A month before the emergency room visit, the ulcer was a gaping wound that was already turning gray and black at the edges. It was now well into the category of being a chronic ulcer.

In a Clinical Problem-Solving article published in the New England Journal of Medicine this week, doctors laid out what they did and thought as they worked to figure out what was causing the man’s horrid sore.

With the realm of possibilities large, they started with the man’s medical history. The man had immigrated to the US from Korea 20 years ago. He owned and worked at a laundromat, which involved standing for more than eight hours a day. He had a history of eczema on his legs, high cholesterol, high blood pressure, and Type 2 diabetes. For these, he was prescribed a statin for his cholesterol, two blood pressure medications (hydrochlorothiazide and losartan), and metformin for his diabetes. He told doctors he was not good at taking the regimen of medicine.

His diabetes was considered “poorly controlled.” A month prior, he had a glycated hemoglobin (A1C or HbA1c) test—which indicates a person’s average blood sugar level over the past two to three months. His result was 11 percent, while the normal range is between 4.2 and 5.6 percent.

His blood pressure, meanwhile, was 215/100 mm Hg at the emergency department. For reference, a reading above 130 mm Hg systolic or 80 mm Hg diastolic is considered the first stage of high blood pressure. Over the past three years, the man’s blood pressure had systolic readings (top number, pressure as the heart beats) ranging from 160 to 230 mm Hg and diastolic readings (bottom number, pressure as the heart relaxes) ranging from 95 to 120 mm Hg.

Clinical clues

Given the patient’s poorly controlled diabetes, a diabetic ulcer was initially suspected. But the patient didn’t have any typical signs of diabetic neuropathy that are linked to ulcers. These would include numbness, unusual sensations, or weakness. His responses on a sensory exam were all normal. Diabetic ulcers also typically form on the foot, not the lower leg.

X-rays of the ankle showed swelling in the soft tissue but without some signs of infection. The doctors wondered if the man had osteomyelitis, an infection in the bone, which can be a complication in people with diabetic ulcers. The large size and duration of the ulcer matched with a bone infection, as well as some elevated inflammatory markers he had on his blood tests.

To investigate the possible bone infection further, they admitted the man to the hospital and ordered magnetic resonance imaging (MRI). But the MRI showed only a soft-tissue defect and normal bone, ruling out a bone infection. Another MRI was done with a contrast agent. That scan showed that the man’s large arteries were normal and that there were no large clots deep in his veins—a condition sometimes linked to prolonged standing, like the man’s long shifts at his laundromat.

As the doctors were still working to root out the cause, they had started him on a heavy-duty regimen of antibiotics. This was done with the assumption that on top of whatever caused the ulcer, there was now also a potentially aggressive secondary infection—one not knocked out by the previous round of antibiotics the man had been given.

With a bunch of diagnostic dead ends piling up, the doctors broadened their view of possibilities, newly considering cancers, rare inflammatory conditions, and less common conditions affecting small blood vessels (as the MRI had shown the larger vessels were normal). This led them to the possibility of a Martorell’s ulcer.

These ulcers, first described in 1945 by a Spanish doctor named Fernando Martorell, form when prolonged, uncontrolled high blood pressure causes the teeny arteries below the skin to stiffen and narrow, which blocks the blood supply, leading to tissue death and then ulcers. The ulcers in these cases tend to start as red blisters and evolve to frank ulcers. They are excruciatingly painful. And they tend to form on the lower legs, often over the Achilles’ tendon, though it’s unclear why this location is common.

What the doctor ordered

The doctors performed a punch biopsy of the man’s ulcer, but it was inconclusive—which is common with Martorell’s ulcers. The doctors turned to a “deep wedge biopsy” instead, which is exactly what it sounds like.

A pathology exam of the tissue slices from the wedge biopsy showed blood vessels that had thickened and narrowed. It also revealed extensive inflammation and necrosis. With the pathology results as well as the clinical presentation, the doctors diagnosed the man with a Martorell’s ulcer.

They also got back culture results from deep-tissue testing, finding that the man’s ulcer had also become infected with two common and opportunistic bacteria—Serratia marcescens and Enterococcus faecalis. Luckily, these are generally easy to treat, so the doctors scaled back his antibiotic regimen to target just those germs.

The man underwent three surgical procedures to clean out the dead tissue from the ulcer, then a skin graft to repair the damage. Ultimately, he made a full recovery. The doctors at first set him on an aggressive regimen to control his blood pressure, one that used four drugs instead of the two he was supposed to be taking. But the four-drug regimen caused his blood pressure to drop too low, and he was ultimately moved back to his original two-drug treatment.

The finding suggests that if he had just taken his original medications as prescribed, he would have kept his blood pressure in check and avoided the ulcer altogether.

In the end, “the good outcome in this patient with a Martorell’s ulcer underscores the importance of blood-pressure control in the management of this condition,” the doctors concluded.


Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.



Tesla Q2 2025 sales dropped more than 13% year over year

Tesla sold 384,122 electric vehicles during the months of April, May, and June of this year. That’s a double-digit decline compared to the same three months of last year—itself no peach of a quarter for a car company with a stratospheric valuation based on the supposition of eternal sales growth.

The automaker faces a number of problems that are getting in the way of that perpetual growth. In some regions, CEO Elon Musk’s right-wing politics have driven away customers in droves. Another issue is the company’s small, infrequently updated model lineup, which is a problem even in parts of the world that care little about US politics.

Most Tesla sales are of the Model 3 midsize electric sedan and the Model Y, its electric crossover. For Q2 2025, Tesla sold 373,728 of the Models 3 and Y across North America, Europe, China, and its other markets. But that’s an 11.5 percent decrease compared to the 422,405 Models 3 and Y that Tesla sold in Q2 2024, a quarter that itself saw a year-on-year decline.
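As a quick sanity check on the figures quoted above (this calculation is ours, not the article’s; the helper name is invented for illustration), the Model 3/Y decline can be reproduced in a few lines of Python:

```python
# Verify the year-over-year decline percentage from the delivery figures
# quoted in the article. Helper name and structure are our own.

def yoy_decline_pct(current: int, prior: int) -> float:
    """Percent decline from the prior period to the current one."""
    return (prior - current) / prior * 100

# Model 3 and Model Y deliveries: Q2 2025 vs. Q2 2024
model_3_y = yoy_decline_pct(373_728, 422_405)
print(f"Model 3/Y decline: {model_3_y:.1f}%")  # → 11.5%
```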

The rest of Tesla’s sales are a mix of the increasingly elderly Model S sedan and Model X SUV, as well as the US-only Cybertruck. Here, the decline is far more severe—with just 10,394 sold, that’s a 22.5 percent decrease on Q2 2024. Tesla does not break these numbers out with more granularity, so it’s unclear just how few Cybertrucks were among that, but it does bring to mind Musk’s claims that Tesla would sell between 250,000 and 500,000 Cybertrucks a year.



RFK Jr.’s health department calls Nature “junk science,” cancels subscriptions

The move comes after HHS Secretary and anti-vaccine activist Robert F. Kennedy Jr. said on a May 27 podcast that prestigious medical journals are “corrupt.”

“We’re probably going to stop publishing in the Lancet, New England Journal of Medicine, JAMA, and those other journals because they’re all corrupt,” he said. He accused the journals collectively of being a “vessel for pharmaceutical propaganda.” He went on to say that “unless these journals change dramatically,” the federal government would “stop NIH scientists from publishing there” and create “in-house” journals instead.

Kennedy’s criticism largely stems from his belief that modern medicine and mainstream science are part of a global conspiracy to generate pharmaceutical profits. Kennedy is a germ-theory denier who believes people can maintain their health not by relying on evidence-based medicine, such as vaccines, but by clean living and eating—a loose concept called “terrain theory.”

Access to top scientific and medical journals is essential for federal scientists to keep up to date with their fields and publicize high-impact results. One NIH employee told Nature’s news team that the cancellation “suppresses our scientific freedom, to pursue information where it is present.”



Pentagon may put SpaceX at the center of a sensor-to-shooter targeting network


Under this plan, SpaceX’s satellites would play a big role in the Space Force’s kill chain.

The Trump administration plans to cancel a fleet of orbiting data relay satellites managed by the Space Development Agency and replace it with a secretive network that, so far, relies primarily on SpaceX’s Starlink Internet constellation, according to budget documents.

The move prompted questions from lawmakers during a Senate hearing on the Space Force’s budget last week. While details of the Pentagon’s plan remain secret, the White House proposal would commit $277 million in funding to kick off a new program called “pLEO SATCOM” or “MILNET.”

The funding line for a proliferated low-Earth orbit satellite communications network hasn’t appeared in a Pentagon budget before, but plans for MILNET already exist in a different form. Meanwhile, the budget proposal for fiscal year 2026 would eliminate funding for a new tranche of data relay satellites from the Space Development Agency. The pLEO SATCOM or MILNET program would replace them, providing crucial support for the Trump administration’s proposed Golden Dome missile defense shield.

“We have to look at what are the other avenues to deliver potentially a commercial proliferated low-Earth orbit constellation,” Gen. Chance Saltzman, chief of space operations, told senators last week. “So, we are simply looking at alternatives as we look to the future as to what’s the best way to scale this up to the larger requirements for data transport.”

What will these satellites do?

For six years, the Space Development Agency’s core mission has been to provide the military with a more resilient, more capable network of missile tracking and data relay platforms in low-Earth orbit. Those would augment the Pentagon’s legacy fleet of large, billion-dollar missile warning satellites that are parked more than 20,000 miles away in geostationary orbit.

These satellites detect the heat plumes from missile launches—and also large explosions and wildfires—to provide an early warning of an attack. The US Space Force’s early warning satellites were critical in allowing interceptors to take out Iranian ballistic missiles launched toward Israel last month.

Experts say there are good reasons for the SDA’s efforts. One motivation was the realization over the last decade or so that a handful of expensive spacecraft make attractive targets for an anti-satellite attack. It’s harder for a potential military adversary to go after a fleet of hundreds of smaller satellites. And if they do take out a few of these lower-cost satellites, it’s easier to replace them with little impact on US military operations.

Missile-tracking satellites in low-Earth orbit, flying at altitudes of just a few hundred miles, are also closer to the objects they are designed to track, meaning their infrared sensors can detect and locate dimmer heat signatures from smaller projectiles, such as hypersonic missiles.

The military’s Space Development Agency is in the process of buying, building, and launching a network of hundreds of missile-tracking and communications satellites. Credit: Northrop Grumman

But tracking the missiles isn’t enough. The data must reach the ground in order to be useful. The SDA’s architecture includes a separate fleet of small communications satellites to relay data from the missile tracking network, and potentially surveillance spacecraft tracking other kinds of moving targets, to military forces on land, at sea, or in the air through a series of inter-satellite laser crosslinks.

The military refers to this data relay component as the transport layer. When it was established in the first Trump administration, the SDA set out to deploy tranches of tracking and data transport satellites. Each new tranche would come online every couple of years, allowing the Pentagon to tap into new technologies as fast as industry develops them.

The SDA launched 27 so-called “Tranche 0” satellites in 2023 to demonstrate the concept’s overall viability. The first batch of more than 150 operational SDA satellites, called Tranche 1, is due to begin launching later this year. The SDA plans to begin deploying more than 250 Tranche 2 satellites in 2027. Another set of satellites, Tranche 3, would have followed a couple of years later. Now, the Pentagon seeks to cancel the Tranche 3 transport layer, while retaining the Tranche 3 tracking layer under the umbrella of the Space Development Agency.

Out of the shadows

While SpaceX’s role isn’t mentioned explicitly in the Pentagon’s budget documents, the MILNET program is already on the books, and SpaceX is the lead contractor. It has been made public in recent months, after years of secrecy, although many details remain unclear. Managed in a partnership between the Space Force and the National Reconnaissance Office (NRO), MILNET is designed to use military-grade versions of Starlink Internet satellites to create a “hybrid mesh network” the military can rely on for a wide range of applications.

The military version of the Starlink platform is called Starshield. SpaceX has already launched nearly 200 Starshield satellites for the NRO, which uses them for intelligence, surveillance, and reconnaissance missions.

At an industry conference last month, the Space Force commander in charge of operating the military’s communications satellites revealed new information about MILNET, according to a report by Breaking Defense. The network uses SpaceX-made user terminals with additional encryption to connect with Starshield satellites in orbit.

Col. Jeff Weisler, commander of a Space Force unit called Delta 8, said MILNET will comprise some 480 satellites operated by SpaceX but overseen by a military mission director “who communicates to the contracted workforce to execute operations at the timing and tempo of warfighting.”

The Space Force has separate contracts with SpaceX to use the commercial Starlink service. MILNET’s dedicated constellation of more secure Starshield satellites is separate from Starlink, which now has more than 7,000 satellites in space.

“We are completely relooking at how we’re going to operate that constellation of capabilities for the joint force, which is going to be significant because we’ve never had a DoD hybrid mesh network at LEO,” Weisler said last month.

So, the Pentagon already relies on SpaceX’s communication services, not to mention the company’s position as the leading launch provider for Space Force and NRO satellites. With MILNET’s new role as a potential replacement for the Space Development Agency’s data relay network, SpaceX’s satellites would become a cog in combat operations.

Gen. Chance Saltzman, chief of Space Operations in the US Space Force, looks on before testifying before a House Defense Subcommittee on May 6, 2025. Credit: Brendan Smialowski/AFP via Getty Images

The data transport layer, whether it’s SDA’s architecture or a commercial solution like Starshield, will “underpin” the Pentagon’s planned Golden Dome missile defense system, Saltzman said.

But it’s not just missiles. Data relay satellites in low-Earth orbit will also have a part in the Space Force’s initiatives to develop space-based platforms to track moving targets on the ground and in the air. Eventually, all Space Force satellites could have the ability to plug into MILNET to send their data to the ground.

A spokesperson for the Department of the Air Force, which includes the Space Force, told Air & Space Forces Magazine that the pLEO, or MILNET, constellation “will provide global, integrated, and resilient capabilities across the combat power, global mission data transport, and satellite communications mission areas.”

That all adds up to a lot of bits and bytes, and the Space Force’s need for data backhaul is only going to increase, according to Col. Robert Davis, head of the Space Sensing Directorate at Space Systems Command.

He said the SDA’s satellites will use onboard edge processing to create two-dimensional missile track solutions. Eventually, the SDA’s satellites will be capable of 3D data fusion with enough fidelity to generate a full targeting solution that could be transmitted directly to a weapons system for it to take action without needing any additional data processing on the ground.

“I think the compute [capability] is there,” Davis said Tuesday at an event hosted by the Mitchell Institute, an aerospace-focused think tank in Washington, DC. “Now, it’s a comm[unication] problem and some other technical integration challenges. But how do I do that 3D fusion on orbit? If I do 3D fusion on orbit, what does that allow me to do? How do I get low-latency comms to the shooter or to a weapon itself that’s in flight? So you can imagine the possibilities there.”

The possibilities include exploiting automation, artificial intelligence, and machine learning to sense, target, and strike an enemy vehicle—a truck, tank, airplane, ship, or missile—nearly instantaneously.

“If I’m on the edge doing 3D fusion, I’m less dependent on the ground and I can get around the globe with my mesh network,” Davis said. “There’s inherent resilience in the overall architecture—not just the space architecture, but the overall architecture—if the ground segment or link segment comes under attack.”

Questioning the plan

Military officials haven’t disclosed the cost of MILNET, either in its current form or in the future architecture envisioned by the Trump administration. For context, SDA has awarded fixed-price contracts worth more than $5.6 billion for approximately 340 data relay satellites in Tranches 1 and 2.

That comes out to roughly $16 million per spacecraft, at least an order of magnitude more expensive than a Starlink satellite coming off of SpaceX’s assembly line. Starshield satellites, with their secure communications capability, are presumably somewhat more expensive than an off-the-shelf Starlink.
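That per-spacecraft figure falls straight out of the contract totals above; a minimal sketch (our own arithmetic, with invented variable names, using only numbers from the article):

```python
# Rough per-spacecraft cost implied by the SDA contract totals
# quoted in the article. Variable names are our own.

total_contract_value = 5.6e9  # dollars, fixed-price contracts for Tranches 1 and 2
satellite_count = 340         # approximate number of data relay satellites

per_satellite = total_contract_value / satellite_count
print(f"≈ ${per_satellite / 1e6:.1f} million per satellite")  # → ≈ $16.5 million
```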

Some former defense officials and lawmakers are uncomfortable with putting commercially operated satellites in the “kill chain,” the term military officials use for the process of identifying threats, making a targeting decision, and taking military action.

It isn’t clear yet whether SpaceX will operate the MILNET satellites in this new paradigm, but the company has a longstanding preference for doing so. SpaceX built a handful of tech demo satellites for the Space Development Agency a few years ago, but didn’t compete for subsequent SDA contracts. One reason for this, sources told Ars, is that the SDA operates its satellite constellation from government-run control centers.

Instead, the SDA chose L3Harris, Lockheed Martin, Northrop Grumman, Rocket Lab, Sierra Space, Terran Orbital, and York Space Systems to provide the next batches of missile tracking and data transport satellites. RTX, formerly known as Raytheon, withdrew from a contract after the company determined it couldn’t make money on the program.

The tracking satellites will carry different types of infrared sensors, some with wide fields of view to detect missile launches as they happen, and others with narrow-angle sensors to maintain custody of projectiles in flight. The data relay satellites will employ different frequencies and anti-jam waveforms to supply encrypted data to military forces on the ground.

This frame from a SpaceX video shows a stack of Starlink Internet satellites attached to the upper stage of a Falcon 9 rocket, moments after the launcher’s payload fairing is jettisoned. Credit: SpaceX

The Space Development Agency’s path hasn’t been free of problems. The companies the agency selected to build its spacecraft have faced delays, largely due to supply chain issues, and some government officials have worried the Army, Navy, Air Force, and Marine Corps aren’t ready to fully capitalize on the information streaming down from the SDA’s satellites.

The SDA hired SAIC, a government services firm, earlier this year with a $55 million deal to act as a program integrator with responsibility to bring together satellites from multiple contractors, keep them on schedule, and ensure they provide useful information once they’re in space.

SpaceX, on the other hand, is a vertically integrated company. It designs, builds, and launches its own Starlink and Starshield satellites. The only major components of SpaceX’s spy constellation for the NRO that the company doesn’t build in-house are the surveillance sensors, which come from Northrop Grumman.

Buying a service from SpaceX might save money and reduce the chances of further delays. But lawmakers argued there’s a risk in relying on a single company for something that could make or break real-time battlefield operations.

Sen. Chris Coons (D-Del.), ranking member of the Senate Appropriations Subcommittee on Defense, raised concerns that the Space Force is canceling a program with “robust competition and open standards” and replacing it with a network that is “sole-sourced to SpaceX.”

“This is a massive and important contract,” Coons said. “Doesn’t handing this to SpaceX make us dependent on their proprietary technology and avoid the very positive benefits of competition and open architecture?”

Later in the hearing, Sen. John Hoeven (R-N.D.) chimed in with his own warning about the Space Force’s dependence on contractors. Hoeven’s state is home to one of the SDA’s satellite control centers.

“We depend on the Air Force, the Space Force, the Department of Defense, and the other services, and we can’t be dependent on private enterprise when it comes to fighting a war, right? Would you agree with that?” Hoeven asked Saltzman.

“Absolutely, we can’t be dependent on it,” Saltzman replied.

Air Force Secretary Troy Meink said military officials haven’t settled on a procurement strategy. He didn’t mention SpaceX by name.

“As we go forward, MILNET, the term, should not be taken as just a system,” Meink said. “How we field that going forward into the future is something that’s still under consideration, and we will look at the acquisition of that.”

An Air Force spokesperson confirmed the requirements and architecture for MILNET are still in development, according to Air & Space Forces Magazine. The spokesperson added that the department is “investigating” how to scale MILNET into a “multi-vendor satellite communication architecture that avoids vendor lock.”

This doesn’t sound all that different from the SDA’s existing technical approach for data relay, but it shifts more responsibility to commercial companies. While there’s still a lot we don’t know, contractors with existing mega-constellations would appear to have an advantage in winning big bucks under the Pentagon’s new plan.

There are other commercial low-Earth orbit constellations coming online, such as Amazon’s Project Kuiper broadband network, that could play a part in MILNET. However, if the Space Force is looking for a turnkey commercial solution, Starlink and Starshield are the only options available today, putting SpaceX in a strong position for a massive windfall.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



GOP wants EV tax credit gone; it would be a disaster for Tesla

The Republican Party’s opposition to tax credits for electric vehicles has stepped up a notch. As its members in the US Senate add their input to the budget bill that came from their colleagues in the House of Representatives, among the changes they want to see is a faster eradication of the IRS clean vehicle tax credit. The tax credit provides up to $7,500 off the price of an EV as long as certain conditions are met, and the language from the House would have given it until the end of the year. Now, it might be gone by the end of September.

The looming passage of the bill appears to have reopened the rift between Tesla CEO Elon Musk and the Republican Party, which the billionaire funded to the tune of hundreds of millions of dollars in the last election. After a brief war of words earlier this month that was quickly smoothed over when Musk apologized to President Trump, it seems there’s the potential for strife again.

Yesterday, Musk once again took to his social media platform to denounce the budget bill, threatening to form a third political party should it pass and reposting content critical of the GOP spending plan.

The changes to the budget would be quite deleterious for Tesla. Although its sales have collapsed in Europe and are flagging in China, the US has remained something of a bulwark in terms of EV sales. Most of the EVs that Tesla offers for sale in the US are eligible for the $7,500 credit, which can be applied to the car’s price immediately at purchase, as long as the buyer meets the income cap. That means all these cars will become significantly more expensive on October 1, should the bill pass.



Research roundup: 6 cool science stories we almost missed


Final Muon g-2 results, an ultrasonic mobile brain imaging helmet, re-creating Egyptian blue, and more.

The “world’s smallest violin” created by Loughborough University physicists. Credit: Loughborough University

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. June’s list includes the final results from the Muon g-2 experiment, re-creating the recipe for Egyptian blue, embedding coded messages in ice bubbles, and why cats seem to have a marked preference for sleeping on their left sides.

Re-creating Egyptian blues

Close-up image of an ancient wooden Egyptian falcon. Researchers have found a way to reproduce the blue pigment visible on the artifact. Credit: Matt Unger, Carnegie Museum of Natural History

Artists in ancient Egypt were particularly fond of the color known as Egyptian blue—deemed the world’s oldest synthetic pigment—since it was a cheap substitute for pricier materials like lapis lazuli or turquoise. But archaeologists have puzzled over exactly how it was made, particularly given the wide range of hues, from deep blue to gray or green. That knowledge had long been forgotten. However, scientists at Washington State University have finally succeeded in recreating the recipe, according to a paper published in the journal npj Heritage Science.

The interdisciplinary team came up with 12 different potential recipes using varying percentages of silicon dioxide, copper, calcium, and sodium carbonate. They heated the samples to 1,000° Celsius (about what ancient artists could have achieved), varying the time between one and 11 hours. They also cooled the samples at different rates. Then they analyzed the samples using microscopy and other modern techniques and compared them to the Egyptian blue on actual Egyptian artifacts to find the best match.

Their samples are now on display at the Carnegie Museum of Natural History in Pittsburgh. Apart from its historical interest, Egyptian blue also has fascinating optical, magnetic, and biological properties that could prove useful in practical applications today, per the authors. For instance, it might be used for counterfeit-proof inks, since it emits light in the near-infrared, and its chemistry is similar to high-temperature superconductors.

npj Heritage Science, 2025. DOI: 10.1038/s40494-025-01699-7  (About DOIs).

World’s smallest violin

It’s an old joke, possibly dating back to the 1970s. Whenever someone is complaining about an issue that seems trivial in the grand scheme of things, it’s tradition to rub one’s thumb and forefinger together and declare, “This is the world’s smallest violin, playing just for you.” (In my snarky circles we used to say the violin was “playing ‘My Heart Bleeds for You.'”) Physicists at Loughborough University have now made what they claim really is the world’s smallest violin, just 35 microns long and 13 microns wide.

There are various lithographic methods for creating patterned electronic devices, such as photolithography, which can be used either with a mask or without. The authors relied on scanning probe thermal lithography instead, specifically a cutting-edge nano-sculpting machine they dubbed the NanoFrazor. The first step was to coat a small chip with two layers of a gel material and then place it under the NanoFrazor. The instrument’s heated tip burned the violin pattern into the gel. Then they “developed” the gel by dissolving the underlayer so that only a violin-shaped cavity remained.

Next, they poured on a thin layer of platinum and rinsed off the chip with acetone. The resulting violin is a microscopic image rather than a playable tiny instrument—you can’t even see it without a microscope—but it’s still an impressive achievement that demonstrates the capabilities of the lab’s new nano lithography system. And the whole process can take as little as three hours.

Muon g-2 anomaly no more?

overhead view of the Muon g-2 experiment at Fermilab

Overhead view of the Muon g-2 experiment at Fermilab. Credit: Fermilab

The Muon g-2 experiment (pronounced “gee minus two”) is designed to look for tantalizing hints of physics beyond the Standard Model of particle physics. It does this by measuring the magnetic moment (the strength of the particle’s internal magnet) of a subatomic particle known as the muon. Back in 2001, an earlier run of the experiment at Brookhaven National Laboratory found a slight discrepancy, hinting at possible new physics, but that controversial result fell short of the critical threshold required to claim discovery.

Physicists have been making new measurements ever since in hopes of resolving this anomaly. For instance, in 2021, we reported on data from the updated Muon g-2 experiment that showed “excellent agreement” with the discrepancy Brookhaven recorded. They improved on their measurement precision in 2023. And now it seems the anomaly is very close to being resolved, according to a preprint posted to the physics arXiv based on analysis of a data set triple the size of the one used for the 2023 analysis. (You can watch a video explanation here.)

The final Muon g-2 result is in agreement with the 2021 and 2023 results, but much more precise, with error bars four times smaller than those of the original Brookhaven experiment. Combine that with new predictions by the related Muon g-2 Theory Initiative using a new means of calculating the muon’s magnetic moment, and the discrepancy between theoretical prediction and experiment narrows even further.

While some have declared victory now that the Muon g-2 experiment is complete, theorists are still sounding a note of caution as they seek to further refine their models. Meanwhile, Fermilab is building a new experiment designed to hunt for muon-to-electron conversions. If they find any, that would definitely constitute new physics beyond the Standard Model.

arXiv, 2025. DOI: 10.48550/arXiv.2506.03069 (About DOIs).

Message in a bubble

Physicists have embedded Morse code messages in ice bubbles.

Physicists have embedded Morse code messages in ice bubbles. Credit: Keke Shao et al., 2025

Forget sending messages in a bottle. Scientists have figured out how to encode messages in both binary and Morse code in air bubbles trapped in ice, according to a paper published in the journal Cell Reports Physical Science. Trapped air bubbles are usually shaped like eggs or needles, and the authors discovered that they could manipulate the sizes, shapes, and distribution of those ice bubbles by varying the freezing rate. (Faster rates produce egg-shaped bubbles, slower rates produce needle-shaped ones, for example.)

To encode messages, the researchers assigned different bubble sizes, shapes, and orientations to Morse code and binary characters and used their freezing method to produce ice bubbles representing the desired characters. Next, they took a photograph of the ice layer and converted it to gray scale, training a computer to identify the position and the size of the bubbles and decode the message into English letters and Arabic numerals. The team found that binary coding could store messages 10 times longer than Morse code.
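The paper’s actual character-to-bubble calibration isn’t reproduced here, but the general scheme (assign each Morse symbol a bubble geometry produced by a particular freezing rate) can be sketched as follows. The shape and rate labels below are illustrative assumptions, not the authors’ parameters:

```python
# Illustrative sketch: encode text as an ordered sequence of bubble
# "geometries" by mapping Morse symbols to bubble shapes. The mapping
# is hypothetical; the paper's real calibration is not reproduced here.

MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "S": "...", "O": "---",
}

# Hypothetical mapping: a dot becomes a small egg-shaped bubble (fast
# freeze), a dash becomes a long needle-shaped bubble (slow freeze).
SYMBOL_TO_BUBBLE = {
    ".": ("egg", "fast_freeze"),
    "-": ("needle", "slow_freeze"),
}

def encode(message: str) -> list[tuple[str, str]]:
    """Turn a message into an ordered list of bubble specifications."""
    bubbles = []
    for char in message.upper():
        for symbol in MORSE[char]:
            bubbles.append(SYMBOL_TO_BUBBLE[symbol])
    return bubbles

print(encode("SOS"))
# nine bubbles: three eggs, three needles, three eggs
```

A real scheme would also need letter separators (a third bubble geometry, say) so the decoder can recover character boundaries from the grayscale photo.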

Someday, this freezing method could be used for short message storage in Antarctica and similar very cold regions where traditional information storage methods are difficult and/or too costly, per the authors. However, Qiang Tang of the University of Australia, who was not involved in the research, told New Scientist that he did not see much practical application for the breakthrough in cryptography or security, “unless a polar bear may want to tell someone something.”

Cell Reports Physical Science, 2025. DOI: 10.1016/j.xcrp.2025.102622 (About DOIs).

Cats prefer to sleep on left side

sleepy tuxedo cat blissfully stretched out on a blue rug

Caliban marches to his own drum and prefers to nap on his right side. Credit: Sean Carroll

The Internet was made for cats, especially YouTube, which features millions of videos of varying quality, documenting the crazy antics of our furry feline friends. Those videos can also serve the interests of science, as evidenced by the international team of researchers who analyzed 408 publicly available videos of sleeping cats to study whether the kitties showed any preference for sleeping on their right or left sides. According to a paper published in the journal Current Biology, two-thirds of those videos showed cats sleeping on their left sides.

Why should this behavioral asymmetry be the case? There are likely various reasons, but the authors hypothesize that it has something to do with kitty perception and their vulnerability to predators while asleep (usually between 12 and 16 hours a day). The right hemisphere of the brain dominates in spatial attention, while the right amygdala is dominant for processing threats. That’s why most species react more quickly when a predator approaches from the left. Because a cat’s left visual field is processed in the dominant right hemisphere of their brains, “sleeping on the left side can therefore be a survival strategy,” the authors concluded.
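As a back-of-the-envelope check (our arithmetic, not a calculation from the paper), a two-thirds split in 408 videos sits far outside what a 50/50 chance preference would produce:

```python
import math

n = 408                  # videos analyzed in the study
left = round(n * 2 / 3)  # roughly two-thirds showed left-side sleeping
p_chance = 0.5           # null hypothesis: no side preference

# Normal approximation to the binomial: z-score of the observed count
mean = n * p_chance
sd = math.sqrt(n * p_chance * (1 - p_chance))
z = (left - mean) / sd
print(round(z, 1))  # about 6.7 standard deviations from a 50/50 split
```

A deviation that large would essentially never arise by chance, which is why the authors treat the asymmetry as a real behavioral effect rather than sampling noise.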

Current Biology, 2025. DOI: 10.1016/j.cub.2025.04.043 (About DOIs).

A mobile ultrasonic brain imaging helmet

A personalized 3D-printed helmet for mobile functional ultrasound brain imaging.

A personalized 3D-printed helmet for mobile functional ultrasound brain imaging. Credit: Sadaf Soloukey et al., 2025

Brain imaging is a powerful tool for both medical diagnosis and neuroscience research, from noninvasive methods like EEGs, MRI, fMRI, and diffuse optical tomography, to more invasive techniques like intracranial EEG. But the dream is to be able to capture the human brain functioning in real-world scenarios instead of in the lab. Dutch scientists are one step closer to achieving that goal with a specially designed 3D-printed helmet that relies upon functional ultrasound imaging (fUSi) to enable high-quality 2D imaging, according to a paper published in the journal Science Advances.

Unlike fMRI, which requires subjects to remain stationary, the helmet monitors the brain as subjects walk and talk (accompanied by a custom mobile fUSi acquisition cart). The team recruited two 30-something male subjects who had undergone cranioplasty to receive a skull implant made of polyetheretherketone (PEEK). While wearing the helmet, the subjects were asked to perform stationary motor and sensory tasks: pouting or brushing their lips, for example. Then the subjects walked in a straight line for up to 30 meters, pushing the cart for about a minute while licking their lips to demonstrate multitasking. The sessions ran over a 20-month period, demonstrating that the helmet is suitable for long-term use. The next step is to improve the technology to enable mobile 3D imaging of the brain.

Science Advances, 2025. DOI: 10.1126/sciadv.adu9133 (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



A neural brain implant provides near instantaneous speech


Focusing on sound production instead of word choice makes for a flexible system.

The participant’s implant gets hooked up for testing. Credit: UC Regents

Stephen Hawking, a British physicist and arguably the most famous man suffering from amyotrophic lateral sclerosis (ALS), communicated with the world using a sensor installed in his glasses. That sensor used tiny movements of a single muscle in his cheek to select characters on a screen. Once he typed a full sentence at a rate of roughly one word per minute, the text was synthesized into speech by a DECtalk TC01 synthesizer, which gave him his iconic, robotic voice.

But a lot has changed since Hawking died in 2018. Recent brain-computer-interface (BCI) devices have made it possible to translate neural activity directly into text and even speech. Unfortunately, these systems suffered from significant latency, often limited the user to a predefined vocabulary, and did not handle nuances of spoken language like pitch or prosody. Now, a team of scientists at the University of California, Davis has built a neural prosthesis that can instantly translate brain signals into sounds—phonemes and words. It may be the first real step we have taken toward a fully digital vocal tract.

Text messaging

“Our main goal is creating a flexible speech neuroprosthesis that enables a patient with paralysis to speak as fluently as possible, managing their own cadence, and be more expressive by letting them modulate their intonation,” says Maitreyee Wairagkar, a neuroprosthetics researcher at UC Davis who led the study. Developing a prosthesis ticking all these boxes was an enormous challenge because it meant Wairagkar’s team had to solve nearly all the problems BCI-based communication solutions have faced in the past. And they had quite a lot of problems.

The first issue was moving beyond text. Most successful neural prostheses developed so far have translated brain signals into text: the words a patient with an implanted prosthesis wanted to say simply appeared on a screen. Francis R. Willett led a team at Stanford University that achieved brain-to-text translation with around a 25 percent error rate. “When a woman with ALS was trying to speak, they could decode the words. Three out of four words were correct. That was super exciting but not enough for daily communication,” says Sergey Stavisky, a neuroscientist at UC Davis and a senior author of the study.

Delays and dictionaries

One year after the Stanford work, in 2024, Stavisky’s team published its own research on a brain-to-text system that bumped the accuracy to 97.5 percent. “Almost every word was correct, but communicating over text can be limiting, right?” Stavisky said. “Sometimes you want to use your voice. It allows you to make interjections, it makes it less likely other people interrupt you—you can sing, you can use words that aren’t in the dictionary.” But the most common approach to generating speech relied on synthesizing it from text, which led straight into another problem with BCI systems: very high latency.

In nearly all BCI speech aids, sentences appeared on a screen after a significant delay, long after the patient finished stringing the words together in their mind. The speech synthesis part usually happened after the text was ready, which caused even more delay. Brain-to-text solutions also suffered from a limited vocabulary. The latest system of this kind supported a dictionary of roughly 1,300 words. When you tried to speak a different language, use more elaborate vocabulary, or even say the unusual name of a café just around the corner, the systems failed.

So, Wairagkar designed her prosthesis to translate brain signals into sounds, not words—and do it in real time.

Extracting sound

The patient who agreed to participate in Wairagkar’s study was codenamed T15 and was a 46-year-old man suffering from ALS. “He is severely paralyzed and when he tries to speak, he is very difficult to understand. I’ve known him for several years, and when he speaks, I understand maybe 5 percent of what he’s saying,” says David M. Brandman, a neurosurgeon and co-author of the study. Before working with the UC Davis team, T15 communicated using a gyroscopic head mouse to control a cursor on a computer screen.

To use an early version of Stavisky’s brain-to-text system, the patient had 256 microelectrodes implanted into his ventral precentral gyrus, an area of the brain responsible for controlling vocal tract muscles.

For the new brain-to-speech system, Wairagkar and her colleagues relied on the same 256 electrodes. “We recorded neural activities from single neurons, which is the highest resolution of information we can get from our brain,” Wairagkar says. The signal registered by the electrodes was then sent to an AI algorithm called a neural decoder that deciphered those signals and extracted speech features such as pitch or voicing. In the next step, these features were fed into a vocoder, a speech synthesizing algorithm designed to sound like the voice that T15 had when he was still able to speak normally. The entire system worked with latency down to around 10 milliseconds—the conversion of brain signals into sounds was effectively instantaneous.
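The staged design described above (neural signal chunk into a decoder that extracts speech features, features into a vocoder that produces audio) can be summarized schematically. This is a toy illustration only; the decoder and vocoder below are stand-in placeholder functions, not the UC Davis team’s trained models:

```python
# Toy sketch of a staged, streaming pipeline: neural chunk -> decoder
# (features such as pitch and voicing) -> vocoder (audio frame).
# Both stages are placeholders; the real system uses neural networks.

def neural_decoder(signal_chunk: list[float]) -> dict:
    """Stand-in: extract coarse 'speech features' from a neural chunk."""
    energy = sum(abs(s) for s in signal_chunk) / len(signal_chunk)
    return {"pitch": 100 + 50 * energy, "voiced": energy > 0.1}

def vocoder(features: dict) -> list[float]:
    """Stand-in: synthesize a short audio frame from the features."""
    if not features["voiced"]:
        return [0.0] * 4  # silence
    return [features["pitch"] / 200.0] * 4

def stream(chunks):
    """Process chunks one at a time, as a real-time system must."""
    audio = []
    for chunk in chunks:
        audio.extend(vocoder(neural_decoder(chunk)))
    return audio

out = stream([[0.2, 0.3], [0.0, 0.0]])
print(len(out))  # 8 samples: one 4-sample frame per chunk
```

The point of the structure is that audio is emitted per chunk as it arrives, rather than after a whole sentence is decoded, which is what keeps the end-to-end latency in the tens of milliseconds.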

Because Wairagkar’s neural prosthesis converted brain signals into sounds, it didn’t come with a limited selection of supported words. The patient could say anything he wanted, including pseudo-words that weren’t in a dictionary and interjections like “um,” “hmm,” or “uh.” Because the system was sensitive to features like pitch or prosody, he could also vocalize questions saying the last word in a sentence with a slightly higher pitch and even sing a short melody.

But Wairagkar’s prosthesis had its limits.

Intelligibility improvements

To test the prosthesis’s performance, Wairagkar’s team first asked human listeners to match a recording of some synthesized speech by the T15 patient with one transcript from a set of six candidate sentences of similar length. Here, the results were perfect: the system achieved 100 percent intelligibility.

The issues began when the team tried something a bit harder: an open transcription test where listeners had to work without any candidate transcripts. In this second test, the word error rate was 43.75 percent, meaning participants identified a bit more than half of the recorded words correctly. This was certainly an improvement compared to the intelligibility of T15’s unaided speech, where the word error rate in the same test with the same group of listeners was 96.43 percent. But the prosthesis, while promising, was not yet reliable enough to use for day-to-day communication.
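Word error rate aggregates substitutions, insertions, and deletions against the reference transcript, so 1 minus the WER is only a rough share of words recovered. Translating the reported figures (our arithmetic, not the paper’s):

```python
# Converting the reported word error rates (WER) into approximate
# "words heard correctly" figures. Because WER also counts insertions
# and deletions, 1 - WER is a rough estimate, not an exact count.

wer_prosthesis = 0.4375  # open transcription, synthesized speech
wer_unaided = 0.9643     # same test, patient's own unaided speech

print(f"{1 - wer_prosthesis:.2%}")  # about 56% of words recovered
print(f"{1 - wer_unaided:.2%}")     # under 4% of words recovered
```

That gap, roughly 56 percent versus under 4 percent, is why the authors describe the prosthesis as a large improvement over unaided speech even though it falls short of open-ended conversation.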

“We’re not at the point where it could be used in open-ended conversations. I think of this as a proof of concept,” Stavisky says. He suggested that one way to improve future designs would be to use more electrodes. “There are a lot of startups right now building BCIs that are going to have over a thousand electrodes. If you think about what we’ve achieved with just 250 electrodes versus what could be done with a thousand or two thousand—I think it would just work,” he argued. And the work to make that happen is already underway.

Paradromics, a BCI-focused startup based in Austin, Texas, wants to go ahead with clinical trials of a speech neural prosthesis and is already seeking FDA approval. “They have a 1,600 electrode system, and they publicly stated they are going to do speech,” Stavisky says. “David Brandman, our co-author, is going to be the lead principal investigator for these trials, and we’re going to do it here at UC Davis.”

Nature, 2025. DOI: 10.1038/s41586-025-09127-3


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Supreme Court upholds Texas porn law that caused Pornhub to leave the state

Justice Elena Kagan filed a dissenting opinion that was joined by Sonia Sotomayor and Ketanji Brown Jackson. Kagan said that in similar cases, the court applied strict scrutiny, “a highly rigorous but not fatal form of constitutional review, to laws regulating protected speech based on its content.”

“Texas’s law defines speech by content and tells people entitled to view that speech that they must incur a cost to do so,” Kagan wrote. “That is, under our First Amendment law, a direct (not incidental) regulation of speech based on its content—which demands strict scrutiny.”

The Texas law applies to websites in which more than one-third of the content “is sexual material harmful to minors.” Kagan described the law’s ID requirement as a deterrent to exercising one’s First Amendment rights.

“It is turning over information about yourself and your viewing habits—respecting speech many find repulsive—to a website operator, and then to… who knows? The operator might sell the information; the operator might be hacked or subpoenaed,” Kagan’s dissent said. The law requires website users to verify their ages by submitting “a ‘government-issued identification’ like a driver’s license or ‘transactional data’ associated with things like a job or mortgage,” Kagan wrote.

Limiting no more speech than necessary

Under strict scrutiny, the court must ask whether the law is “the least restrictive means of achieving a compelling state interest,” Kagan wrote. A state facing that standard must show it has limited no more adult speech than is necessary to achieve its goal.

“Texas can of course take measures to prevent minors from viewing obscene-for-children speech. But if a scheme other than H. B. 1181 can just as well accomplish that objective and better protect adults’ First Amendment freedoms, then Texas should have to adopt it (or at least demonstrate some good reason not to),” Kagan wrote.

The majority decision said that applying strict scrutiny “would call into question all age-verification requirements, even longstanding in-person requirements.” It also said the previous rulings cited in the dissent “all involved laws that banned both minors and adults from accessing speech that was at most obscene only to minors. The Court has never before considered whether the more modest burden of an age-verification requirement triggers strict scrutiny.”



Stung by customer losses, Comcast says all its new plans have unlimited data

The five-year guarantee would be a better deal in the long run because of the price increase once the promotional deal expires. Comcast’s “everyday prices” for these plans range from $70 to $130 a month. Comcast said the one- and five-year guarantees are “available with no contracts” and that “all plans include a line of Xfinity Mobile at no additional cost for a year.”

Comcast exec: “We are not winning”

The Comcast data caps and their associated overage fees for exceeding the monthly limit have long been a major frustration for customers. Comcast has enforced the cap (currently 1.2TB a month) in most of its territory, but not in its Northeast markets where it faces competition from Verizon FiOS.

Comcast recently started offering five-year price guarantees and said it would continue adding more customer-friendly plans because of its recent struggles. After reporting a net loss of 183,000 residential broadband customers in Q1 2025, Comcast President Mike Cavanagh said during an April earnings call that “in this intensely competitive environment, we are not winning in the marketplace in a way that is commensurate with the strength of [our] network and connectivity products.”

Cavanagh said Comcast executives “identified two primary causes. One is price transparency and predictability and the other is the level of ease of doing business with us.” He said Comcast planned to simplify “our pricing construct to make our price-to-value proposition clearer to consumers across all broadband segments” and to make these changes “with the highest urgency.”

Even after the recent customer loss, Comcast had 29.19 million residential Internet customers.



Apple gives EU users App Store options in attempt to avoid massive fines

Apple is changing its App Store policies in the EU in a last-minute attempt to avoid a series of escalating fines from Brussels.

The $3 trillion iPhone maker will allow developers in the bloc to offer apps designed for the iOS operating system in places other than Apple’s App Store, the company said.

Apple has been negotiating for two months with the European Commission after being fined €500 million for breaching the EU’s Digital Markets Act, the landmark legislation designed to curtail the power of Big Tech groups.

Throughout the process, Apple has accused the commission of moving the goalposts on what the company needs to do to comply with the EU’s digital rule book.

Apple announced the measures on Thursday, the deadline for the company to comply with the bloc’s rules in order to avoid new levies. The financial penalties can escalate over time and reach up to 5 percent of average daily worldwide revenue.

Still, an Apple spokesperson said that “the European Commission is requiring Apple to make a series of additional changes to the App Store. We disagree with this outcome and plan to appeal.”

In a reaction to the changes, a European Commission spokesperson said that “the commission will now assess these new business terms for DMA compliance.”

The spokesperson added that “the commission considers it particularly important to obtain the views of market operators and interested third parties before deciding on next steps.”

The decision on the new fines under the Digital Markets Act comes as Brussels and Washington near a July 9 deadline to agree on a trade deal.

The EU’s rules on Big Tech are a flashpoint between Brussels and US President Donald Trump. But commission leaders have indicated they would not change their rule book as a part of trade negotiations with the US.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Google’s spotty Find Hub network could get better thanks to a small setup tweak

Bluetooth trackers have existed for quite a while, but Apple made them worthwhile when it enlisted every iPhone to support AirTags. The tracking was so reliable that Apple had to add anti-stalking features. And although there are just as many Android phones out there, Google’s version of mobile device tracking, known as Find Hub, has been comparatively spotty. Now, Google is about to offer users a choice that could fix Bluetooth tracking on Android.

According to a report from Android Authority, Google is preparing to add a new screen to the Android setup process. This change, integrated with Play Services version 25.24, has yet to roll out widely, but it will allow anyone setting up an Android phone to choose a more effective method of tracking that will bolster Google’s network. The Play Services changelog describes it as: “You can now configure Find Hub when setting up your phone, allowing the device to be located remotely.”

Trackable devices like AirTags and earbuds work by broadcasting a Bluetooth LE identifier, which phones in the area can see. Our always-online smartphones then report the approximate location of that signal, and with enough reports, the owner can pinpoint the tag. Perhaps wary of the privacy implications, Google rolled out its Find Hub network (previously Find My Device) with harsh restrictions on where device finding would work.
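The crowd-sourced finding described above is, at its core, report aggregation: every passing phone that hears the tag contributes a sighting, and more sightings narrow the estimate. A minimal sketch (our illustration, not Google’s or Apple’s actual algorithm, which also weights by signal strength and recency):

```python
# Toy aggregation of crowd-sourced tag sightings: each nearby phone
# reports its own (lat, lon) when it hears the tag's Bluetooth LE
# identifier, and the position estimate is the centroid of reports.

def estimate_position(reports: list[tuple[float, float]]) -> tuple[float, float]:
    """Average the reported coordinates into a single estimate."""
    lat = sum(r[0] for r in reports) / len(reports)
    lon = sum(r[1] for r in reports) / len(reports)
    return (lat, lon)

sightings = [(47.60, -122.33), (47.61, -122.34), (47.62, -122.32)]
print(estimate_position(sightings))  # centroid of the three reports
```

This is also why Find Hub’s default “busy areas only” setting matters: with only one or two contributing phones, the centroid barely narrows anything down.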

By default, Find Hub only works in busy areas where multiple phones can contribute to narrowing down the location. That’s suboptimal if you actually want to find things. The setting to allow finding in all areas is buried several menus deep in the system settings where no one is going to see it. Currently, the settings for Find Hub are under the security menu of your phone, but the path may vary from one device to the next. For Pixels, it’s under Security > Device finders > Find Hub > Find your offline devices. Yeah, not exactly discoverable.



Gemini CLI is a free, open source coding agent that brings AI to your terminal

Some developers prefer to live in the command line interface (CLI), eschewing the flashy graphics and file management features of IDEs. Google’s latest AI tool is for those terminal lovers. It’s called Gemini CLI, and it shares a lot with Gemini Code Assist, but it works in your terminal environment instead of integrating with an IDE. And perhaps best of all, it’s free and open source.

Gemini CLI plugs into Gemini 2.5 Pro, Google’s most advanced model for coding and simulated reasoning. It can create and modify code for you right inside the terminal, but you can also call on other Google models to generate images or videos without leaving the security of your terminal cocoon. It’s essentially vibe coding from the command line.

This tool is fully open source, so developers can inspect the code and help to improve it. The openness extends to how you configure the AI agent. It supports Model Context Protocol (MCP) and bundled extensions, allowing you to customize your terminal as you see fit. You can even include your own system prompts—Gemini CLI relies on GEMINI.md files, which you can use to tweak the model for different tasks or teams.
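Since GEMINI.md is a plain Markdown context file, customizing it is just a matter of writing instructions into your project. A hypothetical example (the project details below are invented for illustration):

```markdown
# Project context for Gemini CLI

You are assisting on a TypeScript web service.

## Conventions
- Use strict TypeScript; avoid `any`.
- Write tests with the project's existing test framework.
- Keep commit messages in the imperative mood.
```

Placed at the root of a repository, a file like this shapes how the agent responds for that project without changing anything globally.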

Now that Gemini 2.5 Pro is generally available, Gemini Code Assist has been upgraded to use the same technology as Gemini CLI. Code Assist integrates with IDEs like VS Code for those times when you need a more feature-rich environment. The new agent mode in Code Assist allows you to give the AI more general instructions, like “Add support for dark mode to my application” or “Build my project and fix any errors.”
