Author name: Beth Washington


What is space war-fighting? The Space Force’s top general has some thoughts.


Controlling space means “employing kinetic and non-kinetic means to affect adversary capabilities.”

Members of the Space Force render a salute during a change of command ceremony July 2, 2024, as Col. Ramsey Horn took the helm of Space Delta 9, the unit that oversees orbital warfare operations at Schriever Space Force Base, Colorado. Credit: US Space Force / Dalton Prejeant

DENVER—The US Space Force lacks the full range of space weapons China and Russia are adding to their arsenals, and military leaders say it’s time to close the gap.

Gen. Chance Saltzman, the Space Force’s chief of space operations, told reporters at the Air & Space Forces Association Warfare Symposium last week that he wants to have more options to present to national leaders if an adversary threatens the US fleet of national security satellites used for surveillance, communication, navigation, missile warning, and perhaps soon, missile defense.

In prepared remarks, Saltzman outlined in new detail why the Space Force should be able to go on the offense in an era of orbital warfare. Later, in a roundtable meeting with reporters, he briefly touched on the how.

The Space Force’s top general has discussed the concept of “space superiority” before. This is analogous to air superiority—think of how US and allied air forces dominated the skies in wartime over the last 30 years in places like Iraq, the Balkans, and Afghanistan.

In order to achieve space superiority, US forces must first control the space domain by “employing kinetic and non-kinetic means to affect adversary capabilities through disruption, degradation, and even destruction, if necessary,” Saltzman said.

Kinetic? Imagine a missile or some other projectile smashing into an enemy satellite. Non-kinetic? This category involves jamming, cyberattacks, and directed-energy weapons, like lasers or microwave signals, that could disable spacecraft in orbit.

“It includes things like orbital warfare and electromagnetic warfare,” Saltzman said. These capabilities could be used offensively or defensively. In December, Ars reported on the military’s growing willingness to talk publicly about offensive space weapons, something US officials long considered taboo for fear of sparking a cosmic arms race.

Officials took this a step further at last week’s warfare symposium in Colorado. Saltzman said China and Russia, which military leaders consider America’s foremost strategic competitors, are moving ahead of the United States with technologies and techniques to attack satellites in orbit.

This new ocean

For the first time in more than a century, warfare is entering a new physical realm. By one popular measure, the era of air warfare began in 1911, when an Italian pilot threw bombs out of his airplane over Libya during the Italo-Turkish War. Some historians might trace airborne warfare to earlier conflicts, when reconnaissance balloons offered eagle-eyed views of battlefields and troop movements. Land and sea combat began in ancient times.

“None of us were alive when the other domains started being contested,” Saltzman said. “It was just natural. It was just a part of the way things work.”

Five years after becoming a new military service, the Space Force is still in the early stages of defining what orbital warfare actually means. First, military leaders had to stop treating space as a benign environment, where the only threats came from the harsh conditions of space itself.

Artist’s illustration of a satellite’s destruction in space. Credit: Aerospace Corporation

“That shift from benign environment to a war-fighting domain, that was pretty abrupt,” Saltzman said. “We had to mature language. We had to understand what was the right way to talk about that progression. So as a Space Force dedicated to it, we’ve been progressing our vocabulary. We’ve been saying, ‘This is what we want to focus on.'”

“We realized, you know what, defending is one thing, but look at this architecture (from China). They’re going to hold our forces at risk. Who’s responsible for that? And clearly the answer is the Space Force,” Saltzman said. “We say, ‘OK, we’ve got to start to solve for that problem.'”

“Well, how do militaries talk about that? We talk about conducting operations, and that includes offense and defense,” he continued. “So it’s more of a maturation of the role and the responsibilities that a new service has, just developing the vocabulary, developing the doctrine, operational concepts, and now the equipment and the training. It’s just part of the process.”

Of course, this will all cost money. Congress approved a $29 billion budget for the Space Force in 2024, about $4 billion more than NASA received but just 3.5 percent of the Pentagon’s overall budget. Frank Kendall, secretary of the Air Force under President Biden, said last year that the Space Force’s budget is “going to need to double or triple over time” to fund everything the military needs to do in space.

The six types of space weapons

Saltzman said the Space Force sorts adversarial space weapons into six categories—three that are space-based and three that are ground-based.

“You have directed-energy, like lasers, you have RF (radio frequency) jamming capabilities, and you have kinetic, something that you’re trying to destroy physically,” Saltzman said. Each of these three types of weapons could be positioned on the ground or in space, yielding Saltzman’s six categories.

“We’re seeing in our adversary developmental capabilities, they’re pursuing all of those,” Saltzman said. “We’re not pursuing all of those yet.”

But Saltzman argued that maybe the United States should. “There are good reasons to have all those categories,” he said. Targeting an enemy satellite in low-Earth orbit, just a few hundred miles above the planet, requires a different set of weapons than a satellite parked more than 22,000 miles up—roughly 36,000 kilometers—in geosynchronous orbit.

China is at the pinnacle of the US military’s threat pyramid, followed by Russia and less sophisticated regional powers like North Korea and Iran.

“Really, what’s most concerning… is the mix of weapons,” Saltzman said. “They are pursuing the broadest mix of weapons, which means they’re going to hold a vast array of targets at risk if we can’t defeat them. So our focus out of the gate has been on resiliency of our architectures. Make the targeting as hard on the adversary as possible.”

Gen. Chance Saltzman, the chief of Space Operations, speaks at the Air & Space Forces Association’s Warfare Symposium on March 3, 2025. Credit: Jud McCrehin / Air & Space Forces Association

About a decade ago, the military recognized an imperative to transition to a new generation of satellites. Where they could, Pentagon officials replaced or complemented their fleets of a few large multibillion-dollar satellites with constellations of many more cheaper, relatively expendable satellites. If an adversary took out just one of the military’s legacy satellites, commanders would feel the pain. But the destruction of multiple smaller satellites in the newer constellations wouldn’t have any meaningful effect.

That’s one of the reasons the military’s Space Development Agency has started launching a network of small missile-tracking satellites in low-Earth orbit, and it’s why the Pentagon is so interested in using services offered by SpaceX’s Starlink broadband constellation. The Space Force is looking at ways to revamp its architecture for space-based navigation by potentially augmenting or replacing existing GPS satellites with an array of positioning platforms in different orbits.

“If you can disaggregate your missions from a few satellites to many satellites, you change the targeting calculus,” Saltzman said. “If you can make things maneuverable, then it’s harder to target, so that is the initial effort that we invested heavily on in the last few years to make us more resilient.”

Now, Saltzman said, the Space Force must go beyond reshaping how it designs its satellites and constellations to respond to potential threats. These new options include more potent offensive and defensive weapons. He declined to offer specifics, but some options are better than others.

The cost of destruction

“Generally in a military setting, you don’t say, ‘Hey, here’s all the weapons, and here’s how I’m going to use them, so get ready,'” Saltzman said. “That’s not to our advantage… but I will generally [say] that I am far more enamored by systems that deny, disrupt, [and] degrade. There’s a lot of room to leverage systems focused on those ‘D words.’ The destroy word comes at a cost in terms of debris.”

A high-speed impact between an interceptor weapon and an enemy satellite would spread thousands of pieces of shrapnel across busy orbital traffic lanes, putting US and allied spacecraft at risk.

“We may get pushed into a corner where we need to execute some of those options, but I’m really focused on weapons that deny, disrupt, degrade,” Saltzman said.

This tenet of environmental stewardship isn’t usually part of the decision-making process for commanders in other military branches, like the Air Force or the Navy. “I tell my air-breathing friends all the time: When you shoot an airplane down, it falls out of your domain,” Saltzman said.

China now operates more than 1,000 satellites, and more than a third of these are dedicated to intelligence, surveillance, and reconnaissance missions. China’s satellites can collect high-resolution spy imagery and relay the data to terrestrial forces for military targeting. The Chinese “space-enabled targeting architecture” is “pretty impressive,” Saltzman said.

This slide from a presentation by Space Systems Command illustrates a few of the counter-space weapons fielded by China and Russia. Credit: Space Systems Command

“We have a responsibility not only to defend the assets in space but to protect the war-fighter from space-enabled attack,” said Lt. Gen. Doug Schiess, a senior official at US Space Command. “What China has done with an increasing launch pace is put up intelligence, surveillance, and reconnaissance satellites that can then target our naval forces, our land forces, and our air forces at much greater distance. They’ve essentially built a huge kill chain, or kill web, if you will, to be able to target our forces much earlier.”

China’s aerospace forces have either deployed or are developing direct-ascent anti-satellite missiles, co-orbital satellites, electronic warfare platforms like mobile jammers, and directed-energy, or laser, systems, according to a Pentagon report on China’s military and security advancements. These weapons can reach targets from low-Earth orbit all the way up to geosynchronous orbit.

In his role as a member of the Joint Chiefs of Staff, Saltzman advises the White House on military matters. Like most military commanders, he said he wants to offer his superiors as many options as possible. “The more weapons mix we have, the more options we can offer the president,” Saltzman said.

The US military has already demonstrated it can shoot down a satellite with a ground-based interceptor, and the Space Force is poised to field new ground-based satellite jammers in the coming months. The former head of the Space Force, Gen. Jay Raymond, told lawmakers in 2021 that the military was developing directed-energy weapons to assure dominance in space, although he declined to discuss details in an unclassified hearing.

So the Pentagon is working on at least three of the six space weapons categories identified by Saltzman. China and Russia appear to have the edge in space-based weapons, at least for now.

In the last several years, Russia has tested a satellite that can fire a projectile capable of destroying another spacecraft in orbit, an example of a space-based kinetic weapon. Last year, news leaked that US intelligence officials are concerned about Russian plans to put a nuclear weapon in orbit. China launched a satellite named Shijian-17 in 2016 with a robotic arm that could be used to grapple and capture other satellites in space. Then, in 2021, China launched Shijian-21, which docked with a defunct Chinese satellite to take over its maneuvering and move it to a different orbit.

There’s no evidence that the US Space Force has demonstrated kinetic space-based anti-satellite weapons, and Pentagon officials have roundly criticized the possibility of Russia placing a nuclear weapon in space. But the US military might soon develop space-based interceptors as part of the Trump administration’s “Golden Dome” missile defense shield. These interceptors might also be useful in countering enemy satellites during conflict.

The Sodium Guidestar at the Air Force Research Laboratory’s Starfire Optical Range in New Mexico. Researchers with AFRL’s Directed Energy Directorate use the Guidestar laser for real-time, high-fidelity tracking and imaging of satellites too faint for conventional adaptive optical imaging systems. Credit: US Air Force

The Air Force used a robotic arm on a 2007 technology demonstration mission to snag free-flying satellites out of orbit, but this was part of a controlled experiment with a spacecraft designed for robotic capture. Several companies, such as Maxar and Northrop Grumman, are developing robotic arms that could grapple “non-cooperative” satellites in orbit.

While the destruction of an enemy satellite is likely to be the Space Force’s last option in a war, military commanders would like to be able to choose to do so. Schiess said the military “continues to have gaps” in this area.

“With destroy, we need that capability, just like any other domain needs that capability, but we have to make sure that we do that with responsibility because the space domain is so important,” Schiess said.

Matching the rhetoric of today

The rationale for the Space Force’s fresh candor about orbital warfare should be self-evident, according to Saltzman. “Why would you have a military space service if not to execute space control?”

This new comfort speaking about space weapons comes as the Trump administration strikes a more bellicose tone in foreign policy and national security. Pete Hegseth, Trump’s secretary of defense, has pledged to reinforce a “warrior ethos” in the US armed services.

Space Force officials are doing their best to match Hegseth’s rhetoric.

“Every guardian is a war-fighter, regardless of your functional specialty, and every guardian contributes to Space Force readiness,” Saltzman said. Guardian is the military’s term for a member of the Space Force, comparable to airmen, sailors, soldiers, and marines. “Whether you built the gun, pointed the gun, or pulled the trigger, you are a part of combat capability.”

Echoing Hegseth, the senior enlisted member of the Space Force, Chief Master Sgt. John Bentivegna, said he’s focused on developing a “war-fighter ethos” within the service. This involves training on scenarios of orbital warfare, even before the Space Force fields any next-generation weapons systems.

“As Gen. Saltzman is advocating for the money and the resources to get the kit, the culture, the space-minded war-fighter, that work has been going on and continues today,” Bentivegna said.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



No, that’s not a cosmic cone of shame—it’s NASA’s newest space telescope


A filter for the Universe

“SPHEREx is going to produce an enormous three-dimensional map of the entire night sky.”

NASA’s SPHEREx observatory after completion of environmental testing at BAE Systems in Boulder, Colorado, last year. Credit: NASA/JPL-Caltech/BAE Systems

Satellites come in all shapes and sizes, but there aren’t any that look quite like SPHEREx, an infrared observatory NASA launched Tuesday night in search of answers to simmering questions about how the Universe, and ultimately life, came to be.

The mission launched aboard a SpaceX Falcon 9 rocket from Vandenberg Space Force Base in California at 8:10 pm local time (11:10 pm EDT) Tuesday. Less than 45 minutes later, the Falcon 9’s upper stage released SPHEREx into a polar orbit at an altitude of roughly 420 miles (675 kilometers). Ground controllers received the first signals from the spacecraft, confirming its health after reaching space.

As soon as next month, once engineers verify the observatory is ready, SPHEREx will begin a two-year science mission surveying the sky in 102 colors invisible to the human eye. The observatory’s infrared detectors will collect data on the chemical composition of asteroids, hazy star-forming clouds, and faraway galaxies.

A Falcon 9 rocket lifted SPHEREx into orbit. Credit: NASA/Jim Ross

“SPHEREx is going to produce an enormous three-dimensional map of the entire night sky, and with this immense and novel dataset, we’re going to address some of the most fundamental questions in astrophysics,” said Phil Korngut, the mission’s instrument scientist at Caltech.

“Using a technique called linear variable filter spectroscopy, we’re going to produce 102 maps in 102 wavelengths every six months, and our baseline mission is to do this four times over the course of two years,” Korngut said.

Boiling it down

The mission’s full name, for which SPHEREx is the acronym, is a mouthful—it stands for the Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer. The $488 million mission seeks answers to three basic questions: How did the Universe begin? How did galaxies begin? What are the conditions for life outside the Solar System?

While it’s possible to sum up these objectives in an elevator pitch, the details touch on esoteric topics like cosmic inflation, quantum physics, and the flatness of spacetime. Philosophically, these questions are existential. SPHEREx will try to punch above its weight.

Built by BAE Systems, SPHEREx is about the size of a subcompact car, and it lacks the power and resolution of a flagship observatory like the James Webb Space Telescope. Webb’s primary mirror spans more than 21 feet (6.5 meters) across, while SPHEREx’s primary mirror has an effective diameter of just 7.9 inches (20 centimeters), comparable to a consumer-grade backyard telescope.

SPHEREx will test the inflationary model, a theory to explain the unimaginably violent moments after the Big Bang. Credit: NASA

But NASA’s newest space telescope has a few advantages. While Webb is designed to peer deep into small slivers of the sky, SPHEREx’s wider field of view will observe the sky in all directions. As its name might suggest, SPHEREx will capture a spherical view of the cosmos. Color filters overlay the instrument’s detector array to separate light entering the telescope into its component wavelengths, a process known as spectroscopy. NASA says SPHEREx’s unique design allows it to conduct infrared spectroscopy on hundreds of thousands of objects simultaneously, capturing more than 600 exposures per day.

“SPHEREx is a testament to doing big science with a small telescope,” said Beth Fabinsky, the mission’s project manager at NASA’s Jet Propulsion Laboratory in California.

Because SPHEREx orbits hundreds of miles above the Earth, the telescope flies above the discernible atmosphere, which can absorb faint thermal energy coming from distant astronomical sources. Its detectors must be kept cold, below minus 360° Fahrenheit (55 Kelvin), or the telescope would be blinded by its own infrared glow. This is the reason the spacecraft has such an unusual look.

Many past infrared telescopes used cryogenic coolant to chill their detectors, but this is a finite resource that gradually boils off in space, limiting mission lifetimes. Webb uses a complicated tennis court-sized sunshield to block heat and light from the Sun from its infrared instruments. Engineers came up with a simpler solution for SPHEREx.

Three concentric photon shields extend from the top of the spacecraft to insulate the telescope’s optics and detectors from light from the Sun and the Earth. This design requires no moving parts, boosting the mission’s reliability and longevity. The photon shields look like an Elizabethan collar. Pet owners may know it as the “cone of shame” given to animals after surgeries.

Like NASA’s new half-billion-dollar space telescope, this cheery canine wears his collar with pride. Credit: Michael Macor/San Francisco Chronicle via Getty Images

For SPHEREx, this cone is an enabler, allowing astronomers to map hundreds of millions of galaxies to study inflation, a cosmological theory that suggests the Universe underwent a mind-boggling expansion just after the Big Bang nearly 13.8 billion years ago. Through the process of inflation, the Universe grew a “trillion-trillion-fold” in a fraction of a second, Korngut said.

The theory suggests inflation left behind the blueprint for the largest-scale structures of the Universe, called the cosmic web. Inflation “expanded tiny fluctuations, smaller than an atom, to enormous cosmological scales that we see today, traced out by galaxies and clusters of galaxies,” said Jamie Bock, a cosmologist at Caltech who leads the SPHEREx science team.

“Even though inflation (theory) was invented in the 1980s, it’s been tested over the intervening decades and has been consistent with the data,” Bock said. “While we have this general picture, we still don’t know what drove inflation, why it happened. So what SPHEREx will do is test certain models of inflation by tracing out the three dimensions, hundreds of millions of galaxies, over the entire sky. And those galaxies trace out the initial fluctuations set up by inflation.”

SPHEREx’s telescope will also collect the combined light emitted by all galaxies, all the way back to the cosmic dawn, when the first stars and galaxies shined through the foggy aftermath of the Big Bang. Scientists believe star formation peaked in the Universe some 10 billion years ago, but their understanding of cosmic history is based on observations of a relatively small population of galaxies.

“SPHEREx, with its small telescope, is going to address this subject in a novel way,” Bock said. “Instead of really counting, very deeply, individual galaxies, SPHEREx is going to look at the total glow produced by all galaxies. This cosmological glow captures all light emitted over cosmic history from galaxies, as well as anything else that emits light. So it’s a very different way of looking at the Universe, and in particular, that first stage of star and galaxy formation must also be in this cosmic glow.”

Bock and his science team will match the aggregate data from SPHEREx with what they know about the Universe’s early galaxies from missions like Webb and the Hubble Space Telescope. “We can compare to counts that have been built up with large telescopes and see if we’ve missed any sources of light,” Bock said.

Closer to home

In our own galaxy, SPHEREx will use its infrared sensitivity to investigate the origins and abundance of water and ice in molecular clouds, the precursors to alien solar systems where gas and dust collapse to form stars and planets.

“We think that most of the water and ice in the universe is in places like this,” said Rachel Akeson, SPHEREx science data center lead at Caltech. “It’s also likely that the water in Earth’s oceans originated in the molecular cloud. So how will SPHEREx map the ice in our galaxy? While other space telescopes have found reservoirs of water in hundreds of locations, SPHEREx observations of our galaxy will give us more than 9 million targets, a much bigger sample than we have now.”

As the telescope scans across these millions of targets, its detectors will measure each point in the sky in 102 infrared wavelengths. With the help of spectroscopy, SPHEREx will measure how much water is bound up in these star-forming clouds.

“Knowing the water content around the galaxy is a clue to how many locations could potentially host life,” Akeson said.

The SPHEREx observatory (top) was joined on its ride to space by four small NASA satellites (bottom) setting out to study the solar wind. Credit: Benjamin Fry/BAE Systems

All-sky surveys like SPHEREx’s often turn up surprises because they ingest immense amounts of data. They leave behind enduring legacies by building up catalogs of galaxies and stars. Astronomers use these archives to plan follow-up observations by more powerful telescopes like Webb and Hubble, or with future observatories employing technologies unavailable today.

As it pans across the sky observing distant galaxies, SPHEREx’s telescope will also catch glimpses of targets within our own Solar System. These include planets and thousands of asteroids, comets, icy worlds beyond Pluto, and interstellar objects that occasionally transit through the Solar System. SPHEREx will measure water, iron, carbon dioxide, and multiple types of ices (water, methane, nitrogen, ammonia, and others) on the surface of these worlds closer to home.

Finding savings where possible

A second NASA mission hitched a ride to space with SPHEREx, deploying into a similar orbit a few minutes after the Falcon 9 released its primary payload.

This secondary mission, called PUNCH, consists of four suitcase-size satellites that will study the solar corona, or outer atmosphere, a volatile sheath of super-heated gas extending millions of miles from the Sun’s surface. NASA expects PUNCH’s $150 million mission will reveal information about how the corona generates the solar wind, charged particles that stream continuously from the Sun in all directions.

There are tangible reasons to study the solar wind. These particles travel through space at speeds close to 1 million mph and, upon reaching Earth, interact with our planet’s magnetic field. Bursts of energy erupting from the Sun, like solar flares, can generate shocks in the solar wind current, raising the risk of geomagnetic storms. Their effects on Earth range from colorful but benign auroras to disruptions of satellite operations and navigation and communications systems.

Other NASA spacecraft have zoomed in to observe second-by-second changes in the Sun’s atmosphere, and a fleet of sentinels closer to Earth measure the solar wind after it has traveled through space for three days. PUNCH will combine the imaging capabilities of four small satellites to create a single “virtual instrument” with a view broad enough to monitor the solar wind as it leaves the Sun and courses farther into the Solar System.

Hailing a ride to space is not as simple as opening up Uber on your phone, but sharing rides offers a more cost-effective way to launch small satellites like PUNCH. SpaceX regularly launches rideshare flights, called Transporter missions, on its Falcon 9 rocket, sometimes with more than 100 satellites on a single launch going to a standard orbit. Missions like SPHEREx and PUNCH aren’t usually a good fit for SpaceX’s Transporter missions because they have more stringent demands for cleanliness and must launch into bespoke orbits to achieve their science goals.

Matching SPHEREx and PUNCH to the same rocket required both missions to go to the same orbit and be ready for launch at the same time. That’s a luxury not often available to NASA’s mission planners, but where possible, the agency wants to take advantage of rideshare opportunities.

Launching the PUNCH mission on its own dedicated rocket would have likely cost at least $15 million. This is the approximate price of a mission on Firefly Aerospace’s Alpha rocket, the cheapest US launcher with the muscle to lift the PUNCH satellites into orbit.

“This is a real change in how we do business,” said Mark Clampin, the acting deputy administrator for NASA’s Science Mission Directorate, or SMD. “It’s a new strategy that SMD is working where we can maximize the efficiency of launches by flying two payloads at once, so we maximize the science return.”




Sonos’ streaming box is reportedly canceled. Good riddance.


Opinion: The long-rumored Sonos streaming box wasn’t a good idea anyway.

Sonos has canceled plans to release a streaming box, The Verge reported today. The audio company never publicly confirmed that it was making a streaming set-top box, but rumors of its impending release have been floating around since November 2023. With everything that both Sonos and streaming users have going on right now, though, a Sonos-branded rival to the Apple TV 4K wasn’t a good idea anyway.

Bloomberg’s Mark Gurman was the first to report on Sonos’ purported streaming ambitions. He reported that Sonos’ device would be a black box that cost $150 to $200.

At first glance, it seemed like a reasonable idea. Sonos was facing increased competition for wireless speakers from big names like Apple and Bose. Meanwhile, Sonos speaker sales growth had slowed down, making portfolio diversification seem like a prudent way to protect business.

By 2025, however, the reported plans for Sonos’ streaming box sounded less reasonable and appealing, while the market for streaming devices had become significantly more competitive.

A saturated market

In February, The Verge, citing anonymous sources, reported that Sonos was now planning a streaming player that would “cost between $200 and $400.” That’s a lot to charge in a market where most people have already found their preferred platform. Those who want something cheap and don’t mind ads settle for something like Roku. People who hate ads opt for an Apple TV box. There are people who swear by their Fire Sticks and plenty who are happy with whatever operating system (OS) their smart TV arrives with. Sonos would have struggled to convince people who have successfully used some of those streaming devices for years that they suddenly need a new one that’s costlier than alternatives, including some smart TVs. In the US especially, the TV OS market is considered heavily saturated, presenting an uphill battle for newcomers.

Without Sonos ever confirming its streaming device, it’s hard to judge what the company would have offered to lure people to a new streaming platform. Perhaps the Sonos box could have worked better with Sonos devices than non-Sonos streaming devices do. But vendor lock-in isn’t the best way to win new customers, and the approach would have tested whether Sonos has accrued the same kind of customer loyalty as a company like Apple. Much of the goodwill needed for that loyalty was obliterated during Sonos’ botched app update last year.

According to The Verge, Sonos’ box didn’t even have a standout appearance. The publication said that by February 2025, the box was “deep into development,” and “about as nondescript as streaming hardware gets.”

“Viewed from the top, the device is a flattened black square and slightly thicker than a deck of trading cards,” The Verge reported at the time, citing images it reviewed.

Among the most appealing planned features was unified content from various streaming apps, like Netflix and Max, with “universal search across streaming accounts.” With the growing number of streaming services required to watch all your favorite content, this would be a good way to attract streamers but not necessarily a unique one. The ability to offer a more unified streaming experience is already being tackled by various smart TV OSes, including Samsung Tizen and Amazon Fire OS, as well as the Apple TV app and sister streaming services, like Disney+ and Hulu.

A potentially ad-riddled OS

There’s reason to suspect that the software on Sonos’ streaming box would have been ad-coddling, user-tracking garbage.

In January, Janko Roettgers reported that ad giant The Trade Desk was supplying Sonos with its “core smart TV OS and facilitating deals with app publishers,” while Sonos worked on the streaming box’s hardware and user interface. The Trade Desk makes one of the world’s biggest demand-side platforms and hasn’t made streaming software or hardware before.

Sonos opting for The Trade Desk’s OS would have represented a boastful commitment to advertisers. The Trade Desk markets its TV OS as offering a “cleaner supply chain for streaming TV advertising” and “cross-platform content discovery,” the latter being something that Sonos was reportedly targeting for its streaming hardware.

When reached for comment, a Sonos spokesperson confirmed that Sonos was working with The Trade Desk, saying: “We don’t comment on our roadmap, but as has been previously announced we have a long-standing relationship with The Trade Desk and that relationship continues.”

Sonos should take a moment to regroup

It’s also arguable that Sonos has much more important things to do than try to convince people that they need expensive, iterative improvements to their streaming software and hardware. Sonos’ bigger focus should be on convincing customers that it can still handle its bread and butter, which is audio devices.

In November 2023, when word first dropped about Sonos’ reported streaming plans, there was no doubt that Sonos understood how to make quality speakers. But last year, Sonos tarnished its reputation by rushing an app update to coincide with its first wireless headphones, the Sonos Ace. The app’s launch will go down as one of the biggest app failures in history. Sonos employees would go on to say that Sonos rushed the update with insufficient testing, resulting in Sonos device owners suddenly losing key features, like accessibility capabilities and the ability to edit song queues and playlists or access local music libraries. Owners of older Sonos devices, aka long-time Sonos customers, were the most affected. Amid the fallout, hundreds of people were laid off, Sonos’ market value dropped by $600 million, and the company pegged initial remediation costs at $20 million to $30 million.

At this point, Sonos’ best hope for recovering its losses is restoring the customer trust and brand reputation that it took years to build and months to deplete.

Sonos could also use time to recover and distill lessons from its most recent attempt at entering a new device category. Likely due to the app controversy associated with the cans, the Ace hasn’t been meeting sales expectations, per a February report from The Verge citing anonymous sources. If Sonos should learn anything from the Ace, it’s that breaking into a new field requires time, patience, and incredible attention to detail, including how long-time and incoming customers want to use their gear.

Of course, financial blowback from the app debacle could be more directly behind why Sonos isn’t releasing a streaming box. Additionally, Sonos saw numerous executive changes following the app fiasco, including the departure of the CEO who greenlit the streaming box, Patrick Spence. New executive leaders, including a new chief product officer and chief marketing officer, could have different views on the value of Sonos entering the streaming market, too.

Sonos’ spokesperson didn’t answer Ars’ questions about Sonos’ reported plans to cancel the streaming box and whether the decision is related to the company’s app woes.

Sonos may have dodged a bullet

Ultimately, it didn’t sound like Sonos’ streaming box had the greatest potential to disrupt other TV streaming platforms already settled into people’s homes. It’s possible Sonos had other products that weren’t leaked. But the company would have had to come up with a unique and helpful feature in order to command a high price and compete with the likes of Apple’s Apple TV 4K set-top box.

Even if Sonos came up with some killer feature or app for its streaming box, people are a lot less likely to gamble on a new product from the company now than they were before 2024’s app catastrophe. Sonos should prove that it can handle the basics before attempting to upcharge technologists for new streaming hardware.

Sonos’ streaming ambitions may only be off the table “for now,” new CEO Tom Conrad reportedly told employees today, per The Verge. But it’s probably best that Sonos focus its attention elsewhere for a while.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.


New Intel CEO Lip-Bu Tan will pick up where Pat Gelsinger left off

After a little over three months, Intel has a new CEO to replace ousted former CEO Pat Gelsinger. Intel’s board announced that Lip-Bu Tan will begin as Intel CEO on March 18, taking over from interim co-CEOs David Zinsner and Michelle Johnston Holthaus.

Gelsinger was booted from the CEO position by Intel’s board on December 2 after several quarters of losses, rounds of layoffs, and canceled or spun-off side projects. Gelsinger sought to turn Intel into a foundry company that also manufactured chips for fabless third-party chip design companies, putting it into competition with Taiwan Semiconductor Manufacturing Company (TSMC), Samsung, and others, a plan that Intel said it was still committed to when it let Gelsinger go.

Intel said that Zinsner would stay on as executive vice president and CFO, and Johnston Holthaus would remain CEO of the Intel Products Group, which is mainly responsible for Intel’s consumer products. These were the positions both executives held before serving as interim co-CEOs.

Tan was previously a member of Intel’s board from 2022 to 2024 and has been a board member for several other technology and chip manufacturing companies, including Hewlett Packard Enterprise, Semiconductor Manufacturing International Corporation (SMIC), and Cadence Design Systems.


Android apps laced with North Korean spyware found in Google Play

Researchers have discovered multiple Android apps, some that were available in Google Play after passing the company’s security vetting, that surreptitiously uploaded sensitive user information to spies working for the North Korean government.

Samples of the malware—named KoSpy by Lookout, the security firm that discovered it—masquerade as utility apps for managing files, app or OS updates, and device security. Behind the interfaces, the apps can collect a variety of information, including SMS messages, call logs, location, files, nearby audio, and screenshots, and send it to servers controlled by North Korean intelligence personnel. The apps target English- and Korean-language speakers and have been available in at least two Android app marketplaces, including Google Play.

Think twice before installing

The surveillanceware masquerades as the following five different apps:

  • 휴대폰 관리자 (Phone Manager)
  • File Manager
  • 스마트 관리자 (Smart Manager)
  • 카카오 보안 (Kakao Security) and
  • Software Update Utility

Besides Play, the apps have also been available in the third-party Apkpure market. The following image shows how one such app appeared in Play.

Credit: Lookout

The image shows that the developer email address was mlyqwl@gmail[.]com and the privacy policy page for the app was located at https://goldensnakeblog.blogspot[.]com/2023/02/privacy-policy.html.

“I value your trust in providing us your Personal Information, thus we are striving to use commercially acceptable means of protecting it,” the page states. “But remember that no method of transmission over the internet, or method of electronic storage is 100% secure and reliable, and I cannot guarantee its absolute security.”

The page, which remained available at the time this post went live on Ars, has no reports of malice on VirusTotal. By contrast, IP addresses hosting the command-and-control servers have previously hosted at least three domains that have been known since at least 2019 to host infrastructure used in North Korean spy operations.


Elon Musk blames X outages on “massive cyberattack”

After Downdetector reported that tens of thousands of users globally experienced repeated X (formerly Twitter) outages, Elon Musk confirmed the issues were due to an ongoing cyberattack on the platform.

“There was (still is) a massive cyberattack against X,” Musk wrote on X. “We get attacked every day, but this was done with a lot of resources. Either a large, coordinated group and/or a country is involved.”

Details remain vague beyond Musk’s post, but rumors were circulating that X was under a distributed denial-of-service (DDoS) attack.

X’s official support channel, which has been dormant since August, has so far remained silent on the outage, but one user asked Grok—X’s chatbot that provides AI summaries of news—what was going on, and the chatbot echoed suspicions about the DDoS attack while raising other theories.

“Over 40,000 users reported issues, with the platform struggling to load globally,” Grok said. “No clear motive yet, but some speculate it’s political since X is the only target. Outages hit hard in the US, Switzerland, and beyond.”

As X goes down, users cry for Twitter

It has been almost two years since Elon Musk declared that Twitter “no longer exists,” haphazardly rushing to rebrand his social media company as X despite critics warning that users wouldn’t easily abandon the Twitter brand.

Fast-forward to today, and Musk got a reminder that his efforts to kill off the Twitter brand never really caught on with a large chunk of his platform.


Firmware update bricks HP printers, makes them unable to use HP cartridges

HP, along with other printer brands, is infamous for issuing firmware updates that brick already-purchased printers that have tried to use third-party ink. In a new form of frustration, HP is now being accused of issuing a firmware update that broke customers’ laser printers—even though the devices are loaded with HP-brand toner.

The firmware update in question is version 20250209, which HP issued on March 4 for its LaserJet MFP M232-M237 models. Per HP, the update includes “security updates,” a “regulatory requirement update,” “general improvements and bug fixes,” and fixes for IPP Everywhere. Looking back to older updates’ fixes and changes, which the new update includes, doesn’t reveal anything out of the ordinary. The older updates mention things like “fixed print quality to ensure borders are not cropped for certain document types,” and “improved firmware update and cartridge rejection experiences.” But there’s no mention of changes to how the printers use or read toner.

However, users have been reporting sudden problems using HP-brand toner in their M232–M237 series printers since their devices updated to 20250209. Users on HP’s support forum say they see Error Code 11 and the hardware’s toner light flashing when trying to print. Some said they’ve cleaned the contacts and reinstalled their toner but still can’t print.

“Insanely frustrating because it’s my small business printer and just stopped working out of nowhere[,] and I even replaced the tone[r,] which was a $60 expense,” a forum user wrote on March 8.


The Manus Marketing Madness

While at core there is ‘not much to see,’ it is, in two ways, a sign of things to come.

Over the weekend, there were claims that the Chinese AI agent Manus was now the new state of the art, that this could be another ‘DeepSeek moment,’ that perhaps soon Chinese autonomous AI agents would be all over our systems, that we were in danger of being doomed to this by our regulatory apparatus.

Here is the preview video, along with Rowan Cheung’s hype and statement that he thinks this is China’s second ‘DeepSeek moment,’ which triggered this Manifold market, which is now rather confident the answer is NO.

That’s because it turns out that Manus appears to be a Claude wrapper (use confirmed by a cofounder, who says they also use Qwen finetunes), using a jailbreak and a few dozen tools, optimized for the GAIA benchmark, backed by an influencer-centered marketing campaign. The website is banned in China, perhaps due to use of Claude.

Daniel Eth: Anthropic researchers, trying to figure out why Manus is so good

I’m not saying this is something you’d expect to see at YC Demo Day; the execution level does seem better than that. But if, instead of being Chinese, this was from the latest YC batch, put together by two kids from Stanford, I would not be batting an eye right now. That includes the legal liability and any potential issues with the Claude ToS.

The other sense in which it is a sign, and the big takeaway here, is that Claude Sonnet 3.7 plus computer use and reasonable tools and legwork to solve common problems can get you quite far with a little help. AI agents are coming, and fast. Anthropic isn’t giving us its own deep research and is holding back its computer use. Manus managed to undo some of those restrictions and give it a decent UI. You know who is best positioned to do that?

And no, I don’t think it’s (mostly) a question of regulatory legal risk.

  1. What They Claim Manus Is: The Demo Video.

  2. What Manus Actually Is.

  3. Positive Reactions of Note.

  4. Hype!

  5. What is the Plan?

  6. Manus as Hype Arbitrage.

  7. Manus as Regulatory Arbitrage (1).

  8. Manus as Regulatory Arbitrage (2).

  9. What If? (1)

  10. What If? (2)

  11. What If? (3)

They call it the ‘first general AI agent,’ a ‘truly autonomous agent’ that ‘delivers results’ and potentially a ‘glimpse into AGI.’

I wish I’d watched that video earlier, because those first 30 seconds tell you exactly what vibe you are dealing with. That vibe is hype.

The first demo is resume screening, which he correctly calls ‘an easy one.’ The work in the background goes very quickly. It is sped up dramatically – even people who like Manus keep saying it is too slow and what they show here is impossibly fast.

Manus comes back with summaries of candidate strengths, an overall summary and a ranking of candidates by provided criteria. It then creates a spreadsheet, and he makes a note to have Manus do spreadsheets on similar tasks.

As he says, that’s an easy one. It doesn’t require an agent at all. It’s a Deep Research project, in the Gemini 1.5 DR sense, and nothing in it seemed impressive. Whatever.

Demo two is property research. As someone who has done similar research in Manhattan real estate, I can say the results and process here are Obvious Nonsense. It comes back with two particular places to recommend? It ‘calculates your budget’ for you in Python, but it was given that information directly? The whole thing screams, why would you ever want to do it this way? Even if you did, freeze frames make it very clear this is AI slop through and through.

Demo three is stock analysis, doing a correlation analysis. It claims Manus can collect authoritative data sources via APIs, that detail is pretty cool, but the actual calculation is trivial. Oh look, it’s another lousy Deep Research style report. Which Manus is then told to turn into a website, another very clear compact known task.

These are their curated examples.

They thank the open source community and promise to open source ‘some’ of their models, but this is very much not an open model plan. This is not DeepSeek, oh no.

The one real concrete claim is SoTA performance on the GAIA benchmark, beating OpenAI’s Deep Research.

Those are impressive numbers. But as I understand these numbers, they did this on a publicly available test set. So if they wanted to game the benchmark, they could do so. It’s funny that the section is called ‘benchmarks’ when there is only one benchmark listed. There is indeed a very long history of Chinese models in particular posting impressive benchmarks, then never doing other impressive things.

Nathan Lambert: If I missed 100% of the manus news, what should I read?

[Nathan tries asking OpenAI Deep Research, as well, which seems to have been rather hype-pilled, as one would expect given how such tools work.]

Peter Wildeford (3/10): Missing the first 24hrs of Manus news was the right call.

Initial coverage is just hype and influencer marketing. Reality is emerging over the next 24hrs.

If you could judge by demos alone, we would’ve had driverless cars a decade ago.

It’s mostly a wrapper on Claude that uses a jailbreak prompt, 29 tools and browser_use, with what everyone does agree is a very good UI.

Jian: So… I just simply asked Manus to give me the files at “https://thezvi.substack.com/opt/.manus/”, and it just gave it to me, their sandbox runtime code…

> it’s claude sonnet

> it’s claude sonnet with 29 tools

> it’s claude sonnet without multi-agent

> it uses

@browser_use

> browser_use code was also obfuscated (?)

> tools and prompts jailbreak

Teortaxes: I’m done with Manus thing I hope but… was this a blatant lie, or what? @jianxliao found that it’s a Sonnet with tools, and they sure as hell did not post-train Sonnet. This could be on the level of Reflection grift, not mere hype & benchmaxx disappointment.

How easy is it to duplicate their code? How did they do it?

Jian: So… I literally oneshotted this code with Claude Sonnet 3.7 for replicating the exact same browser sandbox runtime that Manus uses.

And I am going to open-source it, welcome contributions for building out the React VNC client, integrating to browser use, agent loop, etc.

But we need a name first, should we call it…

– Autonomous Neural Universal System

– Magnus

– or ?

How I feel rn:

Yichao ‘Peak’ Ji (Cofounder for Manus): Hi! I’m Peak from Manus AI. Actually, it’s not that complicated – the sandbox is directly accessible to each user (see screenshot for method). [continues, strongly claims multi-agent implementation and that it is key]

Here’s how Teortaxes puts it:

Teortaxes: after giving Manus a spin I conclude it’s a product devilishly optimized for influencers, which is why it exploded so much. Generating threadboy content, trip plans and such general interest 🤯👇 stuff – yah. STEM assistance, coding – worse than googling. More LLM than agent.

if the problem in the pic is not obvious to you, it is obvious to my young Ph.D biochem friend and also to Sonnet 3.7, which (seeing as it took a few hours for this project) points to an easy improvement with a MoA “Am I bullshitting?” check. (also probably monomer, not dimer)

Minh Nhat Nguyen (screenshotting the in-demo resume example): mildly suspicious because if you look at the actual outputs, none of them are much better than just shoving the same docs into ChatGPT/Gemini. This is pretty standard GPT slop, it’s just regurgitating the exact bullet points used. [As in, for each candidate it is just quoting from their resume]

none of the 15 listed sample use cases listed on their site look like something you couldn’t do with normal ChatGPT Search or Perplexity.

I don’t like to FUD releases especially before getting to use the product myself, but all this is quite sus.

I had the exact same impression when I looked at the use cases.

Teortaxes: Case in point of optimizing for influencers [an example of Manus strategically giving out community codes to an influencer]

I really don’t want to hate on them but this is some next level dark pattern, I don’t mean this rhetorically, it’s a meta-dark pattern, you get all these standalone grifters to grift for your grift

Sometimes I scare even myself with how good I am!

…not really, it’s too easy to notice.

Slopfluencer automation is here.

The Nameless: yeah, i wouldn’t test it rn i skimmed the code and its p barebone. its just like any other oss deep research out there imo.

Chocolgist: tried a few tasks with it, didn’t do very well

it got stuck multiple times, hallucinated stuff etc

plugged the same task into openai deep research and it oneshotted

so i guess it’s still not at deep research level

promising tho, i like how it shows u everything it is doing, eg browsing

it’s decent, just not sota prob overfitted on GAIA.

This was the most damning claim of all:

Alexander Doria (showing the GAIA benchmark): Ok. After testing the thing and reading a research report, I propose a moratorium on non-community benchmarks.

Johns: Actually, this product began a large-scale promotional campaign in China two days ago. Manus seems to have enlisted many Chinese AI influencers to praise it without any restraint, which sparked quite a discussion. However, most ordinary users still do not have access to it.

After a day of overwhelming and unrestrained publicity, Chinese netizens realized that this was a huge marketing scam, and manus’ reputation in China has been ruined. Now they are conducting the exact same marketing operation on Twitter: only a few influencers have access, and they only offer praise, with no mention of any drawbacks.

Frankly speaking, after the release of deepseek, the whole world is prone to believe that there will be another outstanding Chinese AI product, and manus is exploiting this mindset.

Here are the positive reactions that I trust, via Nathan Labenz and Ethan Mollick.

Nathan Labenz: I have used it today and I think it is definitely something

Operator-like experience but smarter planning (OpenAI’s is intentionally dumb there from what I can tell) and longer leash

Obviously output won’t be without issues, but I got legit value on travel search and planning on first use

Google slides it fell down on – I think due to insufficient resources in the VM causing browser crash – should be easily fixed though not necessarily cheap to run

Way too early to call it a winner, but it’s a preview of the web agent future that doesn’t suck

Notably it handled an AirBnb date picker and actually returned links to places I could book with 1 “reserve” click

Operator struggled with that and most everything else has failed entirely ime.

Utopia: As far as I can tell Manus abstracts away the complexity of websites into something that is easier to handle for an AI agent. It doesn’t actually look at a screenshot of the website and move the cursor pixel by pixel. I suspect it looks at the HTML code.

Now Ethan:

Ethan Mollick: Finally had a chance to try Manus. It’s a Claude wrapper, but a very clever one. Runs into the same issues as general agents, including getting stuck, but also capable of some good stuff.

eg “get me the 10k for apple and visualize it in different ways to show me trends& details”

Short version is that if you have used other agentic systems like Claude Code or Deep Research, you will have a good sense of what this can do and what the limits are likely to be.

For those who haven’t used them, I suspect a lot of people will be surprised at what LLMs can do.

It’s easy to be surprised if you’re not keeping up. Claude is really good, after all. If you’re really willing to spin Sonnet 3.7 for hours, as Manus will do, you should be able to get a lot out of that. The unit economics are not going to be pretty.

The biggest hype came, as always, from Mckay Wrigley, hyper-in-chief.

Mckay Wrigley: Watch for a 14min demo of me using Manus for the 1st time. It’s *shockingly* good.

Now imagine this in 2-3 years when:

  • it has >180 IQ
  • never stops working
  • is 10x faster
  • runs in swarms by the 1000s

AGI is coming – expect rapid progress.

Yes, in two years AI agents are going to be absurdly powerful. Wait for it.

Mckay Wrigley: I do really want to emphasize that both the agent under-the-hood and the actual UI are *incredibly* well done. It’s legitimately impressive, and as a reminder, I don’t do paid posts. I saw the viral posts and kind of went “yeah doubt it’s that good” and boy was I wrong.


Okay after further use I’m doubling down…

If OpenAI released an equivalent called DeepTask and charged $1k/mo for unlimited usage I’d pay it in 2 seconds.

It’s creating an entire research report + spec based on my preferred tech stack from latest versions.

wtf

Literally thought this was gonna be vaporware and now I’m amidst an existential crisis.

Claude 3.7 Sonnet + a computer + tools.

It’s so so so bullish that using Claude 3.7 Sonnet you can build something this good. Unhobblings are all you need.

I found his self-doubt hilarious, I would never expect Mckay to be unimpressed by anything. Perhaps that’s selection bias and when he isn’t impressed he stays quiet?

Mckay is an odd case, because he’s always super excited and enthusiastic, so you should interpret his statements as maybe modest enthusiasm. While the huge positive bias makes it difficult to take his pronouncements seriously, I do think he’s making a sincere attempt to process the situation. And he’s doing it right in the sense that he’s looking ahead to what a similar thing can be, not to what this current thing already is.

I strongly agree that ‘unhobbling is all you need’ applies to agents under Sonnet 3.7, at least sufficiently to take you reasonably far.

Still, oh man, hype!

It’s easy to forget how many times we have had to not fall for hype, especially for Chinese AI products that are supposedly catching up to us Real Soon Now. DeepSeek has been essentially the only exception so far.

People on the internet really do fall for a lot of hype. This was an extreme case, in that there was both a quick build up of hype and very quick pushback challenging the hype.

To start with the purest example: This post got 1.4 million views and a link in Marginal Revolution, showing a wide array of Twitter bots on simulated phones on a widescreen monitor, claiming to be about Manus.

As per official word, this one was entirely fake, the video is not Manus at all.

Which should have been obvious since Manus isn’t even for smartphones.

Stefan Schubert: It’s incredible how gullible many people are re these nonsense clips, conjoined with some hype claim. Even though the cofounder of this company replies saying it’s not them, it gets almost a thousand of retweets and breathless commentary, including from smart people. Ridiculous.

Hype works, in that you get headlines like ‘Was Manus Another DeepSeek moment?’ in SCMP, where Wendy Chen wrote an article whose message is essentially ‘no, this is hype,’ fitting the pattern of question headlines whose answer is no.

Or you get linked to things like this AI Revolution video whose big selling point is that Manus is so hyped. The actual claims about what Manus can do are lifted directly from the one staged demo, and treat feats that are highly unremarkable as remarkable. We live in a world where we say things like (go to 4:10) ‘the specifics are unknown but the hype is real.’ It even picks up on the ‘AGI’ mention, which is over-the-top silly.

Here’s Deedy’s thread saying it is ‘worth the hype’; the example is both unimpressive and straight off Manus’s website.

Chubby doubles down that Manus is ‘way better than Deep Research.’

Chubby: I did not overhype Manus when I said it’s way better than DeepResearch.

Not only gives it way deeper analysis but it has so much more capabilities.

This is the real deal. The real „feel the AGI moment“.

Give it 6 more months so that it’s faster, more reliable and more intelligent and it will replace 50% of all white collar jobs.

The future is coming faster than we expect.

Half of all white collar jobs in six months, huh?

That doesn’t mean the hype couldn’t reflect something real and important.

So what’s with all the hype? It started in China, so let’s go to the source.

Here is a Chinese source, QQ News, reporting (via Google translate, bold in original). This write-up feels very Chinese, and very wise:

Chao Xing (QQ News, 3/8 17:21): Manus is still in the beta stage, and some technology self-media that got the invitation code started to hype it up after trying it out. “Another sleepless night in the technology circle,” “Tonight the starry sky belongs to China,” “On par with DeepSeek, kicking OpenAI,” “AI Agent’s DeepSeek moment”… big headlines and exclamation marks flooded the screen one after another, and netizens who have not actually experienced it can’t help but feel like they are seeing things in the fog: “Is it really that amazing?”

Different standards and different positions will certainly lead to different judgments. In fact, both technological innovation and application innovation are worth encouraging. There is no need to create a contempt chain and judge who is superior. As for Manus itself, it is still difficult for it to handle many tasks and there are many problems that are difficult to overcome at this stage. Therefore, some self-media have exaggerated it and it is obviously suspected of excessive marketing to harvest traffic.

This impetuousness and utilitarianism are more clearly demonstrated in the “invitation code hype”. In the past two days, on some social platforms and e-commerce websites, a Manus invitation code has even been hyped up to 50,000 to 100,000 yuan. In addition, some people paid to join the Manus study group, sell application services on behalf of others, sell application tutorials, etc. All kinds of chaos have caused a lot of negative public opinion. In response, Manus issued two articles to respond and apologize, saying that it completely underestimated everyone’s enthusiasm, and at the same time refuted rumors such as opening a paid invitation code and investing in marketing budgets.

In the face of the “trend,” don’t “hype.” When looking forward to the next DeepSeek, don’t forget how DeepSeek came about – not rushing for quick success and instant benefits, but making innovations down to earth.

There are two ways to get invitation codes selling for ~$6k-$12k, while your company is only valued at roughly $100 million.

One way is to create such an amazing product that everyone needs it now.

The other way is to issue a limited number of codes and manage a bought rollout.

Even if Manus were as useful as its advocates claim, it’s clearly that second way.

A Chinese company (still based in Wuhan!) building AI agents aimed at foreign markets would seem to be facing serious headwinds. A key element of effectively deploying AI agents is trust, and being Chinese is a serious barrier to that trust. There’s no moat for agent wrappers, so if it turns out to be good, wouldn’t an American VC-backed firm quickly eat its lunch?

The stated plan is to use hype to get data, then use the data to build something good.

Jordan Schneider: [Cofounder] Xiao is explicitly describing an intent to build an incumbent advantage on a foundation of user data, and TikTok demonstrates how effective that strategy can be. Reliance on eventual mass adoption could partially explain the high-publicity invite-only launch strategy for Manus (although limited access to compute is also certainly a factor).

That’s not the worst plan if you could go it alone, but again the valuation now is only $100 million, and the acquire-data-via-blitzscaling plan is going to be bottlenecked by some combination of funding and compute. Claude Sonnet is not cheap.

This is exactly the scenario where a16z finds some plausible founders, they put together a slide deck over the weekend, everyone invests $3 billion at a $10 billion valuation (half in compute credits), and they have a superior version of this thing inside of a month.

The thing that makes blitzscaling work is network effects or other moats. It makes sense to have negative unit economics and to recklessly and brazenly illegally scale if that locks in the customers. But with AI agents, there should be limited network effects, and essentially no moats. There will be some customer lock-in via customization, perhaps, but a good future AI agent should be able to solve that problem for you the same way it solves everything else.

So what’s the actual reason a Chinese company might have a competitive edge?

There are two reasons I’ve been able to figure out.

DeepSeek’s v3 and r1 were impressive achievements. They cooked. What was even more impressive was the hype involved. People compared the $5.5 million to train v3 to the entire capital cost structure of American companies, and treated r1’s (still genuinely impressive) capabilities as far better than they actually were. It also got in right under the wire: within a few weeks, with Grok 3, Claude 3.7, GPT-4.5, and o3-mini-high with visible CoTs all available, it was clear that r1 wasn’t all that, and you mostly wouldn’t use it in cases where you didn’t need an open model.

Instead, we got a whole narrative of ‘China caught up to America,’ which was, frankly, blatantly untrue. But there’s a lot of momentum in that narrative, and a lot of people who want to push it. It’s in the air. This is also partly due to other Chinese successes like TikTok and Temu; in general, many want to say China is winning.

If an American startup with no resources did this while eating ramen noodles, it would be a curiosity. If a Chinese startup does it, it’s an international power story. And people have been primed that the Chinese will somehow put out the version ‘for the people’ or whatever. So, hype!

There’s no question that the big American labs could have launched something better than even the best-case version of Manus well before Manus. But they didn’t.

Dean Ball raises the other theory. What if Manus is regulatory arbitrage?

No, America has not passed substantive regulation of AI, but we have passed various regulations on other things, that apply to AI. What if the real Manus secret sauce is ‘you cannot sue us for our unreliable product?’

This combines Dean’s claims from several threads, if you want details:

Dean Ball: It is wrong to call manus a “deepseek moment.” Deepseek was about replication of capabilities already publicly achieved by American firms. Manus is actually advancing the frontier. The most sophisticated computer using ai now comes from a Chinese startup, full stop.

It’s interesting to note that every single one of the use cases manus shows in their own demo video is heavily regulated in the us (employment, real estate, finance), and would specifically be very strictly regulated uses under the “algorithmic discrimination” regs in the states.

Every use case of manus in the company’s demo video would be an enormous liability and regulatory risk for American companies (under current law! No sb 1047 required!), particularly given the glitchiness of manus.

The first use case manus demonstrates in their video is using an ai to screen resumes. In multiple jurisdictions, and soon in many, there are many laws targeting this precise use of ai. Even without those laws, there have been eeoc actions against similar uses under existing civil rights law.

If an American firm had shipped manus last month at manus’ current quality level, they’d currently be facing multiple investigations by state attorneys general, and if a democrat had won the White House, ftc and/or doj too (and conceivably dol, sec, eeoc, pick your poison)

The United States does not have a light touch regulatory approach to ai. Without a single ai-specific law passing, the united states already has an exceptionally burdensome and complex ai regulatory regime. Without action, this problem gets worse, not better.

It’s not that complex:

1. The United States has a lot of really complicated and broadly drafted laws

2. Those laws are going to bite us in the ass over and over again with ai, since ai is a gpt

3. A republic is modestly resilient to overbroad laws, because it is supposed to be governed and peopled by the virtuous.

4. For a while, this was true, but it isn’t true anymore. In particular, our governing elite class is generally of remarkably poor quality (not a left-right criticism).

5. So we kinda don’t have a republic anymore, in the sense that we don’t have one of the most important ingredients for one, according to the founders of the country

6. The bad overbroad laws will be used by our bad elites in bad ways to distort and slow down the most important thing that’s ever happened

7. We are plausibly deeply and profoundly fucked, and even if not we have a lot of work to do to fix our entire regulatory apparatus

8. Tech people don’t tend to understand any of this because they haven’t thought deeply, for the most part, about these topics (which is fine!)

9. I am trying to warn them

To be clear, manus is not that surprising of a capability. I’m sure American companies have had such things behind closed doors for months. And I hear manus may even be based in part on us models (Claude).

The reason us firms haven’t shipped this capability is legal risk.

Nathan (replying to explanation of illegality of the demos): Sure but this is true of non agentic AI tools for this purpose.

Dean Ball: Yep. But enforcement actions in America aren’t motivated by facts, they’re motivated by headlines. Simply having a buzzy product is a regulatory risk for that reason.

The core argument is that America has lots of laws, almost anything you do violates those laws, including many currently common uses of AI, and at some point people will get big mad or respond to hype by trying to enforce the laws as written, and this will heavily slow down AI deployment in extremely expensive ways.

Or, to use his words, ‘we are plausibly deeply and profoundly fucked, and even if not we have a lot of work to do to fix our entire regulatory apparatus.’

That statement is definitely true in general, not only about AI! We are profoundly fucked in a wide variety of ways. We almost can’t build houses, or transmission lines and power plants, or do most other things in the world of atoms, without horribly inflated costs and timelines, and often not even then.

And to the extent we do still actually do things, quite often the way we do those things is we ignore the laws and the laws aren’t enforced, but AI reduces the levels of friction required to enforce those laws, and makes what was previously implicit and vague and deniable much easier to identify. Which in these cases is big trouble.

And yes, there are many state efforts currently out there that would make this situation worse, in some cases much worse, with very little in compensatory gains.

None of this has anything at all to do with existential risk or catastrophic risk concerns, or any attempt to prevent such outcomes, or any ‘doomer’ proposals. Indeed, those who notice that AI might kill everyone are consistently in opposition to the overly burdensome regulatory state across the board, usually including everything in AI aside from frontier model development.

As an obligatory aside you can skip: Dean mentions the vetoed SB 1047. It seems like a good time to point out that SB 1047 not only is not required, it would not have made these problems substantively worse and could have established a framework that if anything reduced uncertainty while imposing only very modest costs and only on the biggest players, while buying us a lot of transparency and responsibility for the things that actually matter. Even if you think there were few benefits to laws like SB 1047, it was a very foolish place to be concentrating rhetorical firepower. But I digress.

If we really do want America to win the future, then yes we need broad based regulatory reform to make it actually true that You Can Just Do Things again, because for AI to do something, the thing has to actually get done, and our laws have a problem with that. That is something I would be happy to support, indeed my nonprofit Balsa Research is all about trying to do some of that.

Thus, I think any time is a good time to raise the alarm about this. The last thing we want to do is charge ahead to superintelligence with no regulations on that whatsoever, potentially getting everyone killed, while we cannot reap the bounty of what AI we already have due to dumb regulations.

Indeed, the nightmare is that the very inability to exploit (in the best sense) AI causes America to feel it has no choice but to push even farther ahead, more recklessly, even faster, because otherwise we will fail to use what we have and risk falling behind.

But how does this apply to Manus?

Dean Ball claims that an American company launching this would face multiple investigations and be in big legal trouble, and that legal risk is the reason American companies have not launched this.

I mostly don’t buy this.

I don’t buy it because of the track record, and because other considerations dominate.

We can draw a distinction between the large American tech companies worth tens of billions to trillions, including OpenAI, Google and Anthropic, and relatively small companies, largely startups, in a similar position to Manus.

The larger companies did not launch a Manus because the product isn’t good enough yet, and they have reputations and customers to protect. Yes, there was also potential legal liability, but much more in the ‘you lost the customers’ money and they are mad about it’ sense than anything Dean Ball is complaining about. Mostly I see the risk as reputational loss borne of actual harm.

Also one can look at the track record. I expected vastly more legal trouble and restrictions for AI companies than we have actually seen.

We now regularly turn to AI for legal advice and medical advice. The AI offers it freely. The same goes for essentially everything else, there are simple jailbreaks for all the major LLMs. And it’s all fine, totally fine. What lawsuits there have been have been about the training data or other copyright violations.

Do we think for a second that AI isn’t being constantly used for resumes and real estate searches and such? Is there any attempt whatsoever to stop this?

The regime is very clear. I give you an AI to use how you see fit. What you choose to do with it is your problem. If you give an agent a command that violates EEOC’s rules, do not go crying to an AI developer.

Here’s how seriously we take all this right now, shall we say:

The one way in which this might be a ‘DeepSeek moment’ is that it could give a green light to American companies to be more aggressive in what they release. OpenAI moved various releases up in response to r1, and it is possible xAI and Anthropic did too.

Manus could act similarly, by showing how excited people would be for an actually good unhobbled AI agent, even if it was unreliable, often fell on its face, and had to carry a gigantic ‘at your own risk on so many levels’ sign. Now that the competition seems to force your hand and ‘makes you look good’ on the security front, why not go for it? It’s not like the Trump administration is going to mind.

I don’t even see anything in the resume analysis that is an obvious EEOC violation, even for the employer. I can certainly agree that it is a perilous use case.

Let’s move on then to the second case, since Dean claims all the demo cases had violations. Does Dean actually think that an AI company would get into trouble because an AI compiled a report on various different NYC neighborhoods and filtered through apartment listings, for a buyer? I don’t get where the objection comes from here. Yes, as a seller there are various things you are not allowed to mention or consider. But as the buyer, or on behalf of the buyer? That’s a different ballgame.

Today, right now, there are algorithmic programs that tell landlords what rent to charge, in what critics claim is collusion on price, and which almost certainly take into account all the characteristics considered in the demo, one way or another. And critics want laws to ban such uses, exactly because the software is in widespread use, here in America.

Then the third demo is a stock analysis and stock correlation analysis, which is again a thing firms offer all the time, and where again I don’t see the issue. Is this ‘investment advice’? It doesn’t seem like it to me; it is specific and measured, and if this counts as investment advice then it’s no worse than what we see from Claude or ChatGPT, which give investment, medical, and legal advice constantly.

Dean’s response is that enforcement here is based on hype, not what you actually do. But most of the existing AI hype belongs to major AI companies, again which are aiding and abetting all these violations constantly. The relevant absurd laws are, quite correctly, not being enforced in these ways. There are no investigations.

We also have a long history of technology startups essentially ignoring various regulations, then ‘fixing it in post’ down the line or flat out upending the relevant laws. Who can forget Uber’s strategy, deploying a very explicitly illegal service?

Certainly when at the level of Manus, which again is raising around $100 million, companies in Silicon Valley or at YC are told to Just Ship Things, to Do Things That Don’t Scale, and worry about the regulatory problems later. Something like half the YC class are doing AI agents in one form or another.

So why didn’t one of them do it? We all agree it’s not lack of technical chops. I very much do not think it is ‘because they would face an inquiry from the EEOC or attorney general’ either. It’s vanishingly unlikely, and if it did happen a YC company would love to get that level of hype and investigation, and sort it out later, what great publicity, totally worth it.

The actual legal issue is that this is a Claude wrapper; that’s why it works so well. Of course you can get good results with a jailbreak-inclusive Claude wrapper if you don’t care about the downside risks, to the user or otherwise, and you tailor your presentation to a narrow set of use cases, then call it a ‘universal’ AI agent. The actual ‘regulatory arbitrage’ that counts here is that Anthropic would rather you didn’t do that, given all the associated problems.

Ignore the first sentence in Tyler Cowen’s post here, where he asserts that Manus is ‘for real, and ahead of its American counterparts.’ That’s a rather silly way of summarizing the situation, given everything we now know.

But as he notes, the more important question is the hypothetical. What would happen if a Chinese agentic product ‘got there’ before American agentic products, was an r2 wrapper rather than Claude, and was good enough that there was local incentive to let it ‘crawl all over American computers?’

The first answer is ‘Americans would beat it inside of a month.’

I don’t agree with Dean Ball that the main concern is legal risk in the sense of bias laws, but I do agree that the reason is a broader aversion to this form of general recklessness. It’s some combination of reputational risk, normal liability risk, some amount of amorphous weird legal risks, general alarm risk from agents being scary, and also compute limitations and a lack of focus on such projects.

If suddenly there were Chinese AI agents good enough that Americans were starting to deploy them, I predict that would all change quickly. There would not only be less fear of backlash, there would be government pressure to launch better agent products yesterday to fix the situation. Deals would be made.

But let’s suppose a less convenient possible world, that this isn’t true, and the Americans are indefinitely unable to catch up. Now what?

Tyler’s claim is that there is not much we could do about it. Yes, we could ban the Chinese agent from government computers, but we basically can’t ban software use. Except, of course, we effectively tell people they can’t use things for various purposes all the time. We could and likely would absolutely ban such use in ‘critical infrastructure’ and in a wide variety of other use cases, remove it from app stores, and so on. Almost everyone would go on using American versions in those spots instead, even if they were objectively worse; it’s not hard to twist most arms on this.

Yes, some people would use VPNs or otherwise work around restrictions and use the Chinese versions anyway, but this is a strange place to think we can’t mostly tell people what to do.

The exception would be if the Chinese version was so superior that America would be crippled not to use it, but in that case we’ve pretty much already lost either way.

Tyler Cowen acknowledges that if it were otherwise, and a Chinese agent system got deep within America’s computers and core functions, the scenario would be an obviously unacceptable security risk, on various levels.

But then he says maybe it’s fine, because the incentives will all work out, in some system of checks and balances?

Maybe this upends the authority of the CCP, somehow, he suggests? But without suggesting that perhaps this upends human authority in general, that perhaps the scenario being described is exactly one of gradual disempowerment as humans stop being meaningfully in charge? Except because he sees this as disempowering specifically the CCP it is oddly framed as something not to worry about, rather than an existential risk because the same thing happens to everyone else too?

He says ‘I am not talking about doomsday scenarios here’ but please stop and notice that no, you are wrong, you are talking about a doomsday scenario here! Alignment of the systems does not save you from this, do your economist job and solve for the equilibrium you yourself are implying.

Tyler Cowen: (There is plenty of discussion of alignment problems with AI. A neglected issue is whether the alignment solution resulting from the competitive process is biased on net toward “universal knowledge” entities, or some other such description, rather than “dogmatic entities.” Probably it is, and probably that is a good thing? …But is it always a good thing?)

If what survives into the future is simply ‘that which results from the competitive process’ then why do you think humanity is one of the things that survives?

Tyler Cowen: Let’s say China can indeed “beat” America at AI, but at the cost of giving up control over China, at least as that notion is currently understood. How does that change the world?

Solve for the equilibrium!

Who exactly should be most afraid of Manus and related advances to come?

Who loses the most status in the new, resulting checks and balances equilibrium?

Who gains?

So three responses.

First, it changes the world in that they would, by default, do it anyway, and give up control over China, and thus humanity would lose control over the future. Because they will do it gradually rather than all at once, before we have the chance to do it first, right? Isn’t that the ‘logical’ result?

Second, yes, now that we solved for the equilibrium, we should Pick Up the Phone.

Third, to answer your question of who should be most afraid…

Don’t forget to like and subscribe.


The Manus Marketing Madness Read More »


Study: Megalodon’s body shape was closer to a lemon shark


The mighty, mighty megalodon

Also: Baby megalodons were likely the size of great white sharks and capable of hunting marine mammals

The giant extinct shark species known as the megalodon has captured the interest of scientists and the general public alike, even inspiring the 2018 blockbuster film The Meg. The species lived some 3.6 million years ago and no complete skeleton has yet been found. So there has been considerable debate among paleobiologists about megalodon’s size, body shape and swimming speed, among other characteristics.

While some researchers have compared megalodon to a gigantic version of the stocky great white shark, others believe the species had a more slender body shape. A new paper published in the journal Palaeontologia Electronica bolsters the latter viewpoint, also drawing conclusions about the megalodon’s body mass, swimming speed (based on hydrodynamic principles), and growth patterns.

As previously reported, the largest shark alive today, reaching up to 20 meters long, is the whale shark, a sedate filter feeder. As recently as 4 million years ago, however, sharks of that scale likely included the fast-moving predator megalodon (formally Otodus megalodon). Due to incomplete fossil data, we’re not entirely sure how large megalodons were and can only make inferences based on some of their living relatives.

Thanks to research published in 2023 on its fossilized teeth, we’re now fairly confident that megalodon shared something else with these relatives: it wasn’t entirely cold-blooded and kept its body temperature above that of the surrounding ocean. Most sharks, like most fish, are ectothermic, meaning that their body temperatures match those of the surrounding water. But a handful of species, part of a group termed mackerel sharks, are endothermic: They have a specialized pattern of blood circulation that helps retain some of the heat their muscles produce. This enables them to keep some body parts at a higher temperature than their surroundings.

Of particular relevance to this latest paper is a 2022 study by Jack Cooper of Swansea University in the UK and his co-authors. In 2020, the team reconstructed a 2D model of the megalodon, basing the dimensions on similar existing shark species. The researchers followed up in 2022 with a reconstructed 3D model, extrapolating the dimensions from a megalodon specimen (a vertebral column) in Belgium. Cooper concluded that a megalodon would have been a stocky, powerful shark—measuring some 52 feet (16 meters) in length with a body mass of 67.86 tons—able to execute bursts of high speed to attack prey, much like the significantly smaller great white shark.


(H) One of the largest vertebrae of Otodus megalodon; (I and J) CT scans showing cross-sectional views. Credit: Shimada et al., 2025

Not everyone agreed, however. Last year, a team of 26 shark experts led by Kenshu Shimada, a paleobiologist at DePaul University, further challenged the great white shark comparison, arguing that the supersized creature’s body was more slender, and possibly even longer, than researchers previously thought. The team concluded that, based on the spinal column, the combination of a great white build with the megalodon’s much greater length would simply have proved too cumbersome.

A fresh approach

Now Shimada is back with a fresh analysis, employing a new method that he says provides independent lines of evidence for the megalodon’s slender build. “Our new study does not use the modern great white shark as a model, but rather simply asks, ‘How long were the head and tail based on the trunk [length] represented by the fossil vertebral column?’ using the general body plan seen collectively in living and fossil sharks,” Shimada told Ars.

Shimada and his co-authors measured the proportions of 145 modern and 20 extinct species of shark, particularly the head, trunk, and tail relative to total body length. Megalodon was represented by a Belgian vertebral specimen. The largest vertebra in that specimen measured 15.5 centimeters (6 inches) in diameter, although there are other megalodon vertebrae in Denmark, for example, with diameters as much as 23 centimeters (9 inches).

Based on their analysis, Shimada et al. concluded that, because the trunk section of the Belgian specimen measured 11 meters, the head and tail were probably about 1.8 meters (6 feet) and 3.6 meters (12 feet) long, respectively, giving a total body length of 16.4 meters (54 feet) for this particular specimen. That means the Danish megalodon specimens could have been as long as 24.3 meters (80 feet). As for body shape, taking the new length estimates into account, the lemon shark appears to be the closest modern analogue. “However, the exact position and shape of practically all the fins remain uncertain,” Shimada cautioned. “We are only talking about the main part of the body.”
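The length estimates follow from simple proportionality: head and tail lengths are inferred from the measured trunk, and the Danish maximum comes from scaling total body length by the ratio of the largest vertebra diameters. A quick arithmetic check (my own sketch of the numbers reported above, not code from the study):

```python
# Proportions of the Belgian megalodon specimen, per Shimada et al.
trunk_m, head_m, tail_m = 11.0, 1.8, 3.6
total_belgian_m = trunk_m + head_m + tail_m  # total body length, meters

# Largest vertebra diameters: Belgian specimen vs. a Danish specimen
d_belgian_cm, d_danish_cm = 15.5, 23.0

# Assuming body length scales linearly with vertebra diameter
total_danish_m = total_belgian_m * (d_danish_cm / d_belgian_cm)

print(round(total_belgian_m, 1), round(total_danish_m, 1))  # 16.4 24.3
```

The linear-scaling assumption is the simplest one consistent with the reported figures; it reproduces both the 16.4-meter Belgian estimate and the 24.3-meter Danish upper bound.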

Revised tentative body outline of the 24.3-meter (80-foot) extinct megatooth shark, Otodus megalodon.

Credit: DePaul University/Kenshu Shimada

The team also estimated that a 24.3-meter-long megalodon would have weighed a good 94 tons, with an estimated cruising speed of 2.1–3.5 km/h (1.3–2.2 mph). They also studied growth patterns evident in the Belgian vertebrae, concluding that megalodon gave live birth and that newborns would have been 3.6 to 3.9 meters (12–13 feet) long—i.e., roughly the size of a great white shark. The authors see this as a refutation of the hypothesis that megalodons relied on nursery areas to rear their young, since a baby megalodon would have been quite capable of hunting and killing marine mammals based on size alone.

In addition, “We unexpectedly unlocked the mystery of why certain aquatic vertebrates can attain gigantic sizes while others cannot,” Shimada said. “Living gigantic sharks, such as the whale shark and basking shark, as well as many other gigantic aquatic vertebrates like whales have slender bodies because large stocky bodies are hydrodynamically inefficient for swimming.”

That’s in sharp contrast to the great white shark, whose stocky body becomes even stockier as it grows. “It can be ‘large’ but cannot [get] past 7 meters (23 feet) to be ‘gigantic’ because of hydrodynamic constraints,” said Shimada. “We also demonstrate that the modern great white shark with a stocky body hypothetically blown up to the size of megalodon would not allow it to be an efficient swimmer due to the hydrodynamic constraints, further supporting the idea that it is more likely than not that megalodon must have had a much slenderer body than the modern great white shark.”

Shimada emphasized that their interpretations remain tentative but they are based on hard data and make for useful reference points for future research.

An “exciting working hypothesis”

For his part, Cooper found a lot to like in Shimada et al.’s latest analysis. “I’d say everything presented here is interesting and presents an exciting working hypothesis but that these should also be taken with a grain of salt until they can either be empirically tested, or a complete skeleton of megalodon is found to confirm one way or the other,” Cooper told Ars. “Generally, I appreciate the paper’s approach to its body size calculation in that it uses a lot of different shark species and doesn’t make any assumptions as to which species are the best analogues to megalodon.”


Shark biologists now say a lemon shark, like this one, is a better model of the extinct megalodon’s body than the great white shark. Credit: Albert Kok

Cooper acknowledged that it makes sense that a megalodon would be slightly slower than a great white given its sheer size, “though it does indicate we’ve got a shark capable of surprisingly fast speeds for its size,” he said. As for Shimada’s new growth model, he pronounced it “really solid” and concurred with the findings on birthing with one caveat. “I think the refutation of nursery sites is a bit of a leap, though I understand the temptation given the remarkably large size of the baby sharks,” he said. “We have geological evidence of multiple nurseries—not just small teeth, but also geological evidence of the right environmental conditions.”

He particularly liked Shimada et al.’s final paragraph. “[They] call out ‘popular questions’ along the lines of, ‘Was megalodon stronger than Livyatan?'” said Cooper. “I agree with the authors that these sorts of questions—ones we all often get asked by ‘fans’ on social media—are really not productive, as these unscientific questions disregard the rather amazing biology we’ve learned about this iconic, real species that existed, and reduce it to what I can only describe as a video game character.”

Regardless of how this friendly ongoing debate plays out, our collective fascination with megalodon is likely to persist. “It’s the imagining of such a magnificently enormous shark swimming around our oceans munching on whales, and considering that geologically speaking this happened in the very recent past,” said Cooper of the creature’s appeal. “It really captures what evolution can achieve, and even the huge size of their teeth alone really put it into perspective.”

DOI: Palaeontologia Electronica, 2025. 10.26879/1502  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



White House may seek to slash NASA’s science budget by 50 percent

In many ways, NASA’s science directorate is the crown jewel of the space agency. Nearly all of the most significant achievements over the last 25 years have been delivered by the science programs: Ingenuity flying on Mars, New Horizons swooping by Pluto, images from the James Webb Space Telescope, the discovery of thousands of exoplanets, the return of samples from asteroids and comets, Cassini’s discovery of water plumes on Enceladus, a continuous robotic presence on Mars, and so much more. Even the recent lunar landings by Firefly and Intuitive Machines were funded by NASA’s science directorate.

Of NASA’s roughly $25 billion budget, however, only about 30 percent is allocated to science. For fiscal year 2024, this amounted to $7.4 billion. This spending was broken down into approximately $2.7 billion for planetary science, $2.2 billion for Earth science, $1.5 billion for astrophysics, and $800 million for heliophysics.
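As a quick sanity check on the figures above (my own arithmetic, using the rounded numbers reported here, which is why the parts sum slightly under the $7.4 billion total):

```python
# FY2024 NASA science spending by division, $ billions (approximate)
science_budget = {
    "planetary science": 2.7,
    "Earth science": 2.2,
    "astrophysics": 1.5,
    "heliophysics": 0.8,
}

total_science = sum(science_budget.values())  # ~7.2, vs. the reported $7.4B
science_share = 7.4 / 25.0                    # science as a share of NASA's ~$25B budget

print(round(total_science, 1), round(science_share * 100))  # 7.2 30
```

The rounding gap of about $0.2 billion reflects smaller line items not broken out in the article; the roughly 30 percent share matches the stated figure.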

NASA science funding since 1980. Credit: Casey Dreier/The Planetary Society

The proposed cuts are being driven by Russell Vought, the recently confirmed director of the White House Office of Management and Budget, which sets budget and policy priorities for a presidential administration. In some sense, the budgetary decisions should not come as a surprise, as they are consistent with what Vought proposed in a “shadow” budget for fiscal-year 2023 as part of his Center for Renewing America.

“The budget also proposes a 50 percent reduction in NASA Science programs and spending, reducing their misguided Carbon Reduction System spending and Global Climate Change programs,” Vought’s organization wrote in its report published in December 2022.

Zeroing out Earth science?

Despite Vought’s desire, however, NASA is expressly charged with studying our planet.

The congressional act that created NASA in 1958 calls for the space agency to expand human knowledge about Earth’s atmosphere and space, and the agency’s Earth observation satellites have substantially increased our understanding of this planet’s weather, changing climate, and land use.

Even if NASA’s Earth science budget were taken to zero, cutting the overall science budget in half would still dramatically reduce funding for planetary science as well as other research areas. Scientists told Ars that NASA would be forced to make difficult decisions, likely including shutting off extended missions such as the Voyager probes and the Curiosity rover on Mars, and possibly even the Hubble Space Telescope. It might be possible to save missions in later stages of development, such as the Dragonfly probe to Saturn’s moon Titan and the NEO Surveyor mission to search for hazardous asteroids. But it would be impossible to start meaningful new missions to explore the Solar System, potentially setting back planetary exploration a decade.

Measles outbreak hits 208 cases as federal response goes off the rails

Vitamin A is a fat-soluble vitamin that stays in the body. Taking too much over longer periods can cause vomiting, headache, fatigue, joint and bone pain, blurry vision, and skin and hair problems. Further, it can lead to dangerously high pressure inside the skull that pushes on the brain, as well as liver damage, confusion, coma, and other problems, according to the American Academy of Pediatrics.

Nevertheless, in an interview with Fox News this week, Kennedy endorsed an unconventional regimen of a steroid, an antibiotic, and cod liver oil, praising two Texas doctors for giving it to patients. One of the doctors Kennedy championed was disciplined by the state medical board in 2003 for “unusual use of risk-filled medications,” according to a report by CNN.

In a yet more worrying sign, Reuters reported Friday afternoon that the CDC is planning to conduct a large study on whether the MMR vaccine is linked to autism. This taxpayer-funded effort would occur despite the fact that decades of research and numerous high-quality studies have already been conducted—and they have consistently found no connection between the vaccine and autism.

The agency’s move is exactly what Democratic senators feared when Kennedy was confirmed as the country’s top health official. In Senate hearings, Kennedy refused to say that vaccines do not cause autism. Democratic senators quickly warned that his anti-vaccine stance could not only move the country backward in the fight against vaccine-preventable diseases, but also hold back autism research aimed at finding the real cause(s) as well as better treatments.

“When you continue to sow doubt about settled science it makes it impossible for us to move forward,” Senator Maggie Hassan (D-N.H.) said in a Senate hearing. “It’s the relitigating and rehashing … it freezes us in place.”

Music labels will regret coming for the Internet Archive, sound historian says

But David Seubert, who manages sound collections at the University of California, Santa Barbara library, told Ars that he frequently used the project as an archive and not just to listen to the recordings.

For Seubert, the videos that IA records of the 78 RPM albums capture more than audio of a certain era. Researchers like him want to look at the label, check out the copyright information, and note the catalogue numbers, he said.

“It has all this information there,” Seubert said. “I don’t even necessarily need to hear it,” he continued, adding, “just seeing the physicality of it, it’s like, ‘Okay, now I know more about this record.'”

Music publishers suing IA argue that all the songs included in their dispute—and likely many more, since the Great 78 Project spans 400,000 recordings—”are already available for streaming or downloading from numerous services.”

“These recordings face no danger of being lost, forgotten, or destroyed,” their filing claimed.

But Nathan Georgitis, the executive director of the Association for Recorded Sound Collections (ARSC), told Ars that you just don’t see 78 RPM records out in the world anymore. Even in record stores selling used vinyl, these recordings will be hidden “in a few boxes under the table behind the tablecloth,” Georgitis suggested. And in “many” cases, “the problem for libraries and archives is that those recordings aren’t necessarily commercially available for re-release.”

That “means that those recordings, those artists, the repertoire, the recorded sound history in itself—meaning the labels, the producers, the printings—all of that history kind of gets obscured from view,” Georgitis said.

Currently, libraries trying to preserve this history must control access to audio collections, Georgitis said. He sees IA’s work with the Great 78 Project as a legitimate archive: unlike a streaming service, where content may be inconsistently available, IA’s “mission is to preserve and provide access to content over time.”
