Author name: DJ Henderson

Elon Musk: There is an 80 percent chance Starship’s engine bay issues are solved

Ars: Ten years ago you kind of made big bets on Starship and Starlink, and most people probably expected one or both of them to fail.

Musk: Including me.

Ars: Yeah. These were huge bets.

Musk: I was interviewed in the early days of Starlink, and they were asking me what’s the goal of Starlink? I said goal number one: don’t go bankrupt, as every other [low-Earth orbit] communications constellation has gone bankrupt, and we don’t want to join them in the cemetery. So any outcome that does not result in death would be a good outcome.

Ars: Starlink has become really successful. It helped me during a hurricane. And Starship is coming along. As you look out for the next 10 years, what are you betting on big now that will really bear fruit for SpaceX a decade from now?

Musk: Well, by far the biggest thing is Starship. If the Starship program is successful—and we see a path to success—it’s just a question of when we will have created the first fully reusable orbital launch vehicle, which is the holy grail of rocketry, as you know. So no one has ever made a fully reusable orbital vehicle, and even the parts that have been reusable have been extremely arduous to reuse, such that the economics actually were worse than an expendable rocket in a lot of cases. The canonical example being the shuttle, where the shuttle’s fully loaded cost, for the whole program, I believe, was about a billion dollars a flight.

Ars: I saw one research paper that estimated the fully loaded cost was about $1.5 billion.

Musk: Yeah. And that is roughly equivalent to a Saturn V cost. But the Saturn V as an expendable rocket had four times the payload capacity of the shuttle. So with the shuttle, the principle of reusability was a good one, but the execution, unfortunately, was not. The shuttle got burdened by so many crazy requirements. You know, I’ve got this five-step first principles process thing for making things better. And step one of my five-step process is make the requirements less dumb. And for the government, it’s the opposite. The government is making requirements more dumb.

Ars: So getting a rapid and reusable Starship is the main goal for SpaceX over the next 5 to 10 years?

Musk: Yeah, absolutely.

Ars: You’ve been in the space industry now for almost 25 years. And in that time, SpaceX has gone a long way toward solving launch. So if you were coming into the industry today as a 20-something, you know, with a couple hundred million dollars, what would be the problem you would want to solve? What should new companies, philanthropists, and others be working on in space?

Musk: We’re building the equivalent of the Union Pacific Railroad and the train. So once you have the transportation system to Mars, then there’s a vast set of opportunities that open up to do anything on the surface of Mars, which includes, you know, doing everything from building a semiconductor fab to a pizza joint, basically building a civilization. So we want to solve the transport problem, and that can enable philanthropists and entrepreneurs to do things on Mars, which is everything needed for civilization. Look at, say, California. There were very few people in California until the Union Pacific was completed, and then California became the most populous state in the nation. And look at Silicon Valley and Hollywood and everything. So that’s our goal. We want to get people there, and if we can get people there, then there’s a literal world of opportunity.

The key to a successful egg drop experiment? Drop it on its side

There was a key difference, however, between how vertically and horizontally squeezed eggs deformed in the compression experiments—namely, the former deformed less than the latter. The shell’s greater rigidity along its long axis was an advantage because the heavy load was distributed over the surface. (It’s why the one-handed egg-cracking technique targets the center of a horizontally held egg.)

But the authors found that this advantage under static compression proved to be a disadvantage when dropping eggs from a height, with the horizontal position emerging as the optimal orientation. It comes down to the difference between stiffness—how much force is needed to deform the egg—and toughness, i.e., how much energy the egg can absorb before it cracks.

Cohen et al.’s experiments showed that eggs are tougher when loaded horizontally along their equator, and stiffer when compressed vertically, suggesting that “an egg dropped on its equator can likely sustain greater drop heights without cracking,” they wrote. “Even if eggs could sustain a higher force when loaded in the vertical direction, it does not necessarily imply that they are less likely to break when dropped in that orientation. In contrast to static loading, to remain intact following a dynamic impact, a body must be able to absorb all of its kinetic energy by transferring it into reversible deformation.”
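
A minimal spring model (an illustrative sketch, not the authors’ analysis) makes that distinction concrete. If the shell acts like a linear spring of stiffness $k$ up to a breaking force $F_{\max}$, the energy it can absorb before cracking, and thus the maximum survivable drop height for an egg of mass $m$, is

$$
E_{\text{abs}} = \int_0^{\delta_{\max}} F \, d\delta = \frac{F_{\max}^2}{2k},
\qquad
m g h_{\max} \le E_{\text{abs}}
\;\Rightarrow\;
h_{\max} = \frac{F_{\max}^2}{2\,k\,m\,g}.
$$

Lower stiffness in the horizontal orientation raises $h_{\max}$ even if the breaking force is no higher, which is the tough-versus-stiff trade-off the authors describe.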

“Eggs need to be tough, not stiff, in order to survive a fall,” Cohen et al. concluded, pointing to our intuitive understanding that we should bend our knees rather than lock them into a straightened position when landing after a jump, for example. “Our results and analysis serve as a cautionary tale about how language can affect our understanding of a system, and improper framing of a problem can lead to misunderstanding and miseducation.”

DOI: Communications Physics, 2025. 10.1038/s42005-025-02087-0

Trump threatens Apple with 25% tariff to force iPhone manufacturing into US

Donald Trump woke up Friday morning and threatened Apple with a 25 percent tariff on any iPhones sold in the US that are not manufactured in America.

In a Truth Social post, Trump claimed that he had “long ago” told Apple CEO Tim Cook that Apple’s plan to manufacture iPhones for the US market in India was unacceptable. Only US-made iPhones should be sold here, he said.

“If that is not the case, a tariff of at least 25 percent must be paid by Apple to the US,” Trump said.

This appears to be the first time Trump has threatened a US company directly with tariffs, and Reuters noted that “it is not clear if Trump can levy a tariff on an individual company.” (Typically, tariffs are imposed on countries or categories of goods.)

Apple has so far not commented on the threat after staying silent when Trump started promising US-made iPhones were coming last month. At that time, Apple instead continued moving its US-destined operations from China into India, where tariffs were substantially lower and expected to remain so.

In his social media post, Trump made it clear that he did not approve of Apple’s plans to pivot production to India or “anyplace else” but the US.

For Apple, building an iPhone in the US threatens to spike costs so much that it risks pricing out customers. In April, CNBC cited Wall Street analysts estimating that a US-made iPhone could cost anywhere from 25 percent more (raising the price to at least about $1,500) to as much as $3,500. Today, The New York Times cited analysts forecasting that the costly shift “could more than double the consumer price of an iPhone.”
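
Those figures are roughly self-consistent. Working backward from the analysts’ own numbers (simple arithmetic, not an independent estimate):

$$
\frac{\$1{,}500}{1.25} = \$1{,}200 \ \text{implied current price},
\qquad
2 \times \$1{,}200 = \$2{,}400.
$$

So the Times’ “more than double” forecast lands between CNBC’s roughly $1,500 floor and $3,500 ceiling.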

It’s unclear if Trump could actually follow through on this latest tariff threat, but the morning brought more potential bad news for Apple’s long-term forecast in another Truth Social post dashed off shortly after the Apple threat.

In that post, Trump confirmed that the European Union “has been very difficult to deal with” in trade talks, which he fumed “are going nowhere!” Because these talks have apparently failed, Trump ordered “a straight 50 percent tariff” on EU imports starting on June 1.

Rocket Report: SpaceX’s expansion at Vandenberg; India’s PSLV fails in flight


China’s diversity in rockets was evident this week, with four types of launchers in action.

Dawn Aerospace’s Mk-II Aurora airplane in flight over New Zealand last year. Credit: Dawn Aerospace

Welcome to Edition 7.45 of the Rocket Report! Let’s talk about spaceplanes. Since the Space Shuttle, spaceplanes have, at best, been a niche part of the space transportation business. The US Air Force’s uncrewed X-37B and a similar vehicle operated by China’s military are the only spaceplanes to reach orbit since the last shuttle flight in 2011, and both require a lift from a conventional rocket. Virgin Galactic’s suborbital space tourism platform is also a spaceplane of sorts. A generation or two ago, one of the chief arguments in favor of spaceplanes was that they were easier to recover and reuse. Today, SpaceX routinely reuses capsules and rockets that look much more like conventional space vehicles than the winged designs of yesteryear. Spaceplanes are undeniably alluring in appearance, but they have the drawback of carrying extra weight (wings) into space that won’t be used until the final minutes of a mission. So, do they have a future?

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below. Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

One of China’s commercial rockets returns to flight. The Kinetica-1 rocket launched Wednesday for the first time since a failure doomed its previous attempt to reach orbit in December, according to the vehicle’s developer and operator, CAS Space. The Kinetica-1 is one of several small Chinese solid-fueled launch vehicles managed by a commercial company, although with strict government oversight and support. CAS Space, a spinoff of the Chinese Academy of Sciences, said its Kinetica-1 rocket deployed multiple payloads with “excellent orbit insertion accuracy.” This was the seventh flight of a Kinetica-1 rocket since its debut in 2022.

Back in action … “Kinetica-1 is back!” CAS Space posted on X. “Mission Y7 has just successfully sent six satellites into designated orbits, making a total of 63 satellites or 6 tons of payloads since its debut. Lots of missions are planned for the coming months. 2025 is going to be awesome.” The Kinetica-1 is designed to place up to 2 metric tons of payload into low-Earth orbit. A larger liquid-fueled rocket, Kinetica-2, is scheduled to debut later this year.

French government backs a spaceplane startup. French spaceplane startup AndroMach announced May 15 that it received a contract from CNES, the French space agency, to begin testing an early prototype of its Banger v1 rocket engine, European Spaceflight reports. Founded in 2023, AndroMach is developing a pair of spaceplanes to perform suborbital and orbital missions. A suborbital spaceplane will utilize turbojet engines for horizontal takeoff and landing, and a pressure-fed biopropane/liquid oxygen rocket engine to reach space. Test flights of this smaller vehicle will begin in early 2027.

A risky proposition … A larger ÉTOILE “orbital shuttle” is designed to be launched by various small launch vehicles and will be capable of carrying payloads of up to 100 kilograms (220 pounds). According to the company, initial test flights of ÉTOILE are expected to begin at the beginning of the next decade. It’s unclear how much CNES is committing to AndroMach through this contract, but the company says the funding will support testing of an early demonstrator for its propane-fueled engine, with a focus on evaluating its thermodynamic performance. It’s good to see European governments supporting developments in commercial space, but the path to a small commercial orbital spaceplane is rife with risk. (submitted by EllPeaTea)

Dawn Aerospace is taking orders. Another spaceplane company in a more advanced stage of development says it is now taking customer orders for flights to the edge of space. New Zealand-based Dawn Aerospace said it is beginning to take orders for its remotely piloted, rocket-powered suborbital spaceplane, known as Aurora, with first deliveries expected in 2027, Aviation Week & Space Technology reports. “This marks a historic milestone: the first time a space-capable vehicle—designed to fly beyond the Kármán line (100 kilometers or 328,000 feet)—has been offered for direct sale to customers,” Dawn Aerospace said in a statement. While it hasn’t yet reached space, Dawn’s Aurora spaceplane flew to supersonic speed for the first time last year and climbed to an altitude of 82,500 feet (25.1 kilometers), setting a record for the fastest climb from a runway to 20 kilometers.

Further along … Aurora is small in stature, measuring just 15.7 feet (4.8 meters) long. It’s designed to loft a payload of up to 22 pounds (10 kilograms) above the Kármán line for up to three minutes of microgravity, before returning to a runway landing. Eventually, Dawn wants to reduce the turnaround time between Aurora flights to less than four hours. “Aurora is set to become the fastest and highest-flying aircraft ever to take off from a conventional runway, blending the extreme performance of rocket propulsion with the reusability and operational simplicity of traditional aviation,” Dawn said. The company’s business model is akin to commercial airlines, where operators can purchase an aircraft directly from a manufacturer and manage their own operations. (submitted by EllPeaTea)

India’s workhorse rocket falls short of orbit. In a rare setback, Indian Space Research Organisation’s (ISRO) launch vehicle PSLV-C61 malfunctioned and failed to place a surveillance satellite into the intended orbit last weekend, the Times of India reported. The Polar Satellite Launch Vehicle lifted off from a launch pad on the southeastern coast of India early Sunday, local time, with a radar reconnaissance satellite named EOS-09, or RISAT-1B. The satellite was likely intended to gather intelligence for the Indian military. “The country’s military space capabilities, already hindered by developmental challenges, have suffered another setback with the loss of a potential strategic asset,” the Times of India wrote.

What happened? … V. Narayanan, ISRO’s chairman, later said that the rocket’s performance was normal until the third stage. The PSLV’s third stage, powered by a solid rocket motor, suffered a “fall in chamber pressure” and the mission could not be accomplished, Narayanan said. Investigators are probing the root cause of the failure. Telemetry data indicated the rocket deviated from its planned flight path around six minutes after launch, when it was traveling more than 12,600 mph (5.66 kilometers per second), well short of the speed it needed to reach orbital velocity. The rocket and its payload fell into the Indian Ocean south of the launch site. This was the first PSLV launch failure in eight years, ending a streak of 21 consecutive successful flights. (submitted by EllPeaTea)
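
As a quick sanity check on those velocity figures (standard unit conversion and textbook orbital mechanics, not numbers from the report):

$$
12{,}600\ \text{mph} \times 0.44704\ \tfrac{\text{m/s}}{\text{mph}} \approx 5{,}600\ \text{m/s},
\qquad
v_{\text{circ}} = \sqrt{\frac{\mu}{r}} \approx \sqrt{\frac{398{,}600\ \text{km}^3/\text{s}^2}{6{,}878\ \text{km}}} \approx 7.6\ \text{km/s},
$$

assuming a roughly 500-kilometer target orbit. The vehicle was about 2 km/s short of circular orbital velocity when it deviated.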

SES makes a booking with Impulse Space. SES, owner of the world’s largest fleet of geostationary satellites, plans to use Impulse Space’s Helios kick stage to take advantage of lower-cost, low-Earth-orbit (LEO) launch vehicles and get its satellites quickly into higher orbits, Aviation Week & Space Technology reports. SES hopes the combination will break a traditional launch conundrum for operators of medium-Earth-orbit (MEO) and geostationary-orbit (GEO) satellites. These operators often must make a trade-off between a lower-cost launch that puts them farther from their satellite’s final orbit, or a more expensive launch that can expedite their satellite’s entry into service.

A matter of hours … On Thursday, SES and Impulse Space announced a multi-launch agreement to use the methane-fueled Helios kick stage. “The first mission, currently planned for 2027, will feature a dedicated deployment from a medium-lift launcher in LEO, followed by Helios transferring the 4-ton-class payload directly to GEO within eight hours of launch,” Impulse said in a statement. Typically, this transit to GEO takes several weeks to several months, depending on the satellite’s propulsion system. “Today, we’re not only partnering with Impulse to bring our satellites faster to orbit, but this will also allow us to extend their lifetime and accelerate service delivery to our customers,” said Adel Al-Saleh, CEO of SES. “We’re proud to become Helios’ first dedicated commercial mission.”
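
The eight-hour figure is plausible for a high-thrust chemical stage. A textbook Hohmann transfer from a 300-kilometer parking orbit to GEO (a rough estimate, not Impulse’s actual flight profile) takes

$$
a = \frac{r_{\text{LEO}} + r_{\text{GEO}}}{2} \approx \frac{6{,}678 + 42{,}164}{2}\ \text{km} \approx 24{,}421\ \text{km},
\qquad
t = \pi \sqrt{\frac{a^{3}}{\mu}} \approx 1.9 \times 10^{4}\ \text{s} \approx 5.3\ \text{hours},
$$

with $\mu \approx 398{,}600\ \text{km}^3/\text{s}^2$. The weeks-to-months alternative corresponds to low-thrust electric propulsion spiraling slowly outward.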

Unpacking China’s spaceflight patches. There’s a fascinating set of new patches Chinese officials released for a series of launches with top-secret satellites over the last two months, Ars reports. These four patches depict Buddhist gods with a sense of artistry and sharp colors that stand apart from China’s previous spaceflight emblems, and perhaps—or perhaps not—they can tell us something about the nature of the missions they represent. The missions launched so-called TJS satellites toward geostationary orbit, where they most likely will perform missions in surveillance, signals intelligence, or missile warning. 

Making connections … It’s not difficult to start making connections between the Four Heavenly Gods and the missions that China’s TJS satellites likely carry out in space. A protector with an umbrella? An all-seeing entity? This sounds like a possible link to spy craft or missile warning, but there’s a chance Chinese officials approved the patches to misdirect outside observers, or there’s no connection at all.

China aims for an asteroid. China is set to launch its second Tianwen deep space exploration mission in late May, targeting both a near-Earth asteroid and a main belt comet, Space News reports. The robotic Tianwen-2 spacecraft is being integrated with a Long March 3B rocket at the Xichang Satellite Launch Center in southwest China, the country’s top state-owned aerospace contractor said. Airspace closure notices indicate a four-hour-long launch window opening at noon EDT (16:00–20:00 UTC) on May 28. Backup launch windows are scheduled for May 29 and 30.

New frontiers … Tianwen-2’s first goal is to collect samples from a near-Earth asteroid designated 469219 Kamoʻoalewa, or 2016 HO3, and return them to Earth in late 2027 with a reentry module. The Tianwen-2 mothership will then set a course toward a comet for a secondary mission. This will be China’s first sample return mission from beyond the Moon. The asteroid selected as the target for Tianwen-2 is believed by scientists to be less than 100 meters, or 330 feet, in diameter, and may be made of material thrown off the Moon some time in its ancient past. Results from Tianwen-2 may confirm that hypothesis. (submitted by EllPeaTea)

Upgraded methalox rocket flies from Jiuquan. Another one of China’s privately funded launch companies achieved a milestone this week. Landspace launched an upgraded version of its Zhuque-2E rocket Saturday from the Jiuquan launch base in northwestern China, Space News reports. The rocket delivered six satellites to orbit for a range of remote sensing, Earth observation, and technology demonstration missions. The Zhuque-2E is an improved version of the Zhuque-2, which became the first liquid methane-fueled rocket in the world to reach orbit in 2023.

Larger envelope … This was the second flight of the Zhuque-2E rocket design, but the first to utilize a wider payload fairing to provide more volume for satellites on their ride into space. The Zhuque-2E is a stepping stone toward a much larger rocket Landspace is developing called the Zhuque-3, a stainless steel launcher with a reusable first stage booster that, at least outwardly, bears some similarities to SpaceX’s Falcon 9. (submitted by EllPeaTea)

FAA clears SpaceX for Starship Flight 9. The Federal Aviation Administration gave the green light Thursday for SpaceX to launch the next test flight of its Starship mega-rocket as soon as next week, following two consecutive failures earlier this year, Ars reports. The failures set back SpaceX’s Starship program by several months. The company aims to get the rocket’s development back on track with the upcoming launch, Starship’s ninth full-scale test flight since its debut in April 2023. Starship is central to SpaceX’s long-held ambition to send humans to Mars and is the vehicle NASA has selected to land astronauts on the Moon under the umbrella of the government’s Artemis program.

Targeting Tuesday, for now … In a statement Thursday, the FAA said SpaceX is authorized to launch the next Starship test flight, known as Flight 9, after finding the company “meets all of the rigorous safety, environmental and other licensing requirements.” SpaceX has not confirmed a target launch date for the next launch of Starship, but warning notices for pilots and mariners to steer clear of hazard areas in the Gulf of Mexico suggest the flight might happen as soon as the evening of Tuesday, May 27. The rocket will lift off from Starbase, Texas, SpaceX’s privately owned spaceport near the US-Mexico border. The FAA’s approval comes with some stipulations, including that the launch must occur during “non-peak” times for air traffic and a larger closure of airspace downrange from Starbase.

Space Force is fed up with Vulcan delays. In recent written testimony to a US House of Representatives subcommittee that oversees the military, the senior official responsible for purchasing launches for national security missions blistered one of the country’s two primary rocket providers, Ars reports. The remarks from Major General Stephen G. Purdy, acting assistant secretary of the Air Force for Space Acquisition and Integration, concerned United Launch Alliance and its long-delayed development of the large Vulcan rocket. “The ULA Vulcan program has performed unsatisfactorily this past year,” Purdy said in written testimony during a May 14 hearing before the House Armed Services Committee’s Subcommittee on Strategic Forces. This portion of his testimony did not come up during the hearing, and it has not been reported publicly to date.

Repairing trust … “Major issues with the Vulcan have overshadowed its successful certification resulting in delays to the launch of four national security missions,” Purdy wrote. “Despite the retirement of highly successful Atlas and Delta launch vehicles, the transition to Vulcan has been slow and continues to impact the completion of Space Force mission objectives.” It has widely been known in the space community that military officials, who supported Vulcan with development contracts for the rocket and its engines that exceeded $1 billion, have been unhappy with the pace of the rocket’s development. It was originally due to launch in 2020. At the end of his written testimony, Purdy emphasized that he expected ULA to do better. As part of his job as the Service Acquisition Executive for Space (SAE), Purdy noted that he has been tasked to transform space acquisition and to become more innovative. “For these programs, the prime contractors must re-establish baselines, establish a culture of accountability, and repair trust deficit to prove to the SAE that they are adopting the acquisition principles necessary to deliver capabilities at speed, on cost and on schedule.”

SpaceX’s growth on the West Coast. SpaceX is moving ahead with expansion plans at Vandenberg Space Force Base, California, that will double its West Coast launch cadence and enable Falcon Heavy rockets to fly from California, Spaceflight Now reports. Last week, the Department of the Air Force issued its Draft Environmental Impact Statement (EIS), which considers proposed modifications from SpaceX to Space Launch Complex 6 (SLC-6) at Vandenberg. These modifications will include changes to support launches of Falcon 9 and Falcon Heavy rockets, the construction of two new landing pads for Falcon boosters adjacent to SLC-6, the demolition of unneeded structures at SLC-6, and increasing SpaceX’s permitted launch cadence from Vandenberg from 50 launches to 100.

Doubling the fun … The transformation of SLC-6 would involve quite a bit of overhaul. Its most recent tenant, United Launch Alliance, used it for Delta IV rockets from 2006 through the final Delta IV launch from the site in September 2022. The following year, the Space Force handed over the launch pad to SpaceX, which lacked a pad at Vandenberg capable of supporting Falcon Heavy missions. The estimated cadence for 2026 is 70 Falcon 9 launches from SpaceX’s existing pad at Vandenberg, known as SLC-4E, and 11 from SLC-6, plus one Falcon Heavy from SLC-6, for a total of 82 launches. That would increase to a 70-25 Falcon 9 split in 2027 and 2028, with an estimated five Falcon Heavy launches in each of those years. (submitted by EllPeaTea)

Next three launches

May 23: Falcon 9 | Starlink 11-16 | Vandenberg Space Force Base, California | 20:36 UTC

May 24: Falcon 9 | Starlink 12-22 | Cape Canaveral Space Force Station, Florida | 17:19 UTC

May 27: Falcon 9 | Starlink 17-1 | Vandenberg Space Force Base, California | 16:14 UTC

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

In 3.5 years, Notepad.exe has gone from “barely maintained” to “it writes for you”

By late 2021, major updates for Windows’ built-in Notepad text editor had been so rare for so long that a gentle redesign and a handful of new settings were rated as a major update. New updates have become much more common since then, but like the rest of Windows, recent additions have been overwhelmingly weighted in the direction of generative AI.

In November, Microsoft began testing an update that allowed users to rewrite or summarize text in Notepad using generative AI. Another preview update today takes it one step further, allowing you to write AI-generated text from scratch with basic instructions (the feature is called Write, to differentiate it from the earlier Rewrite).

Like Rewrite and Summarize, Write requires users to be signed into a Microsoft Account, because each use draws down a monthly allotment of Microsoft AI credits. Per this support page, users without a paid Microsoft 365 subscription get 15 credits per month. Subscribers with Personal and Family subscriptions get 60 credits per month instead.

Microsoft notes that all AI features in Notepad can be disabled in the app’s settings, and obviously, they won’t be available if you use a local account instead of a Microsoft Account.

Microsoft is also releasing preview updates for Paint and Snipping Tool, two other bedrock Windows apps that hadn’t seen much by way of major updates before the Windows 11 era. Paint’s features are also mostly AI-related, including a “sticker generator” and an AI-powered smart select tool “to help you isolate and edit individual elements in your image.” A new “welcome experience” screen that appears the first time you launch the app will walk you through the (again, mostly AI-related) new features Microsoft has added to Paint in the last couple of years.

SAP Sapphire 2025

I just returned from SAP Sapphire 2025 in Orlando, and while SAP painted a compelling vision of an AI-powered future, I couldn’t help but think about the gap between their shiny new announcements and where most SAP customers actually are today. Let me cut through the marketing hype and give you the analyst perspective on what really matters.

The Cloud Migration Elephant in the Room

SAP’s biggest challenge isn’t building cool AI features – it’s that the vast majority of their customer base is still running on-premise ERP systems. While SAP was busy showcasing their AI Foundation and enhanced Joule capabilities, I kept thinking about the thousands of companies still on SAP ECC 6.0 or older versions, some of which haven’t been updated in years.

Here’s the reality check: nearly every exciting AI announcement at Sapphire requires SAP’s cloud solutions. The AI Foundation? Cloud-based. Enhanced Joule with proactive capabilities? Needs cloud infrastructure. The new Business Data Cloud intelligence offerings? You guessed it – cloud only.

For the average SAP shop running on-premise systems, these announcements might as well be science fiction. They’re dealing with basic integration challenges, struggling with outdated user interfaces, and fighting to get reliable reports out of their current systems. The idea of AI agents autonomously managing their supply chain seems laughably distant.

AI: Useful Tool, Not Magic Wand

Don’t get me wrong – the AI capabilities SAP demonstrated are genuinely impressive. The ability for Joule to anticipate user needs and provide contextual insights could indeed improve productivity. But let’s pump the brakes on SAP’s claim of “up to 30% productivity gains.”

I’ve been analyzing enterprise software implementations for years, and productivity gains of that magnitude typically come from process improvements and workflow optimization, not just from adding AI on top of existing inefficiencies. If your procurement process is broken, an AI agent won’t fix it – it’ll just automate the broken process faster.

The more realistic wins will come from:

  • Reducing time spent searching for information across multiple systems
  • Automating routine data analysis and report generation
  • Providing better decision support through predictive analytics
  • Streamlining repetitive tasks in finance, HR, and supply chain operations

These are valuable improvements, but they’re evolutionary, not revolutionary.

The Partnership Strategy: Hedging Their Bets

SAP’s partnerships tell an interesting story. The Accenture ADVANCE program acknowledges that many mid-market companies need significant hand-holding to modernize their SAP environments. The Palantir integration suggests SAP recognizes they can’t be everything to everyone in the data analytics space. The Perplexity collaboration admits that their AI needs external data sources to be truly useful.

These partnerships are smart business moves, but they also highlight SAP’s dependencies. If you’re planning an SAP transformation, you’re not just buying SAP – you’re buying into an ecosystem of partners and integrations that adds complexity and cost.

What This Means for Your SAP Strategy

If you’re currently running SAP on-premise, Sapphire 2025 should reinforce one key message: the innovation train is leaving the station, and it’s heading to the cloud. But before you panic about missing out on AI capabilities, consider these pragmatic steps:

For On-Premise SAP Customers:

  • Audit your current state first. Most companies I work with aren’t maximizing their existing SAP capabilities, let alone ready for AI enhancements.
  • Plan your cloud migration timeline. SAP’s 2030 end-of-support deadline for older systems isn’t going away. Use that as your forcing function.
  • Focus on data quality. AI is only as good as the data it works with. If your master data is a mess, AI won’t help.
  • Start small with cloud integration. Consider hybrid approaches that connect your on-premise core with cloud-based analytics and AI tools.

For Companies Already in SAP Cloud:

  • Evaluate which AI features actually solve business problems you have today, not theoretical future use cases.
  • Pilot before you scale. The productivity claims sound great, but test them in your environment with your data.
  • Invest in change management. The biggest barrier to AI adoption isn’t technical – it’s getting people to change how they work.

The Bottom Line: Evolution, Not Revolution

SAP Sapphire 2025 showcased legitimate innovations that will improve how businesses operate, but let’s keep expectations realistic. The companies that will benefit most from these AI capabilities are those that have already modernized their SAP infrastructure and cleaned up their business processes.

For the majority of SAP customers still on legacy systems, the real question isn’t whether AI will transform their business – it’s whether they can execute a successful modernization program that positions them to eventually take advantage of these capabilities.

Your Next Steps

Here’s what I recommend you do this week:

  • Assess where you stand on your SAP modernization journey. Are you cloud-ready, or do you have years of technical debt to address first?
  • Map your business cases for the AI capabilities that caught your attention. Can you quantify the value they’d deliver in your specific environment?
  • Build a realistic roadmap that acknowledges both the exciting possibilities and the practical constraints of your current SAP landscape.
  • Start the conversation with your leadership about long-term SAP strategy. The decisions you make in the next two years will determine whether you’re positioned to benefit from the AI revolution or left behind with legacy systems.

The AI future SAP is promising will arrive eventually, but for most companies, the path there runs through cloud migration, data governance, and process optimization. Focus on building that foundation first, and the AI capabilities will follow when you’re actually ready to use them effectively.

What I learned from my first few months with a Bambu Lab A1 3D printer, part 1


One neophyte’s first steps into the wide world of 3D printing.

The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham

For a couple of years now, I’ve been trying to find an excuse to buy a decent 3D printer.

Friends and fellow Ars staffers who had them would gush about them at every opportunity, talking about how useful they can be and how much can be printed once you get used to the idea of being able to create real, tangible objects with a little time and a few bucks’ worth of plastic filament.

But I could never quite imagine myself using one consistently enough to buy one. Then, this past Christmas, my wife forced the issue by getting me a Bambu Lab A1 as a present.

Since then, I’ve been tinkering with the thing nearly daily, learning more about what I’ve gotten myself into and continuing to find fun and useful things to print. I’ve gathered a bunch of thoughts about my learning process here, not because I think I’m breaking new ground but to serve as a blueprint for anyone who has been on the fence about Getting Into 3D Printing. “Hyperfixating on new hobbies” is one of my go-to coping mechanisms during times of stress and anxiety, and 3D printing has turned out to be the perfect combination of fun, practical, and time-consuming.

Getting to know my printer

My wife settled on the Bambu A1 because it’s a larger version of the A1 Mini, Wirecutter’s main 3D printer pick at the time (she also noted it was “hella on sale”). Other reviews she read noted that it’s beginner-friendly, easy to use, and fun to tinker with, and it has a pretty active community for answering questions, all assessments I agree with so far.

Note that this research was done some months before Bambu earned bad headlines because of firmware updates that some users believe will lead to a more locked-down ecosystem. This is a controversy I understand—3D printers are still primarily the realm of DIYers and tinkerers, people who are especially sensitive to the closing of open ecosystems. But as a beginner, I’m already leaning mostly on the first-party tools and built-in functionality to get everything going, so I’m not really experiencing the sense of having “lost” features I was relying on, and any concerns I did have are mostly addressed by Bambu’s update about its update.

I hadn’t really updated my preconceived notions of what home 3D printing was since its primordial days, something Ars has been around long enough to have covered in some depth. I was wary of getting into yet another hobby where, like building your own gaming PC, fiddling with and maintaining the equipment is part of the hobby. Bambu’s printers (and those like them) are capable of turning out fairly high-quality prints with minimal fuss, and nothing will draw you into the hobby faster than a few successful prints.

Basic terminology

Extrusion-based 3D printers (also sometimes called “FDM,” for “fused deposition modeling”) work by depositing multiple thin layers of melted plastic filament on a heated bed. Credit: Andrew Cunningham

First things first: The A1 is what’s called an “extrusion” printer, meaning that it functions by melting a long, slim thread of plastic (filament) and then depositing this plastic onto a build plate seated on top of a heated bed in tens, hundreds, or even thousands of thin layers. In the manufacturing world, this is also called “fused deposition modeling,” or FDM. This layer-based extrusion gives 3D-printed objects their distinct ridged look and feel and is also why a 3D printed piece of plastic is less detailed-looking and weaker than an injection-molded piece of plastic like a Lego brick.
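
The layer counts add up quickly. At the 0.2 mm layer height that serves as a common default (more on that below), even a modest print runs to hundreds of layers:

$$
\frac{100\ \text{mm tall print}}{0.2\ \text{mm per layer}} = 500\ \text{layers}.
$$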

The other readily available home 3D printing technology takes liquid resin and uses UV light to harden it into a plastic structure, using a process called “stereolithography” (SLA). You can get inexpensive resin printers in the same price range as the best cheap extrusion printers, and the SLA process can create much more detailed, smooth-looking, and watertight 3D prints (it’s popular for making figurines for tabletop games). Some downsides are that the print beds in these printers are smaller, resin is a bit fussier than filament, and multi-color printing isn’t possible.

There are two main types of home extrusion printers. The Bambu A1 is a Cartesian printer, or in more evocative and colloquial terms, a “bed slinger.” In these, the head of the printer can move up and down on one or two rails and from side to side on another rail. But the print bed itself has to move forward and backward to “move” the print head on the Y axis.

More expensive home 3D printers, including higher-end Bambu models in the P- and X-series, are “CoreXY” printers, which include a third rail or set of rails (and more Z-axis rails) that allow the print head to travel in all three directions.

The A1 is also an “open-bed” printer, which means that it ships without an enclosure. Closed-bed printers are more expensive, but they can maintain a more consistent temperature inside and help contain the fumes from the melted plastic. They can also reduce the amount of noise coming from your printer.

Together, the downsides of a bed-slinger (introducing more wobble for tall prints, more opportunities for parts of your print to come loose from the plate) and an open-bed printer (worse temperature, fume, and dust control) mainly just mean that the A1 isn’t well-suited for printing certain types of plastic and has more potential points of failure for large or delicate prints. My experience with the A1 has been mostly positive now that I know about those limitations, but the printer you buy could easily change based on what kinds of things you want to print with it.

Setting up

Overall, the setup process was reasonably simple, at least for someone who has been building PCs and repairing small electronics for years now. It’s not quite the same as the “take it out of the box, remove all the plastic film, and plug it in” process of setting up a 2D printer, but the directions in the start guide are well-illustrated and clearly written; if you can put together prefab IKEA furniture, that’s roughly the level of complexity we’re talking about here. The fact that delicate electronics are involved might still make it more intimidating for the non-technical, but figuring out what goes where is fairly simple.

The only mistake I made while setting the printer up involved the surface I initially tried to put it on. I used a spare end table, but as I discovered during the printer’s calibration process, the herky-jerky movement of the bed and print head was way too much for a little table to handle. “Stable enough to put a lamp on” is not the same as “stable enough to put a constantly wobbling contraption on”—obvious in retrospect, but my being new to this is why this article exists.

After some office rearrangement, I was able to move the printer to my sturdy L-desk full of cables and other doodads to serve as ballast. This surface was more than sturdy enough to let the printer complete its calibration process—and sturdy enough not to transfer the printer’s every motion to our kid’s room below, a boon for when I’m trying to print something after he has gone to bed.

The first-party Bambu apps for sending files to the printer are Bambu Handy (for iOS/Android, with no native iPad version) and Bambu Studio (for Windows, macOS, and Linux). Handy works OK for sending ready-made models from MakerWorld (a mostly community-driven but Bambu-developed repository for 3D-printable files) and for monitoring prints once they’ve started. But I’ll mostly be relaying my experience with Bambu Studio, a much more fully featured app. Neither app requires sign-in, at least not yet, but the path of least resistance is to sign into your printer and apps with the same account to enable easy communication and syncing.

Bambu Studio: A primer

Bambu Studio is what’s known in the hobby as a “slicer,” software that takes existing 3D models output by common CAD programs (Tinkercad, FreeCAD, SolidWorks, Autodesk Fusion, others) and converts them into a set of specific movement instructions that the printer can follow. Bambu Studio allows you to do some basic modification of existing models—cloning parts, resizing them, adding supports for overhanging bits that would otherwise droop down, and a few other functions—but it’s primarily there for opening files, choosing a few settings, and sending them off to the printer to become tangible objects.

Bambu Studio isn’t the most approachable application, but if you’ve made it this far, it shouldn’t be totally beyond your comprehension. For first-time setup, you’ll choose your model of printer (all Bambu models and a healthy selection of third-party printers are officially supported), leave the filament settings as they are, and sign in if you want to use Bambu’s cloud services. These sync printer settings and keep track of the models you save and download from MakerWorld, but a non-cloud LAN mode is available for the Bambu skeptics and privacy-conscious.

For any newbie, pretty much all you need to do is connect your printer, open a .3MF or .STL file you’ve downloaded from MakerWorld or elsewhere, select your filament from the drop-down menu, click “slice plate,” and then click “print.” Things like the default 0.4 mm nozzle size and Bambu’s included Textured PEI Build Plate are generally already factored in, though you may need to double-check these selections when you open a file for the first time.

When you slice your build plate for the first time, the app will spit a pile of numbers back at you. There are two important ones for 3D printing neophytes to track. One is the “total filament” figure, which tells you how many grams of filament the printer will use to make your model (filament typically comes in 1 kg spools, and the printer generally won’t track usage for you, so if you want to avoid running out in the middle of the job, you may want to keep track of what you’re using). The second is the “total time” figure, which tells you how long the entire print will take from the first calibration steps to the end of the job.
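
Since the printer won’t do that bookkeeping for you, even a tiny script can track a spool. Here’s a minimal sketch (a hypothetical helper of my own, not part of Bambu Studio or its API) that subtracts each print’s “total filament” grams from a standard 1 kg spool:

```python
# filament_log.py -- track remaining filament on a spool.
# A hypothetical helper, not part of Bambu Studio; the grams-per-print
# figures come from Bambu Studio's post-slice "total filament" summary.
import json
from pathlib import Path

LOG = Path("spool.json")
SPOOL_GRAMS = 1000.0  # a standard 1 kg spool

def log_print(name: str, grams: float) -> float:
    """Record one print's filament usage and return grams remaining."""
    data = json.loads(LOG.read_text()) if LOG.exists() else {"used": 0.0, "prints": []}
    data["used"] += grams
    data["prints"].append({"name": name, "grams": grams})
    LOG.write_text(json.dumps(data, indent=2))
    return SPOOL_GRAMS - data["used"]

if __name__ == "__main__":
    remaining = log_print("fan-remote-bracket", 14.2)  # example values
    print(f"{remaining:.1f} g left on the spool")
```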

Selecting your filament and/or temperature presets. If you have the Automatic Material System (AMS), this is also where you’ll manage multicolor printing. Credit: Andrew Cunningham

When selecting filament, people who stick to Bambu’s first-party spools will have the easiest time, since optimal settings are already programmed into the app. But I’ve had almost zero trouble with the “generic” presets and the spools of generic Inland-branded filament I’ve bought from our local Micro Center, at least when sticking to PLA (polylactic acid, the most common and generally the easiest-to-print of the different kinds of filament you can buy). But we’ll dive deeper into plastics in part 2 of this series.

I won’t pretend I’m skilled enough to do a deep dive on every single setting that Bambu Studio gives you access to, but here are a few of the odds and ends I’ve found most useful:

  • The “clone” function, accessed by right-clicking an object and clicking “clone.” Useful if you’d like to fit several copies of an object on the build plate at once, especially if you’re using a filament with a color gradient and you’d like to make the gradient effect more pronounced by spreading it out over a bunch of prints.
  • The “arrange all objects” function, the fourth button from the left under the “prepare” tab. Did you just clone a bunch of objects? Did you delete an individual object from a model because you didn’t need to print that part? Bambu Studio will arrange everything on your build plate to optimize the use of space.
  • Layer height, located in the sidebar directly beneath “Process” (which is directly underneath the area where you select your filament). For many functional parts, the standard 0.2 mm layer height is fine. Going with thinner layer heights adds to the printing time but can preserve more detail on prints that have a lot of it and slightly reduce the visible layer lines that give 3D-printed objects their distinct look (for better or worse). Thicker layer heights do the opposite, slightly reducing the amount of time a model takes to print but preserving less detail.
  • Infill percentage and wall loops, located in the Strength tab beneath the “Process” sidebar item. For most everyday prints, you don’t need to worry about messing with these settings much; the infill percentage determines the amount of your print’s interior that’s plastic and the part that’s empty space (15 percent is a good happy medium most of the time between maintaining rigidity and overusing plastic). The number of wall loops determines how many layers the printer uses for the outside surface of the print, with more walls using more plastic but also adding a bit of extra strength and rigidity to functional prints that need it (think hooks, hangers, shelves and brackets, and other things that will be asked to bear some weight).

My first prints

A humble start: My very first print was a wall bracket for the remote for my office’s ceiling fan. Credit: Andrew Cunningham

When given the opportunity to use a 3D printer, my mind went first to aggressively practical stuff—prints for organizing the odds and ends that eternally float around my office or desk.

When we moved into our current house, only one of the bedrooms had a ceiling fan installed. I put up remote-controlled ceiling fans in all the other bedrooms myself. And all those fans, except one, came with a wall-mounted caddy to hold the remote control. The first thing I decided to print was a wall-mounted holder for that remote control.

MakerWorld is just one of several resources for ready-made 3D-printable files, but the ease with which I found a Hampton Bay Ceiling Fan Remote Wall Mount is pretty representative of my experience so far. At this point in the life cycle of home 3D printing, if you can think about it and it’s not a terrible idea, you can usually find someone out there who has made something close to what you’re looking for.

I loaded up my black roll of PLA plastic—generally the cheapest, easiest-to-buy, easiest-to-work-with kind of 3D printer filament, though not always the best for prints that need more structural integrity—into the basic roll-holder that comes with the A1, downloaded that 3MF file, opened it in Bambu Studio, sliced the file, and hit print. It felt like there should have been extra steps in there somewhere. But that’s all it took to kick the printer into action.

After a few minutes of warmup—by default, the A1 has a thorough pre-print setup process where it checks the levelness of the bed and tests the flow rate of your filament for a few minutes before it begins printing anything—the nozzle started laying plastic down on my build plate, and inside of an hour or so, I had my first 3D-printed object.

Print No. 2 was another wall bracket, this time for my gaming PC’s gamepad and headset. Credit: Andrew Cunningham

It wears off a bit after you successfully execute a print, but I still haven’t quite lost the feeling of magic of printing out a fully 3D object that comes off the plate and then just exists in space along with me and all the store-bought objects in my office.

The remote holder was, as I’d learn, a fairly simple print made under near-ideal conditions. But it was an easy success to start off with, and that success can help embolden you and draw you in, inviting more printing and more experimentation. And the more you experiment, the more you inevitably learn.

This time, I talked about what I learned about basic terminology and the different kinds of plastics most commonly used by home 3D printers. Next time, I’ll talk about some of the pitfalls I ran into after my initial successes, what I learned about using Bambu Studio, what I’ve learned about fine-tuning settings to get good results, and a whole bunch of 3D-printable upgrades and mods available for the A1.

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Trump admin tells Supreme Court: DOGE needs to do its work in secret


DOJ complains of “sweeping, intrusive discovery” after DOGE refused FOIA requests.

A protest over DOGE’s reductions to the federal workforce outside the Jacob K. Javits Federal Office Building on March 19, 2025 in New York City. Credit: Getty Images | Michael M. Santiago

The Department of Justice today asked the Supreme Court to block a ruling that requires DOGE to provide information about its government cost-cutting operations as part of court-ordered discovery.

President Trump’s Justice Department sought an immediate halt to orders issued by the US District Court for the District of Columbia. US Solicitor General John Sauer argued that the Department of Government Efficiency is exempt from the Freedom of Information Act (FOIA) as a presidential advisory body and not an official “agency.”

The district court “ordered USDS [US Doge Service] to submit to sweeping, intrusive discovery just to determine if USDS is subject to FOIA in the first place,” Sauer wrote. “That order turns FOIA on its head, effectively giving respondent a win on the merits of its FOIA suit under the guise of figuring out whether FOIA even applies. And that order clearly violates the separation of powers, subjecting a presidential advisory body to intrusive discovery and threatening the confidentiality and candor of its advice, putatively to address a legal question that never should have necessitated discovery in this case at all.”

The nonprofit watchdog group Citizens for Responsibility and Ethics in Washington (CREW) filed FOIA requests seeking information about DOGE and sued after DOGE officials refused to provide the requested records.

US District Judge Christopher Cooper has so far sided with CREW. Cooper decided in March that “USDS is likely covered by FOIA and that the public would be irreparably harmed by an indefinite delay in unearthing the records CREW seeks,” ordering DOGE “to process CREW’s request on an expedited timetable.”

Judge: DOGE is not just an advisor

DOGE then asked the district court for a summary judgment in its favor, and CREW responded by filing a motion for expedited discovery “seeking information relevant to whether USDS wields substantial authority independent of the President and is therefore subject to FOIA.” In an April 15 order, Cooper ruled that CREW is entitled to limited discovery into the question of whether DOGE is wielding authority sufficient to bring it within the purview of FOIA. Cooper hasn’t yet ruled on the motion for summary judgment.

“The structure of USDS and the scope of its authority are critical to determining whether the agency is ‘wield[ing] substantial authority independently of the President,'” the judge wrote. “And the answers to those questions are unclear from the record.”

Trump’s executive orders appear to support CREW’s argument by suggesting “that USDS is exercising substantial independent authority,” Cooper wrote. “As the Court already noted, the executive order establishing USDS ‘to implement the President’s DOGE Agenda’ appears to give USDS the authority to carry out that agenda, ‘not just to advise the President in doing so.'”

Not satisfied with the outcome, the Trump administration tried to get Cooper’s ruling overturned in the US Court of Appeals for the District of Columbia Circuit. The appeals court ruled against DOGE last week. The appeals court temporarily stayed the district court order in April but dissolved the stay on May 14 and denied the government’s petition.

“The government contends that the district court’s order permitting narrow discovery impermissibly intrudes upon the President’s constitutional prerogatives,” the appeals court said. But “the discovery here is modest in scope and does not target the President or any close adviser personally. The government retains every conventional tool to raise privilege objections on the limited question-by-question basis foreseen here on a narrow and discrete ground.”

US argues for secrecy

A three-judge panel at the appeals court was unswayed by the government’s claim that this process is too burdensome.

“Although the government protests that any such assertion of privilege would be burdensome, the only identified burdens are limited both by time and reach, covering as they do records within USDS’s control generated since January 20,” the ruling said. “It does not provide any specific details as to why accessing its own records or submitting to two depositions would pose an unbearable burden.”

Yesterday, the District Court set a discovery schedule requiring the government to produce all responsive documents within 14 days and complete depositions within 24 days. In its petition to the Supreme Court today, the Trump administration argued that DOGE’s recommendations to the president should be kept secret:

The district court’s requirement that USDS turn over the substance of its recommendations—even when the recommendations were “purely advisory”—epitomizes the order’s overbreadth and intrusiveness. The court’s order compels USDS to identify every “federal agency contract, grant, lease or similar instrument that any DOGE employee or DOGE Team member recommended that federal agencies cancel or rescind,” and every “federal agency employee or position that any DOGE employee or DOGE team member recommended” for termination or placement on administrative leave. Further, USDS must state “whether [each] recommendation was followed.”

It is difficult to imagine a more grievous intrusion and burden on a presidential advisory body. Providing recommendations is the core of what USDS does. Because USDS coordinates with agencies across the Executive Branch on an ongoing basis, that request requires USDS to review multitudes of discussions that USDS has had every day since the start of this Administration. And such information likely falls within the deliberative-process privilege almost by definition, as internal executive-branch recommendations are inherently “pre-decisional” and “deliberative.”

Lawsuit: “No meaningful transparency” into DOGE

The US further said the discovery “is unnecessary to answer the legal question whether USDS qualifies as an ‘agency’ that is subject to FOIA,” and is merely “a fishing expedition into USDS’s advisory activities under the guise of determining whether USDS engages in non-advisory activities—an approach to discovery that would be improper in any circumstance.”

CREW, like others that have sued the government over DOGE’s operations, says the entity exercises significant power without proper oversight and transparency. DOGE “has worked in the shadows—a cadre of largely unidentified actors, whose status as government employees is unclear, controlling major government functions with no oversight,” CREW’s lawsuit said. “USDS has provided no meaningful transparency into its operations or assurances that it is maintaining proper records of its unprecedented and legally dubious work.”

The Trump administration is fighting numerous DOGE-related lawsuits at multiple levels of the court system. Earlier this month, the administration asked the Supreme Court to restore DOGE’s access to Social Security Administration records after losing on the issue in both a district court and appeals court. That request to the Supreme Court is pending.

There was also a dispute over discovery when 14 states sued the federal government over Trump “delegat[ing] virtually unchecked authority to Mr. Musk without proper legal authorization from Congress and without meaningful supervision of his activities.” A federal judge ruled that the states could serve written discovery requests on Musk and DOGE, but the DC Circuit appeals court blocked the discovery order. In that case, appeals court judges said the lower-court judge should have ruled on a motion to dismiss before allowing discovery.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


meta-hypes-ai-friends-as-social-media’s-future,-but-users-want-real-connections

Meta hypes AI friends as social media’s future, but users want real connections


Two visions for social media’s future pit real connections against AI friends.

A rotting zombie thumbs-up buzzing with flies while the real zombies are the people in the background who can’t put their phones down. Credit: Aurich Lawson | Getty Images

If you ask the man who has largely shaped how friends and family connect on social media over the past two decades about the future of social media, you may not get a straight answer.

At the Federal Trade Commission’s monopoly trial, Meta CEO Mark Zuckerberg attempted what seemed like an artful dodge to avoid criticism that his company allegedly bought out rivals Instagram and WhatsApp to lock users into Meta’s family of apps so they would never post about their personal lives anywhere else. He testified that people actually engage with social media less often these days to connect with loved ones, preferring instead to discover entertaining content on platforms to share in private messages with friends and family.

As Zuckerberg spins it, Meta no longer perceives much advantage in dominating the so-called personal social networking market where Facebook made its name and cemented what the FTC alleged is an illegal monopoly.

“Mark Zuckerberg says social media is over,” a New Yorker headline said about this testimony in a report noting a Meta chart that seemed to back up Zuckerberg’s words. That chart, shared at the trial, showed the “percent of time spent viewing content posted by ‘friends'” had declined over the past two years, from 22 to 17 percent on Facebook and from 11 to 7 percent on Instagram.

Supposedly because of this trend, Zuckerberg testified that “it doesn’t matter much” if someone’s friends are on their preferred platform. Every platform has its own value as a discovery engine, Zuckerberg suggested. And Meta platforms increasingly compete on this new playing field against rivals like TikTok, Meta argued, while insisting that it’s not so much focused on beating the FTC’s flagged rivals in the connecting-friends-and-family business, Snap and MeWe.

But while Zuckerberg claims that hosting that kind of content doesn’t move the needle much anymore, owning the biggest platforms that people use daily to connect with friends and family obviously still matters to Meta, MeWe founder Mark Weinstein told Ars. And Meta’s own press releases seem to back that up.

Weeks ahead of Zuckerberg’s testimony, Meta announced that it would bring back the “magic of friends,” introducing a “friends” tab to Facebook to make user experiences more like the original Facebook. The company intentionally diluted feeds with creator content and ads for the past two years, but it now appears intent on trying to spark more real conversations between friends and family, at least partly to fuel its newly launched AI chatbots.

Those chatbots mine personal information shared on Facebook and Instagram, and Meta wants to use that data to connect more personally with users—but “in a very creepy way,” The Washington Post wrote. In interviews, Zuckerberg has suggested these AI friends could “meaningfully” fill the void of real friendship online, as the average person has only three friends but “has demand” for up to 15. To critics seeking to undo Meta’s alleged monopoly, this latest move could signal a contradiction in Zuckerberg’s testimony, showing that the company is so invested in keeping users on its platforms that it’s now creating AI friends (who can never leave its platform) to bait the loneliest among us into more engagement.

“The average person wants more connectivity, connection, than they have,” Zuckerberg said, hyping AI friends. For the Facebook founder, it must be hard to envision a future where his platforms aren’t the answer to providing that basic social need. All this comes more than a decade after he sought $5 billion in Facebook’s 2012 initial public offering so that he could keep building tools that he told investors would expand “people’s capacity to build and maintain relationships.”

At the trial, Zuckerberg testified that AI and augmented reality will be key fixtures of Meta’s platforms in the future, predicting that “several years from now, you are going to be scrolling through your feed, and not only is it going to be sort of animated, but it will be interactive.”

Meta declined to comment further on the company’s vision for social media’s future. In a statement, a Meta spokesperson told Ars that “the FTC’s lawsuit against Meta defies reality,” claiming that it threatens US leadership in AI and insisting that evidence at trial would establish that platforms like TikTok, YouTube, and X are Meta’s true rivals.

“More than 10 years after the FTC reviewed and cleared our acquisitions, the Commission’s action in this case sends the message that no deal is ever truly final,” Meta’s spokesperson said. “Regulators should be supporting American innovation rather than seeking to break up a great American company and further advantaging China on critical issues like AI.”

Meta faces calls to open up its platforms

Weinstein, the MeWe founder, told Ars that back in the 1990s when the original social media founders were planning the first community portals, “it was so beautiful because we didn’t think about bots and trolls. We didn’t think about data mining and surveillance capitalism. We thought about making the world a more connected and holistic place.”

But those who became social media overlords found more money in walled gardens and increasingly cut off attempts by outside developers to improve the biggest platforms’ functionality or leverage their platforms to compete for their users’ attention. Born of this era, Weinstein expects that Zuckerberg, and therefore Meta, will always cling to its friends-and-family roots, no matter which way Zuckerberg says the wind is blowing.

Meta “is still entirely based on personal social networking,” Weinstein told Ars.

In a Newsweek op-ed, Weinstein explained that he left MeWe in 2021 after “competition became impossible” with Meta. It was a time when MeWe faced backlash over lax content moderation, drawing comparisons between its service and right-wing apps like Gab or Parler. Weinstein rejected those comparisons, seeing his platform as an ideal Facebook rival and remaining a board member through the app’s more recent shift to decentralization. Still defending MeWe’s failed efforts to beat Facebook, he submitted hundreds of documents and was deposed in the monopoly trial, alleging that Meta retaliated against MeWe as a privacy-focused rival that sought to woo users away by branding itself the “anti-Facebook.”

Among his complaints, Weinstein accused Meta of thwarting MeWe’s attempts to introduce interoperability between the two platforms, which he thinks stems from a fear that users might leave Facebook if they discover a more appealing platform. That’s why he’s urged the FTC—if it wins its monopoly case—to go beyond simply ordering a potential breakup of Facebook, Instagram, and WhatsApp to also require interoperability between Meta’s platforms and all rivals. That may be the only way to force Meta to release its clutch on personal data collection, Weinstein suggested, and allow for more competition broadly in the social media industry.

“The glue that holds it all together is Facebook’s monopoly over data,” Weinstein wrote in a Wall Street Journal op-ed, recalling the moment he realized that Meta seemed to have an unbeatable monopoly. “Its ownership and control of the personal information of Facebook users and non-users alike is unmatched.”

Cory Doctorow, a special advisor to the Electronic Frontier Foundation, told Ars that his vision of a better social media future goes even further than requiring interoperability between all platforms. Social networks like Meta’s should also be made to allow reverse engineering so that outside developers can modify their apps with third-party tools without risking legal attacks, he said.

Doctorow said that solution would create “an equilibrium where companies are more incentivized to behave themselves than they are to cheat” by, say, retaliating against, killing off, or buying out rivals. And “if they fail to respond to that incentive and they cheat anyways, then the rest of the world still has a remedy,” Doctorow said, by having the choice to modify or ditch any platform deemed toxic, invasive, manipulative, or otherwise offensive.

Doctorow summed up the frustration that some users have faced through the ongoing “enshittification” of platforms (a term he coined) ever since platforms took over the Internet.

“I’m 55 now, and I’ve gotten a lot less interested in how things work because I’ve had too many experiences with how things fail,” Doctorow told Ars. “And I just want to make sure that if I’m on a service and it goes horribly wrong, I can leave.”

Social media haters wish OG platforms were doomed

Weinstein pointed out that Meta’s alleged monopoly impacts a group often left out of social media debates: non-users. And if you ask someone who hates social media what the future of social media should look like, they will not mince words: They want a way to opt out of all of it.

As Meta’s monopoly trial got underway, a personal blog post titled “No Instagram, no privacy” rose to the front page of Hacker News, prompting a discussion about social media norms and reasonable expectations for privacy in 2025.

In the post, Wouter-Jan Leys, a privacy advocate, explained that he felt “blessed” to have “somehow escaped having an Instagram account,” feeling no pressure to “update the abstract audience of everyone I ever connected with online on where I am, what I am doing, or who I am hanging out with.”

But despite never having an account, he’s found that “you don’t have to be on Instagram to be on Instagram,” complaining that “it bugs me” when friends seem to know “more about my life than I tell them” because of various friends’ posts that mention or show images of him. In his blog, he defined privacy as “being in control of what other people know about you” and suggested that because of platforms like Instagram, he currently lacked this control. There should be some way to “fix or regulate this,” Leys suggested, or maybe some universal “etiquette where it’s frowned upon to post about social gatherings to any audience beyond who already was at that gathering.”

On Hacker News, his post spurred a debate over one of the longest-running privacy questions swirling on social media: Is it OK to post about someone who abstains from social media?

Some seeming social media fans scolded Leys for being so old-fashioned about social media, suggesting, “just live your life without being so bothered about offending other people” or saying that “the entire world doesn’t have to be sanitized to meet individual people’s preferences.” Others seemed to better understand Leys’ point of view, with one agreeing that “the problem is that our modern norms (and tech) lead to everyone sharing everything with a large social network.”

Surveying the lively thread, another social media hater joked, “I feel vindicated for my decision to entirely stay off of this drama machine.”

Leys told Ars that he would “absolutely” be in favor of personal social networks like Meta’s platforms dying off or losing steam, as Zuckerberg suggested they already are. He thinks that the decline in personal post engagement that Meta is seeing is likely due to a combination of factors, where some users may prefer more privacy now after years of broadcasting their lives, and others may be tired of the pressure of building a personal brand or experiencing other “odd social dynamics.”

Setting user sentiments aside, Meta is also responsible for people engaging with fewer of their friends’ posts. Meta announced that it would double the amount of force-fed filler in people’s feeds on Instagram and Facebook starting in 2023. That’s when the two-year span begins that Zuckerberg measured in testifying about the sudden drop-off in friends’ content engagement.

So while it’s easy to say the market changed, Meta may be obscuring how much it shaped that shift. Degrading the newsfeed and changing Instagram’s default post shape from square to rectangle appears to have significantly shifted Instagram’s social norms, for example, creating an environment where Gen Z users felt less comfortable posting as prolifically as millennials did when Instagram debuted, The New Yorker explained last year. Where once millennials painstakingly designed immaculate grids of individual eye-catching photos to seem cool online, Gen Z users told The New Yorker that posting a single photo now feels “humiliating” and like a “social risk.”

But rather than eliminate the impulse to post, this cultural shift has popularized a different form of personal posting: staggered photo dumps, where users wait to post a variety of photos together to sum up a month of events or curate a vibe, the trend piece explained. And Meta is clearly intent on fueling that momentum, doubling the maximum number of photos that users can feature in a single post to encourage even more social posting, The New Yorker noted.

Brendan Benedict, an attorney for Benedict Law Group PLLC who has helped litigate big tech antitrust cases, is monitoring the FTC monopoly trial on a Substack called Big Tech on Trial. He told Ars that the evidence at the trial has shown that “consumers want more friends and family content, and Meta is belatedly trying to address this” with features like the “friends” tab, while claiming there’s less interest in this content.

Leys doesn’t think social media—at least the way that Facebook defined it in the mid-2000s—will ever die, because people will never stop wanting social networks like Facebook or Instagram to stay connected with all their friends and family. But he could see a world where, if people ever started truly caring about privacy or “indeed [got] tired of the social dynamics and personal brand-building… the kind of social media like Facebook and Instagram will have been a generational phenomenon, and they may not immediately bounce back,” especially if it’s easy to switch to other platforms that respond better to user preferences.

He also agreed that requiring interoperability would likely lead to better social media products, but he maintained that “it would still not get me on Instagram.”

Interoperability shakes up social media

Meta thought it might have already beaten the FTC’s monopoly case, filing a motion for summary judgment after the FTC rested its case in a bid to end the trial early. That dream was quickly dashed when the judge denied the motion days later. But no matter the outcome of the trial, Meta’s influence over the social media world may be waning just as it’s facing increasing pressure to open up its platforms more than ever.

The FTC has alleged that Meta weaponized platform access early on, only allowing certain companies to interoperate and denying access to anyone perceived as a threat to its alleged monopoly power. That includes limiting promotions of Instagram to keep users engaged with Facebook Blue. A primary concern for Meta (then Facebook), the FTC claimed, was avoiding “training users to check multiple feeds,” which might allow other apps to “cannibalize” its users.

“Facebook has used this power to deter and suppress competitive threats to its personal social networking monopoly. In order to protect its monopoly, Facebook adopted and required developers to agree to conditional dealing policies that limited third-party apps’ ability to engage with Facebook rivals or to develop into rivals themselves,” the FTC alleged.

By 2011, the FTC alleged, then-Facebook had begun terminating API access to any developers that made it easier to export user data into a competing social network without Facebook’s permission. That practice only ended when the UK parliament started calling out Facebook’s anticompetitive conduct toward app developers in 2018, the FTC alleged.

According to the FTC, Meta continues “to this day” to “screen developers and can weaponize API access in ways that cement its dominance,” and if scrutiny ever subsides, Meta is expected to return to such anticompetitive practices as the AI race heats up.

One potential hurdle for Meta could be that the push for interoperability is not just coming from the FTC or lawmakers who recently reintroduced bipartisan legislation to end walled gardens. Doctorow told Ars that “huge public groundswells of mistrust and anger about excessive corporate power” that “cross political lines” are prompting global antitrust probes into big tech companies and are perhaps finally forcing a reckoning after years of degrading popular products to chase higher and higher revenues.

For social media companies, mounting concerns about privacy and suspicions about content manipulation or censorship are driving public distrust, Doctorow said, as well as fears of surveillance capitalism. The latter includes theories that Doctorow is skeptical of. Weinstein embraced them, though, warning that platforms seem to be profiting off data without consent while brainwashing users.

Allowing users to leave the platform without losing access to their friends, their social posts, and their messages might be the best way to incentivize Meta to either genuinely compete for billions of users or lose them forever as better options pop up that can plug into their networks.

In his Newsweek op-ed, Weinstein suggested that web inventor Tim Berners-Lee has already invented a working protocol “to enable people to own, upload, download, and relocate their social graphs,” which maps users’ connections across platforms. That could be used to mitigate “the network effect” that locks users into platforms like Meta’s “while interrupting unwanted data collection.”

At the same time, Doctorow told Ars that increasingly popular decentralized platforms like Bluesky and Mastodon already provide interoperability and are next looking into “building interoperable gateways” between their services. Doctorow said that communicating with other users across platforms may feel “awkward” at first, but ultimately, it may be like “having to find the diesel pump at the gas station” instead of the unleaded gas pump. “You’ll still be going to the same gas station,” Doctorow suggested.

Opening up gateways into all platforms could be useful in the future, Doctorow suggested. Imagine if one platform goes down—it would no longer disrupt communications as drastically, as users could just pivot to communicate on another platform and reach the same audience. The same goes for platforms that users grow to distrust.

The EFF supports regulators’ attempts to pass well-crafted interoperability mandates, Doctorow said, noting that “if you have to worry about your users leaving, you generally have to treat them better.”

But would interoperability fix social media?

The FTC has alleged that “Facebook’s dominant position in the US personal social networking market is durable due to significant entry barriers, including direct network effects and high switching costs.”

Meta disputes the FTC’s complaint as outdated, arguing that its platform could be substituted by pretty much any social network.

However, Guy Aridor, a co-author of a recent article called “The Economics of Social Media” in the Journal of Economic Literature, told Ars that dominant platforms are probably threatened by shifting social media trends and are likely to remain “resistant to interoperability” because “it’s in the interest of the platform to make switching and coordination costs high so that users are less likely to migrate away.” For Meta, Aridor said, research shows that its platforms’ network effects have weakened somewhat but “clearly still exist,” even as social media users increasingly seek out content rather than just socialization.

Interoperability advocates believe it will make it easier for startups to compete with giants like Meta, which fight hard and sometimes seemingly dirty to keep users on their apps. Reintroducing the ACCESS Act, which requires platform compatibility to enable service switching, Senator Mark R. Warner (D-Va.) said that “interoperability and portability are powerful tools to promote innovative new companies and limit anti-competitive behaviors.” He’s hoping that passing these “long-overdue requirements” will “boost competition and give consumers more power.”

Aridor told Ars it’s obvious that “interoperability would clearly increase competition,” but he still has questions about whether users would benefit from that competition “since one consistent theme is that these platforms are optimized to maximize engagement, and there’s numerous empirical evidence we have by now that engagement isn’t necessarily correlated with utility.”

Consider, Aridor suggested, how toxic content often leads to high engagement but lower user satisfaction, as MeWe experienced during its 2021 backlash.

Aridor said there is currently “very little empirical evidence on the effects of interoperability,” but theoretically, if it increased competition in the current climate, it would likely “push the market more toward supplying engaging entertainment-related content as opposed to friends and family type of content.”

Benedict told Ars that a remedy like interoperability would likely only be useful to combat Meta’s alleged monopoly following a breakup, which he views as the “natural remedy” following a potential win in the FTC’s lawsuit.

Without the breakup and other meaningful reforms, a Meta win could preserve the status quo and see the company never open up its platforms, perhaps perpetuating Meta’s influence over social media well into the future. And if Zuckerberg’s vision comes to pass, instead of seeing what your friends are posting on interoperating platforms across the Internet, you may have a dozen AI friends trained on your real friends’ behaviors sending you regular dopamine hits to keep you scrolling on Facebook or Instagram.

Aridor’s team’s article suggested that, regardless of user preferences, social media remains a permanent fixture of society. If that’s true, users could get stuck forever using whichever platforms connect them with the widest range of contacts.

“While social media has continued to evolve, one thing that has not changed is that social media remains a central part of people’s lives,” his team’s article concluded.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


trump’s-trade-war-risks-splintering-the-internet,-experts-warn

Trump’s trade war risks splintering the Internet, experts warn


Trump urged to rethink trade policy to block attacks on digital services.

In sparking his global trade war, Donald Trump seems to have maintained a glaring blind spot when it comes to protecting one of America’s greatest trade advantages: the export of digital services.

Experts have warned that the consequences for Silicon Valley could be far-reaching.

In a report released Tuesday, an intelligence firm that tracks global trade risks, Allianz Trade, shared results of a survey of 4,500 firms worldwide, designed “to capture the impact of the escalation of trade tensions.” Amid other key findings, the group warned that the US’s fixation on the country’s trillion-dollar goods deficit risks rocking “the fastest-growing segment of global trade,” America’s “invisible exports” of financial and digital services.

Tracking these exports is challenging, as many services are provided through foreign affiliates, the report noted, but recent estimates “reveal a large digital trade surplus of at least $600 billion for the US, spread across categories like digital advertising, video streaming, cloud platforms, and online payment services.”

According to Allianz Trade, “the scale of this hidden trade is immense.” These “hidden” exports have “far” outpaced “the growth of goods exports over the past two decades,” their report said, but because of how these services are delivered, “this trade goes uncounted in traditional statistics.”

If Trump doesn’t “rethink trade policy and narratives” soon to start tracking all this trade more closely, he risks undermining this trade advantage—which Allianz Trade noted “is underpinned by America’s innovative firms and massive data infrastructure”—at a time when he’s in trade talks with most of the world and could be leveraging that advantage.

“US digital exports now represent a significant share of world trade (about 3.6 percent of all global trade, and growing fast),” Allianz Trade reported. “These ‘invisible’ exports boost US trade revenues without filling any container ships, underscoring a new reality: routers and data centers are as strategically important as ports and factories in sustaining US leadership.”

Without a pivot, Trump’s current trade tactics—requiring all countries impacted by reciprocal tariffs to strike a deal before July 8, while acknowledging that there won’t be time to meet with every country—could even threaten US dominance as “the world’s digital content and tech services hub,” Allianz Trade suggested.

US trade partners are already “looking into tariffs or taxes on digital services as a retaliation tool that could cause pain to the US,” the report warned. And other experts agreed that if such countermeasures become permanent fixtures in global trade, it could significantly hurt the US tech industry, perhaps even splintering the Internet, as companies are forced to customize services according to where different users are located.

Jovan Kurbalija, a former diplomat and executive director of the DiploFoundation who has monitored the Internet’s impact on global trade for more than 20 years, warned in an April blog that this could have a “more profound impact” on the US than other retaliatory measures.

“If the escalation of trade tensions moves into the digital realm, it could have far-reaching consequences for Silicon Valley giants and the digital economy worldwide,” Kurbalija wrote.

“The silent war over digital services”

The threat of retaliatory tariffs hitting the digital services industry has loomed large since European Commission President Ursula von der Leyen confirmed to the Financial Times last month that she was proactively developing such countermeasures if Trump’s trade talks with the European Union failed.

Those measures could potentially include “a tax on digital advertising revenues that would hit tech groups such as Amazon, Google and Facebook,” the FT reported. But perhaps most alarmingly, they may also include “tariffs on the services trade between the US and the EU.” Unlike the digital sales tax—which could be imposed differently by EU member states to significantly hurt tech giants’ ad revenues in various regions—the tariff would be applied across a single EU-wide market.

Kurbalija suggested that the problem goes beyond the EU.

Trump’s aggressive tariffs on goods have handed “the EU and others both moral and tactical pretexts to fast-track digital taxes” as countermeasures, Kurbalija wrote. He’s also given foreign governments an appealing narrative of “reclaiming revenue from foreign tech ‘free riders,'” Kurbalija wrote, while perhaps accelerating the broader “use of digital service taxes as a diplomatic tool” to “pressure the US into balanced negotiations.”

For tech companies, the taxes risk escalating trade tensions, potentially perpetuating the atmosphere of uncertainty that, Allianz Trade reported, has US firms scrambling to secure reliable, affordable supply chains.

In an op-ed discussing potential harms to US tech firms and startups, the CEO of CareYaya Health Technologies, Neal K. Shah, warned that “tariffs on digital services would directly reduce revenues for American tech companies.”

At the furthest extreme, the “digital trade war threatens to splinter the Internet’s integrated infrastructure,” Kurbalija warned, fragmenting the Internet in a way that could “undermine decades of gradual development of technological interconnectedness.”

Imagine, Shah suggested, that on top of increased hardware costs, tech companies also incurred costs of providing services for “parallel digital universes with incompatible standards.” Users traveling to different locations might find that platforms have “different features, prices, and capabilities,” he said.

“For startups and industry innovators,” Shah predicted, “fragmentation means higher compliance costs, reduced market access, and slower growth.” Such a world also risks ending “the era of globally scalable digital platforms,” decreasing investor interest in tech, and reducing the global GDP “by up to 5 percent over the next decade as digital trade barriers multiply,” Shah said. And if digital services tariffs become a permanent fixture of global trade, Shah suggested that it could, in the long term, undermine American tech dominance, including in fields critical to national security, like artificial intelligence.

“Trump’s tariffs may dominate today’s headlines, but the silent war over digital services will define tomorrow’s economy,” Kurbalija wrote.

Trump’s go-to countermeasure is still tariffs

Trump has responded to threats of digital services taxes with threats of more tariffs, arguing that “only America should be allowed to tax American firms,” Reuters reported. In February, Trump issued a memo calling for research into the best responsive measures to counter threats of digital service taxes, including threatening more tariffs.

It’s worth asking whether Trump’s tactics are working the way he intends if the US plans to keep up this outdated trade strategy. Allianz Trade’s survey found that many US firms—rather than moving their operations into the US, as Trump has demanded—are instead rerouting supply chains through “emerging trade hubs” like Southeast Asia, the United Arab Emirates, Saudi Arabia, and Latin American countries where tariff rates are currently lower.

Likely even more frustrating to Trump, however, is a finding that 50 percent of US firms surveyed confirmed they are considering increasing investments in China, in response to the US abruptly shifting tariffs tactics. Only 8 percent said they’re considering decreasing Chinese investments.

It’s unclear if tech companies will be adequately shielded by the US threat of tariffs as the potential default countermeasure to digital services taxes or tariffs. Perhaps Trump’s memo will surface more novel tactics that interest the administration. But Allianz Trade suggested that Trump may be stuck in the past with a trade strategy focused too much on goods at a time when the tech industry needs more modern tactics to keep America’s edge in global markets.

“An economy adept at producing globally demanded services—from cloud software to financial engineering—is less reliant on physical supply chains and less vulnerable to commodity swings,” Allianz Trade reported. “The US edge in digital and financial services is not just an anecdote in the trade ledger; it has become a structural advantage.”

How would digital services tariffs even work?

Trump’s trade math so far has been criticized by economists as a “trillion-dollar tariff disappointment” that at times imposed baffling tariff rates that appeared to be generated by chatbots. But part of the trade math moving forward will also likely be deducing if nations threatening digital services taxes or tariffs can actually follow through on those threats.

Bertin Martens, a senior fellow at a European economics-focused think tank called Bruegel, broke down in April how practical it could be for the EU to attack digital platforms, noting, “there is a question of whether such retaliation is even feasible.”

The EU could possibly use a law known as the Anti-Coercion Regulation—which grants officials authority to lob countermeasures when facing “foreign economic coercion”—to impose digital services tariffs.

But “platforms with substantive presence in the EU cannot be the target of trade measures” under that law, Martens noted. That could create a carveout for the biggest tech giants who have operations in the EU, Martens suggested, but only if those operations are deemed “substantive,” a term that the law does not clearly define.

To make that determination, officials would need “detailed information on the locations or nationalities” of all the users that platforms bring together, including buyers, sellers, advertisers and other parties, Martens said.

This makes digital services platforms “particularly difficult to target,” he suggested. And lawmakers could risk backlash if “any arbitrary decision to invoke” the law risks “imposing a tax on EU users without retaliatory effect on the US.”

While tech companies will have to wait for the trade war to play out—likely planning to increase prices, Allianz Trade found, rather than bear the brunt of new costs—Shah suggested that there could be one clear winner if Trump doesn’t reprioritize shielding digital services exports in the way that experts recommend.

“A surprising potential consequence of digital tariffs could be the accelerated development and adoption of open-source technologies,” Shah wrote. “As proprietary digital products and services become subject to cross-border tariffs, open-source alternatives—which can be freely shared, modified, and distributed—may gain significant advantages.”

If costs get too high, Shah suggested that even tech giants might “increasingly turn to open-source solutions that can be locally deployed without triggering tariff thresholds.” Such a shift could potentially “profoundly affect the competitive landscape in areas like cloud infrastructure, AI frameworks, and enterprise software,” Shah wrote.

In that imagined future where open-source alternatives rule the world, Shah said, tariff systems targeting digital imports could become ineffective, “inadvertently driving adoption toward open-source alternatives that generate less economic leverage.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


the-codex-of-ultimate-vibing

The Codex of Ultimate Vibing

While we wait for wisdom, OpenAI releases a research preview of a new software engineering agent called Codex, because they previously released a lightweight open-source coding agent that runs in the terminal called Codex CLI, and if OpenAI uses non-confusing product names it violates the nonprofit charter. The promise, also reflected in a number of rival coding agents, is to graduate from vibe coding. Why not let the AI do all the work on its own, typically for 1-30 minutes?

The answer is that it’s still early days, but already many report this is highly useful.

Sam Altman: today we are introducing codex.

it is a software engineering agent that runs in the cloud and does tasks for you, like writing a new feature or fixing a bug.

you can run many tasks in parallel.

it is amazing and exciting how much software one person is going to be able to create with tools like this. “you can just do things” is one of my favorite memes;

i didn’t think it would apply to AI itself, and its users, in such an important way so soon.

OpenAI: Today we’re launching a research preview of Codex: a cloud-based software engineering agent that can work on many tasks in parallel. Codex can perform tasks for you such as writing features, answering questions about your codebase, fixing bugs, and proposing pull requests for review; each task runs in its own cloud sandbox environment, preloaded with your repository.

Codex is powered by codex-1, a version of OpenAI o3 optimized for software engineering. It was trained using reinforcement learning on real-world coding tasks in a variety of environments to generate code that closely mirrors human style and PR preferences, adheres precisely to instructions, and can iteratively run tests until it receives a passing result.

Once Codex completes a task, it commits its changes in its environment. Codex provides verifiable evidence of its actions through citations of terminal logs and test outputs, allowing you to trace each step taken during task completion. You can then review the results, request further revisions, open a GitHub pull request, or directly integrate the changes into your local environment. In the product, you can configure the Codex environment to match your real development environment as closely as possible.

Codex can be guided by AGENTS.md files placed within your repository. These are text files, akin to README.md, where you can inform Codex how to navigate your codebase, which commands to run for testing, and how best to adhere to your project’s standard practices. Like human developers, Codex agents perform best when provided with configured dev environments, reliable testing setups, and clear documentation.
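For a sense of what this looks like in practice, here is a minimal hypothetical AGENTS.md; the file name and role come from OpenAI’s description above, but the repository layout, commands, and conventions below are invented for illustration.

```markdown
# AGENTS.md (hypothetical example)

## Repository layout
- src/: application code (Python 3.11)
- tests/: pytest suite mirroring the structure of src/

## Testing
Run `pytest -q` from the repository root before proposing any change.
Lint with `ruff check src tests`; both must pass before opening a PR.

## Conventions
- Follow the existing Google-style docstrings.
- Keep each pull request scoped to a single issue.
```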

On coding evaluations and internal benchmarks, codex-1 shows strong performance even without AGENTS.md files or custom scaffolding.

All code is provided via GitHub repositories. All codex executions are sandboxed in the cloud. The agent cannot access external websites, APIs or other services. Afterwards you are given a comprehensive log of its actions and changes. You then choose to get the code via pull requests.

Note that while it lacks internet access during its core work, it can still install dependencies before it starts. But there are reports of struggles with its inability to install dependencies while it runs, which seems like a major issue.

Inability to access the web also makes some things trickier to diagnose, figure out or test. A lot of my frustration with AI coding is everything I want to do seems to involve interacting with persnickety websites.

This is a ‘research preview,’ and the worst Codex will ever be, although it might temporarily get less affordable once the free preview period ends. It does seem like they have given this a solid amount of thought and taken reasonable precautions.

The question is, when is this a better way to code than Cursor or Claude Code, and how does this compare to existing coding agents like Devin?

It would have been easy, given everything that happened, for OpenAI to have said ‘we do not need to give you a system card addendum, this is in preview and not a fully new model, etc.’ It is thus to their credit that they gave us the card anyway. It is short, but there is no need for it to be long.

As you would expect, the first thing that stood out was 2.3, ‘falsely claiming to have completed a task it did not complete.’ This seems to be a common pattern in similar models, including Claude 3.7.

I believe this behavior is something you want to fight hard to avoid having the AI learn in the first place. Once the AI learns to do this, it is difficult to get rid of it, but it wouldn’t learn it if you weren’t rewarding it during training. It is avoidable in theory. Is it avoidable in practice? I don’t know if the price is worthwhile, but I do know it’s worth a lot to avoid it.

OpenAI does indeed try, but with positive action rather than via negativa. Their plan is ensuring that the model is penalized for producing results inconsistent with its actions, and rewarded for acknowledging limitations. Good. That was a big help, going from 15% to 85% chance of correctly stating it couldn’t complete tasks. But 85% really isn’t 99%.

As in, I think if you include some things that push against pretending to solve problems, that helps a lot (hence the results here), but if you also have other places that pretending is rewarded, there will be a pattern, and then you still have a problem, and it will keep getting bigger. So instead, track down every damn place in which the AI could get away with claiming to have solved a task during training without having solved it, and make sure you always catch all of them. I know this is asking a lot.

They solve prompt injection via network sandboxing. That definitely does the job for now, and they also made sure that prompt injections inside the coding environment mostly failed. Good.

Finally we have the preparedness team affirming that the model did not reach high risk in any categories. I’d have liked to see more detail here, but overall This Is Fine.

Want to keep using the command line? OpenAI gives you a smaller version of codex-1 built on o4-mini (codex-mini-latest) as an upgrade. They’re also introducing a simpler onboarding process for it and offering some free credits.

These look like a noticeable improvement over o4-mini-high and even o3-high. Codex-mini-latest will be priced at $1.50 per million input tokens and $6 per million output tokens, with a 75% prompt caching discount. They are also setting a great precedent by sharing the system message.
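To make that pricing concrete, here is a minimal sketch of what a single task might cost; the rates are the announced ones above, while the task sizes and cache hit rate are invented assumptions.

```python
# Hypothetical cost estimate at the announced codex-mini-latest rates:
# $1.50 per million input tokens, $6 per million output tokens,
# with a 75% discount on cached prompt tokens.
INPUT_RATE = 1.50 / 1_000_000   # dollars per input token
OUTPUT_RATE = 6.00 / 1_000_000  # dollars per output token
CACHE_DISCOUNT = 0.75           # cached input tokens cost 25% of the full rate

def task_cost(input_tokens: int, cached_fraction: float, output_tokens: int) -> float:
    """Estimate the dollar cost of one task under the announced pricing."""
    cached = input_tokens * cached_fraction
    fresh = input_tokens - cached
    input_cost = fresh * INPUT_RATE + cached * INPUT_RATE * (1 - CACHE_DISCOUNT)
    return input_cost + output_tokens * OUTPUT_RATE

# Invented example: a 100k-token prompt, 80% of it cache-hit, 10k tokens of output.
print(f"${task_cost(100_000, 0.8, 10_000):.2f}")  # -> $0.12
```

Under these assumptions a typical task costs pennies; the numbers shift with prompt size and caching, but the shape of the calculation stays the same.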

Greg Brockman speculates that over time the ‘local’ and ‘remote’ coding agents will merge. This makes sense. Why shouldn’t the local agent call additional remote agents to execute subtasks? Parallelism for the win. Nothing could possibly go wrong.

Immediate reaction to Codex was relatively muted. It takes a while for people to properly evaluate this kind of tool, and it is only available to those paying $200/month.

What feedback we do have is somewhat mixed. Cautious optimism, especially for what a future version could be, seems like the baseline.

Codex is the combination of an agent implementation with the underlying model. Reports seem consistent with the underlying model and async capabilities being excellent, and those both matter a lot, but the implementation needs work and is much less practically useful than rival agents: it requires more hand-holding, has a less clean UI, and runs slower.

That makes Codex in its current state a kind of ‘AI coding agent for advanced power users.’ You wouldn’t use the current Codex over the competition unless you understood what you were doing, and you wanted to do a lot of it.

The future of Codex looks bright. OpenAI in many senses started with ‘the hard part’ of having a great model and strong parallelism. The things still missing seem easily fixable over time.

One must also keep an eye out that OpenAI (especially via Greg Brockman) is picking out and amplifying positive feedback. It’s not yet clear how much of an upgrade this is over existing alternatives, especially as most reports don’t compare Codex to its rivals. That’s one reason I like to rely on my own Twitter reaction threads.

Then there’s Jules, Google’s coding assistant, which according to multiple sources is coming soon. Google will no doubt once again Fail Marketing Forever, but it seems highly plausible that Jules could be a better tool, and almost certain it will have a cheaper price tag.

What can it do?

Whatever those things are, it can do them fully in parallel. People seem to be underestimating this aspect of coding agents.

Alex Halliday: The killer feature of OpenAI Codex is parallelism.

Browser-based work is evolving: from humans handling tasks one tab at a time, to overseeing multiple AI agent tabs, providing feedback as needed.

The most important thing is the Task Relevant Maturity of these systems. You need to understand for which tasks systems like Codex can be used, which is a function of model capability and error tolerance. This is the “opportunity zone” for all AI systems, including ours @AirOpsHQ.

It can do legacy project migrations.

Flavio Adamo: I asked Codex to convert a legacy project from Python 2.7 to 3.11 and from Django 1.x to 5.0

It literally took 12 minutes. If you know, that’s usually weeks of pain. This is actually insane.

Haider: how much manual cleanup or review did it need after that initial pass?

Flavio Adamo: Not much, actually. Just a few Docker issues, solved in a couple of minutes.

Here’s Darwin Santos pumping out PRs and being very impressed.

Darwin Santos: Don’t mind us – it’s just @elvstejd and me knocking one PR after another with Codex. Thanks @embirico – @kevinweil. You weren’t joking with this being yet again a game changer.

Here’s Seconds being even more impressed, and sdmat being impressed with caveats.

0.005 Seconds: It’s incredible. The ux is mid and it’s missing features but the underlying model is so good that if you transported this to 2022 everyone would assume you have agi and put 70% of engineers into unemployment. 6 months of product engineering and it replaces teams.

It has been making insane progress in fairly complex scenarios on my personal project and I pretty effortlessly closed 7 tickets at work today. It obliterates small to medium tasks in familiar context.

Sdmat: Fantastic, though only part of what it will be and rough around the edges.

With no environment internet access, no agent search tool, and oriented to small-medium tasks it is currently a scalpel.

An excellent scalpel if you know what it is you want to cut.

Conrad Barski: this is right: its power is not that it can solve 50% of hard problems, it’s that it solves 99.9% of mid problems.

Sdmat: Exactly.

And mid problems comprise >90% of hard problems, so if you know what you are doing and can carve at the joints it is a very, very useful tool.

And here’s Riley Coyote being perhaps the most impressed, especially by the parallelism.

Riley Coyote: I’m *really* trying to play it cool here but like…

I’mma just say it: Codex might be the most impressive, most *powerful* AI product I’ve ever touched, all things considered. the async ability, especially, is on another level. like it’s not just a technical ‘leap’, it’s transcendent. I’ve used basically every ai coding tool and platform out there at least once, and nothing else is in the same class. it just works, ridiculously well. and I’ll admit, I didn’t want to like it. Maybe it’s stubborn loyalty to Claude – I love that retro GUI and the no-nonsense simplicity of Claude Code. There’s still something special there and ill always use it.

but, if I’m honest: that edge is kinda becoming irrelevant, because Codex feels like having a private, hyper-competent swarm – a crack team of 10/10 FS devs, but kinda *better* i think tbh.

it’s wild. at this rate, I might start shipping something new every single day, at least until I clear out my backlog (which, without exaggeration, is something like 35-40 ‘projects’ that are all ~70–85% done). this could not have come at a better time too. I desperately needed the combination of something like codex and much higher rate limits + a streamlined pipeline from my daily drive ai to db.

go try it out.

sidebar/tip: if you cant get over the initial hump, pop over to ai.studio.google.com and click the “build apps” button on the left hand side.

a bunch of sample apps and tools propagates and they’re actually really really really good one-click zero-shots essentially….

shits getting wild. and its only monday.

Bayram Annakov prefers Deep Research’s output for now on a sample task, but finds Codex to be promising as well, and it gets a B on an AI Product Engineer homework assignment.

Here’s Robbie Bouschery finding a bug in the first three minutes.

JB one shots a doodle jump game and gets 600k likes for the post, so clearly money well spent. Paul Couvert does the same with Gemini 2.5 although objectively the platform placement seems better in Codex’s version. Upgrade?

Reliability will always be a huge sticking point, right up until it isn’t. Being highly autonomous only matters if you can trust it.

Fleischman Mena: I’m reticent to use it on feature work: ~unchanged benchmarks & results look like o3 bolted to a SWE-bench finetune + git.

You seem to still need to baby it w/ gold-set context for decent outputs, so it’s unclear where alpha is vs. current reprompt grinds

It’s a nice “throw it in the bag, too” feature if you’re hitting GPT caps and don’t want to fan out to other services: But to me, it’s in the same category as task scheduling and the web agent: the “party trick” version of a better thing yet to come.

He points to a similar issue with Operator. I have access to Operator, but I don’t bother using it, largely because in many of the places where it is valuable it requires enough supervision I might as well do the job myself:

Henry: Does anyone use that ‘operator’ agent for anything?

Fleischman Mena: Not really.

Problem with web operators is that the REAL version of that product pretty much HAS to be made by a sin-eater like the leetcode cheating startup.

Nobody wants “we build a web botting platform but it’s useless whenever lots of bots would have an impact.”

You pretty much HAVE to commit to “we’re going to sell you the ability to destroy the internet commons with bots”,

-or accept you’re only selling the “party trick” version of what this software would actually be if implemented “properly” for its users.

The few times I tried to use Operator to do something that would have been highly annoying to do myself, it fell down and died, and I decided that unless other people started reporting great results I’d rather just wait for similar agents to get better.

Alex Mizrahi reports Codex engaging in ‘busywork,’ identifying and fixing a ‘bug’ that wasn’t actually a bug.

Scott Swingle tries Codex out and compares it to Mentat. A theme throughout is that Mentat is more polished and faster, whereas Codex has to rerun a bunch of stuff. He likes o3 as the underlying model more than Sonnet 3.7, but finds the current implementation to not yet be up to par.

Lemonaut mostly doesn’t see the alpha over using some combination of Devin and Cursor/Cline, and finds it terribly finicky, requiring hand-holding in ways Cline and Devin don’t, but does notice it solve a relatively difficult prompt. Again, that is compatible with o3 being a very good base model, but the implementation needing work.

People think about price all wrong.

Don’t think about relative price. Think about absolute benefits versus absolute price.

It doesn’t matter if ten times the price is ten times better. If ten times the price makes you 10% better, it’s an absolute steal.

Fleischman Mena: The sticking point is $2,160/year more than plus.

If you think Plus is a good deal at $240, the upgrade only makes sense if you GENUINELY believe

“This isn’t just better, it’s 10x better than plus, AND a better idea than subscribing to 9 other LLM pro plans.”

Seems dubious.

The $2,160 price issue is hard to ignore. That buys you ~43M o3 I/O tokens via API. War and Peace is ~750k tokens. Most codebases & outputs don’t come close.

If spend’s okay, you prob do better plugging an API key into a half dozen agent competitors; you’d still come out ahead.

The dollar price, even at the $200/month level, is chump change for a programmer relative to a substantial productivity gain. What matters is your time and your productivity. If this improves your productivity even a few percent over rival options, and there isn’t a principal-agent problem (aka you pay the cost and someone else gets the productivity gains), then it is worthwhile. So ask whether or not it does that.
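A quick back-of-the-envelope version of that argument, with both input figures invented purely for illustration:

```python
# Hypothetical break-even: what productivity gain justifies $200/month?
# Both inputs are assumptions for illustration, not real figures.
fully_loaded_cost = 250_000      # assumed annual cost of one programmer, in dollars
subscription = 200 * 12          # the $200/month plan, annualized

break_even_gain = subscription / fully_loaded_cost
print(f"{break_even_gain:.2%}")  # -> 0.96%, i.e. under a 1% gain covers the cost
```

On those assumed numbers, anything that makes a programmer even one percent more productive pays for itself.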

The other way this is the wrong approach is that it is only part of the $200/month package. You also get unlimited o3 and deep research use, among other products, which was previously the main attraction.

As a company, you are paying six figures for a programmer. Give them the best tools you can, whether or not this is the best tool.

This seems spot on to me:

Sully: I think agents are going to be split into 2 categories

Background & active

Background agents = stuff I don’t want to do (ux/speed doesn’t matter, but review + feedback does)

“Active agents” = things I want to do but 10x faster with agents (ux/speed matters, most apps are this)

Mat Ferrante: And I think they will be able to integrate with each other. Background leverages active one to execute quick stuff just like a user would. Active kicking off background tasks.

Sully: 100%.

Codex is currently in a weird spot. It wants to be background (or async) and is great at being async, but requires too much hand holding to let you actually ignore it for long. Once that is solved, things get a lot more interesting.



experts-alarmed-over-trump’s-promotion-of-deep-sea-mining-in-international-waters

Experts alarmed over Trump’s promotion of deep-sea mining in international waters


Critics call for an industry moratorium until more scientific data can be obtained.

Greenpeace activists protest on the opening morning of the annual Deep Sea Mining Summit on April 17, 2024 in London, England. Credit: Chris J. Ratcliffe for Greenpeace via Getty Images

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy and the environment.

In 2013, a deep-sea mining company named UK Seabed Resources contracted marine biologist Diva Amon and other scientists from the University of Hawaii at Manoa to survey a section of the seafloor in the Clarion-Clipperton Zone, a vast swath of international waters located in the Pacific Ocean that spans around 2 million square miles between Hawaii and Mexico.

The area is known to have an abundant supply of rocky deposits the size of potatoes called polymetallic nodules. They are rich in metals like nickel, cobalt, copper, and manganese, which have historically been used to make batteries and electric vehicles.

Someday, the company envisioned it might profit from mining them. But first it wanted to know more about the largely unexplored abyssal environment where they were found, Amon said.

Using a remotely operated vehicle equipped with cameras and lights, she began documenting life 2.5 miles deep.

On one of the robot’s first dives, an anemone-like creature with 8-foot-long billowing tentacles appeared about two feet above the seabed. It was attached to the stem of a sea sponge anchored on one of the valuable nodules.

Amon was overwhelmed with excitement. It was likely a new species, she said. She also felt a sense of grief. “Here was this incredibly beautiful animal,” she said, “that no one has likely ever seen before.” And they might not ever again. “I feel this immense sadness at the potential that this place that we have come to survey may be mined and essentially destroyed in the future,” she remembers thinking at that moment.

Now, more than a decade later, Amon worries her fears may be coming to fruition.

“The next gold rush”

On April 24, President Trump signed an executive order promoting deep-sea mining in the US and international waters, touting the industry’s potential to boost the country’s economic growth and national security.

“These resources are key to strengthening our economy, securing our energy future, and reducing dependence on foreign suppliers for critical minerals,” the order states.

In an online post last month, the National Oceanic and Atmospheric Administration (NOAA) described the political move as a step toward paving the way for “The Next Gold Rush,” stating: “Critical minerals are used in everything from defense systems and batteries to smartphones and medical devices. Access to these minerals is a key factor in the health and resilience of US supply chains.”

The order, titled “Unleashing America’s Offshore Critical Minerals and Resources,” charges NOAA and the Secretary of Commerce with expediting the process for reviewing and issuing licenses to explore and permits to mine seabed minerals in areas beyond national jurisdiction.

Less than a week after it was issued, a US subsidiary of The Metals Company, a Canadian deep-sea mining corporation, submitted its first applications to explore and exploit polymetallic nodules in the Clarion-Clipperton Zone.

If approved, the company could be the first to mine in international waters. It would also be the first to do so under US law, sparking a rebuke from those opposed to the industry. These ocean advocates say the risks of mining far outweigh its benefits, given the vital role a healthy deep-sea ecosystem plays in managing the global climate by absorbing heat and excess carbon dioxide.

During a House Committee on Natural Resources oversight hearing on the potential impact of deep-sea mining on the American economy—held in April on the same day The Metals Company made its announcement—US Rep. Jared Huffman (D-Calif.) critiqued the president’s order.

“Despite what proponents claim, it is not the great silver bullet,” he said. “The industry has very questionable market prospects because battery technology is rapidly changing. [Electric vehicle] markets are already moving away from the nickel, cobalt, copper and manganese found in deep-sea nodules towards other minerals.”

A vast resistance

Prior to the president’s order, more than 900 leading scientists and marine policy experts from over 70 countries, including Amon from Trinidad and Tobago, had signed a statement calling for a precautionary pause on deep-sea mining until more scientific data was obtained to prove related activity would not harm the marine environment.

Thirty-three countries, including Canada, France, the United Kingdom, and a number of Pacific Island countries like Fiji and Vanuatu, are also calling for a moratorium or outright ban on deep-sea mining, according to the Deep Sea Conservation Coalition, an alliance of more than 100 organizations dedicated to protecting the ocean’s depths.

“You cannot authorize mining that’s going to cause biodiversity loss, that’s going to cause irreparable damage to the marine environment, that is going to potentially drive species extinct before we even discover them, until you can sort all that out, until you have enough knowledge to understand how you can prevent that kind of stuff from happening,” said Matthew Gianni, the coalition’s co-founder and political and policy advisor.

Some Indigenous peoples say deep-sea mining also threatens their cultural heritage. Native Hawaiians, for example, believe the deep sea is where life began.

“The action of deep-sea mining is such a destructive process, and that process now intrudes into this place, in the story of my beginning, my creation,” said Solomon Pili Kahoʻohalahala, a seventh-generation Indigenous Hawaiian elder and descendant from the island of Lānaʻi.

Legal experts also question whether Trump can authorize this activity.

The International Seabed Authority (ISA) is the only organization that can legally approve mining in international waters, sometimes referred to as high seas or the “Area,” according to Duncan Currie, an attorney who has practiced international and environmental law for more than 25 years. The organization was established under the 1982 United Nations Convention on the Law of the Sea (UNCLOS), an international treaty that provides a legal framework for governing maritime rights related to shipping, navigation, marine commerce, and the peaceful and sustainable use of ocean resources.

Currie said Trump’s new order falsely asserts decision-making power over international waters, citing an outdated law called the Deep Seabed Hard Mineral Resources Act (DSHMRA). The act was passed in 1980—two years before UNCLOS was established—with the intent of serving as a temporary mechanism for regulating deep-sea mining until an international oversight body could be put in place. But the convention has never been ratified by the US Senate.

“It has always been seen as an interim or bootstraps provision,” said Currie, who provided expert legal testimony at the House Committee on Natural Resources hearing on deep-sea mining in April.

To grant companies permission to mine the deep sea under US law in areas far outside the country’s jurisdiction is unlawful, he said in an interview.

“That would be a breach of international law without a shadow of a doubt,” he said. It would also set a dangerous precedent, Currie said. “If the United States can do it, other countries can do it. And so this is very concerning.”

The International Seabed Authority’s Secretary-General Leticia Reis de Carvalho responded to Trump’s order in a statement: “This can only refer to resources found on the US seabed and ocean floor because everything beyond is the common heritage of humankind,” Carvalho said. “No State has the right to unilaterally exploit the mineral resources of the Area outside the legal framework established by UNCLOS.” This applies to all nations, including those who have not ratified the treaty, like the US, she said.

Since the US never signed or ratified the treaty, it is not a voting member of the ISA, which includes 169 member states, plus the European Union. But, for the last 30 years, the US has still been an active participant in ISA negotiations aimed at developing industry regulations in a Mining Code, according to Carvalho.

“The US has been a reliable observer and significant contributor to the negotiations of the International Seabed Authority, actively providing technical expertise to each stage of the development of the ISA regulatory framework,” she said in her statement.

It is all the more “surprising,” she said, that the US would now preemptively circumvent the code the ISA aims to adopt later this year.

“It is the foundation for ensuring that any activities in the Area benefit all humanity, for present and future generations, while protecting the marine environment,” Carvalho’s statement said.

Into the deep

Below 650 feet, rays of sunlight cease to pierce the deep ocean, which makes up the planet’s largest ecosystem.

“It provides more than 95 percent of all the habitable space on Earth,” said Amon, who explored parts of the Clarion-Clipperton Zone in 2013 and 2015 as a contractor for UK Seabed Resources, a company once owned by Lockheed Martin and acquired in 2023 by Norway’s Loke Marine Minerals. Loke filed for bankruptcy in April.

Amon has co-led or participated in deep-sea scientific expeditions in the Caribbean, the Gulf of Mexico, and the Mariana Trench National Marine Monument in the Pacific Ocean, among other places. “There’s new estimates that it’s actually .001 percent of the deep sea that has ever been seen with human eyes or camera,” she said. “We really, really haven’t scratched the surface.”

It is at these depths where thousands of species—the majority of which have yet to be identified or described—have specially adapted to live, she said. “From sharks that glow in the dark to blind white crabs that farm bacteria on their chests that they eat to corals that can live for millennia.”

Much of this life revolves around or depends upon the polymetallic nodules that mining companies plan to extract using massive industrial machinery.

“That process is going to destroy any biodiversity in the path of the vehicle because a lot of these animals can’t move,” Amon said.

Similar to a pearl, each of these nodules began as a shark tooth or a single piece of sediment that accrued layers of metals and minerals from the seawater “at a rate of just a few millimeters per million years,” the marine biologist said. These nodules litter parts of the seafloor in patches, like cobblestones on a street, she said.

Some of them are millions of years old, Amon said, and comprise a key part of the deep-sea ecosystem—“a whole thriving community down there”—so colorful and diverse that it conjures images of a Dr. Seuss book.

Purple, yellow, and white sea cucumbers. Brittle stars that resemble starfish but have long flexible arms. And corals, sponges, and anemones that use the polymetallic nodules as anchors to hold still and thrive on a seabed of silt, which, when mined, will be upturned and transformed into sediment plumes.

The plumes likely will form a sort of blinding “dust cloud” that will travel vertically and horizontally in the water far from the original mining sites, Amon said. The cloud may disorient and impair the vision of marine life that depend on sight to navigate or hunt for prey—or smother others.

“You can very safely say this mining would essentially lead to irreversible damage,” she said.

Experts alarmed over Trump’s promotion of deep-sea mining in international waters Read More »