Author name: Shannon Garcia

How a grad student got LHC data to play nice with quantum interference


New approach is already having an impact on the experiment’s plans for future work.

The ATLAS particle detector of the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. Credit: EThamPhoto/Getty Images

Measurements at the Large Hadron Collider have been stymied by one of the most central phenomena of the quantum world. But now, a young researcher has championed a new method to solve the problem using deep neural networks.

The Large Hadron Collider is one of the biggest experiments in history, but it’s also one of the hardest to interpret. Unlike seeing an image of a star in a telescope, saying anything at all about the data that comes out of the LHC requires careful statistical modeling.

“If you gave me a theory [that] the Higgs boson is this way or that way, I think people imagine, ‘Hey, you built the experiment, you should be able to tell me what you’re going to see under various hypotheses!’” said Daniel Whiteson, a professor at the University of California, Irvine. “But we don’t.”

One challenge with interpreting LHC data is interference, a core implication of quantum mechanics. Interference allows two possible events to inhibit each other, weakening the likelihood of seeing the result of either. In the presence of interference, physicists needed to use a fuzzier statistical method to analyze data, losing the data’s full power and increasing its uncertainty.

However, a recent breakthrough suggests a different way to tackle the problem. The ATLAS collaboration, one of two groups studying proton collisions at the LHC, released two papers last December that describe new ways of exploring data from their detector. One describes how to use a machine learning technique called Neural Simulation-Based Inference to maximize the potential of particle physics data. The other demonstrates its effectiveness with the ultimate test: re-doing a previous analysis with the new technique and seeing dramatic improvement.

The papers are the culmination of a young researcher’s six-year quest to convince the collaboration of the value of the new technique. Its success is already having an impact on the experiment’s plans for future work.

Making sense out of fusing bosons

Each particle collision at the LHC involves many possible pathways in which different particles combine to give rise to the spray of debris that experimenters see. In 2017, David Rousseau at IJCLab in Orsay, a member of the ATLAS collaboration, asked one of his students, Aishik Ghosh, to improve his team’s ability to detect a specific pathway. That particular pathway is quite important since it’s used to measure properties of the Higgs boson, a particle (first measured in 2012) that helps explain the mass of all other fundamental particles.

It was a pretty big ask. “When a grad student gets started in ATLAS, they’re a tiny cog in a giant, well-oiled machine of 3,500 physicists, who all seem to know exactly what they’re doing,” said Ghosh.

The pathway Ghosh was asked to study occurs via several steps. First, the two colliding protons each emit a W boson, a particle associated with the weak nuclear force. These two bosons fuse together, changing their identity to form a Higgs boson. The Higgs boson then decays, forming a pair of Z bosons, another particle associated with the weak force. Finally, those Z bosons themselves each decay into a lepton, like an electron, and its antimatter partner, like a positron.

A Feynman diagram for the pathway studied by Aishik Ghosh. Credit: ATLAS

Measurements like the one Ghosh was studying are a key way of investigating the properties of the Higgs boson. By precisely measuring how long it takes the Higgs boson to decay, physicists could find evidence of it interacting with new, undiscovered particles that are too massive for the LHC to produce directly.

Ghosh started on the project, hoping to find a small improvement in the collaboration’s well-tested methods. Instead, he noticed a larger issue. The goal he was given, of detecting a single pathway by itself, didn’t actually make sense.

“I was doing that and I realized, ‘What am I doing?’ There’s no clear objective,” said Ghosh.

The problem was quantum interference.

How quantum histories interfere

One of the most famous demonstrations of the mysterious nature of quantum mechanics is called the double-slit experiment. In this demonstration, electrons are shot through a screen with two slits that allow them to pass through to a photographic plate on the other side. With one slit covered, the electrons form a pattern centered on the opening. The photographic plate lights up brightest directly across from the slit and dims farther away from it.

With both slits open, you would expect the pattern to get brighter as more electrons reach the photographic plate. Instead, something stranger happens. The two slits do not give rise to two nice bright peaks; instead, you see a rippling pattern in which some areas get brighter while others get dimmer, even though the dimmer areas should, in principle, be easier for electrons to reach.

The effect happens even if the electrons are shot at the screen one by one to stop them from influencing each other directly. It’s as if each electron carries with it two possible histories, one in which it goes through one slit and another where it goes through the other before both end up at the same place. These two histories interfere with each other so that some destinations become less likely instead of more likely.

Results of the double-slit experiment. Credit: Jordgette (CC BY-SA 3.0)

For electrons in the double-slit experiment, the two different histories are two different paths through space. For a measurement at the Large Hadron Collider, the histories are more abstract—paths that lead through transformations of fields. One history might be like the pathway Ghosh was asked to study, in which two W bosons fuse to form a Higgs boson before the Higgs boson splits into two Z bosons. But in another history, the two W bosons might fuse and immediately split into two Z bosons without ever producing a Higgs.

Both histories have the same beginning, with two W bosons, and the same end, with two Z bosons. And just as the two histories of electrons in the double-slit experiment can interfere, so can the two histories for these particles.

Another possible history for colliding particles at the Large Hadron Collider, which interferes with the measurement Ghosh was asked to do. Credit: ATLAS

That interference makes the effect of the Higgs boson much more challenging to spot. ATLAS scientists wanted to look for two pairs of electrons and positrons, which would provide evidence that two Z bosons were produced. They would classify their observations into two types: observations that are evidence for the signal they were looking for (that of a decaying Higgs boson) and observations of events that generate this pattern of particles without the Higgs boson acting as an intermediate (the latter are called the background). But the two types of observations, signal and background, interfere. With a stronger signal, corresponding to more Higgs bosons decaying, you might observe more pairs of electrons and positrons… but if these events interfere, you also might see those pairs disappear.
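For readers who want to see the bookkeeping, the textbook rule is that quantum mechanics adds the amplitudes of indistinguishable histories before squaring them, so the probability picks up a cross-term. This is a generic illustration of interference, not a formula from the ATLAS papers:

P(\text{observation}) = \left|A_{\text{signal}} + A_{\text{background}}\right|^{2} = |A_{\text{signal}}|^{2} + |A_{\text{background}}|^{2} + 2\,\operatorname{Re}\!\left(A_{\text{signal}}^{*}\,A_{\text{background}}\right)

That cross-term can be negative, which is exactly how a stronger signal amplitude can make some outcomes rarer rather than more common.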

Learning to infer

In traditional approaches, those disappearances are hard to cope with, even when using methods that already incorporate machine learning.

One of the most common uses of machine learning is classification—for example, distinguishing between pictures of dogs and cats. You train the machine on pictures of cats and pictures of dogs, and it tells you, given a picture, which animal is the most likely match. Physicists at the LHC were already using this kind of classification method to characterize the products of collisions, but it functions much worse when interference is involved.

“If you have something that disappears, you don’t quite know what to train on,” said David Rousseau. “Usually, you’re training signal versus background, exactly like you’re training cats versus dogs. When there is something that disappears, you don’t see what you trained on.”

At first, Ghosh tried a few simple tricks, but as time went on, he realized he needed to make a more fundamental change. He reached out to others in the community and learned about a method called Neural Simulation-Based Inference, or NSBI.

In older approaches, people had trained machine learning models to classify observations into signal and background, using simulations of particle collisions to make the training data. Then they used that classification to infer the most likely value of a number, like the amount of time it takes a Higgs boson to decay, based on data from an actual experiment. Neural Simulation-Based Inference skips the classification and goes directly to the inference.

Instead of trying to classify observations into signal and background, NSBI uses simulations to teach an artificial neural network to guess a formula called a likelihood ratio. Someone using NSBI would run several simulations that describe different situations, such as letting the Higgs boson decay at different rates, and then check how many simulations of each type yielded a specific observation. The fraction of simulations with a certain decay rate that produce the observation gives the likelihood ratio, a quantity that indicates which decay rate is more likely given the experimental evidence. If the neural network is good at guessing this ratio, it will be good at estimating how long the Higgs takes to decay.
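As a rough illustration of how this works in practice, here is a minimal sketch of the standard likelihood-ratio trick that underlies NSBI: a network trained to separate simulations generated under two different parameter values implicitly learns their likelihood ratio. The toy one-dimensional simulator, the parameter values, and the use of scikit-learn's MLPClassifier are all simplifying assumptions for illustration; this is not the ATLAS collaboration's actual code.

# Minimal sketch of the likelihood-ratio trick behind Neural Simulation-Based
# Inference (NSBI). The toy simulator, parameter values, and network size are
# hypothetical stand-ins for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate(theta, n):
    # Toy stand-in for a physics simulator: one observable whose
    # distribution shifts with the parameter theta.
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

theta0, theta1 = 0.0, 0.5              # two hypotheses to compare
x0 = simulate(theta0, 50_000)          # simulations under hypothesis 0
x1 = simulate(theta1, 50_000)          # simulations under hypothesis 1

X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

# Train a network to tell the two simulation sets apart...
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200).fit(X, y)

def likelihood_ratio(x):
    # ...its score s(x) recovers the likelihood ratio as s / (1 - s).
    s = np.clip(clf.predict_proba(x)[:, 1], 1e-6, 1 - 1e-6)
    return s / (1.0 - s)

x_obs = simulate(0.3, 1_000)           # pretend "observed" events
# Summing log-ratios over events gives a test statistic comparing the
# two hypotheses on the observed data.
print("log likelihood ratio:", np.log(likelihood_ratio(x_obs)).sum())

In the real analysis, the simulator is a full detector simulation, each collision has many observables, and the network's output must be carefully calibrated, but the core logic of turning a classifier score into a likelihood ratio is the same.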

Because NSBI doesn’t try to classify observations into different categories, it handles quantum interference more effectively. Instead of trying to find the Higgs based on a signal that disappears, it examines all the data, trying to guess which decay time is the most likely.

Ghosh tested the method, which showed promising results on test data, and presented the results at a conference in 2019. But if he was going to convince the ATLAS collaboration that the method was safe to use, he still had a lot of work ahead of him.

Shifting the weight on ATLAS’ shoulders

Experiments like ATLAS have high expectations attached to them. A collaboration of thousands of scientists, ATLAS needs to not only estimate the laws of physics but also have a clear idea of just how uncertain those estimates are. At the time, NSBI hadn’t been tested in that way.

“None of this has actually been used on data,” said Ghosh. “Nobody knew how to quantify the uncertainties. So you have a neural network that gives you a likelihood. You don’t know how good the likelihood is. Is it well-estimated? What if it’s wrongly estimated just in some weird corner? That would completely bias your results.”

Checking those corners was too big a job for a single PhD student and too complex to complete within a single PhD. Ghosh would have to build a team, and building a team takes time. That's tricky in the academic world, where students go on to short-term postdoc jobs with the expectation that they will quickly publish new results to improve their CVs for the next position.

“We’re usually looking to publish the next paper within two to three years—no time to overhaul our methods,” said Ghosh. Fortunately, Ghosh had support. He completed his PhD with Rousseau and went to work with Daniel Whiteson, who encouraged him to pursue his ambitious project.

“I think it’s really important that postdocs learn to take those risks because that’s what science is,” Whiteson said.

Ghosh gathered his team. Another student of Rousseau’s, Arnaud Maury, worked to calibrate the machine’s confidence in its answers. A professor at the University of Massachusetts, Rafael Coelho Lopes de Sa, joined the project. His student Jay Sandesara would have a key role in getting the calculation to work at full scale on a computer cluster. IJCLab emeritus RD Schaffer and University of Liège professor Gilles Louppe provided cross-checks and advice.

The team wanted a clear demonstration that their method worked, so they took an unusual step. They took data that ATLAS had already analyzed and performed a full analysis using their method instead, showing that it could pass every check the collaboration could think of. They would publish two papers, one describing the method and the other giving the results of their upgraded analysis. Zach Marshall, who was the computing coordinator for ATLAS at the time, helped get the papers through, ensuring that they were vetted by experts in multiple areas.

“It was a very small subset of our community that had that overlap between this technical understanding and the physics analysis experience and understanding that were capable of really speaking to whether that paper was sufficient and intelligible and useful. So we really had to make sure that we engaged that little group of humans by name,” said Marshall.

The new method delivered significant improvements, producing a much more precise result than the collaboration’s previous analysis. That improvement, and the thorough checks, persuaded ATLAS to use NSBI more broadly going forward. It will give the collaboration far more precision than it expected as it uses the Higgs boson to search for new particles and sharpen our understanding of the quantum world. When ATLAS discusses its future plans, it projects the precision it expects to reach years from now. Those projections are now being upended.

“One of the fun things about this method that Aishik pushed hard is each time it feels like now we do that projection—here’s how well we’ll do in 15 years—we absolutely crush those projections,” said Marshall. “So we are just now having to redo a set of projections because we matched our old projections for 15 years out already today. It’s a very fun problem to have.”

Psyche keeps its date with an asteroid, but now it’s running in backup mode

The spacecraft, built by Maxar Space Systems, will operate its electric thrusters for the equivalent of three months between now and November to keep the mission on track for arrival at asteroid Psyche in 2029.

“Through comprehensive testing and analysis, the team narrowed down the potential causes to a valve that may have malfunctioned in the primary line,” NASA said in a statement Friday. “The switch to the identical backup propellant line in late May restored full functionality to the propulsion system.”

The next waypoint on Psyche’s voyage will be a flyby of Mars in May 2026. Officials expect Psyche to keep that date, which is critical for using Mars’ gravity to slingshot the spacecraft deeper into the Solar System, eventually reaching the asteroid belt about four years from now.

NASA’s Psyche spacecraft takes a spiral path to the asteroid Psyche, as depicted in this graphic that shows the path from above the plane of the planets, labeled with key milestones of the prime mission. Credit: NASA/JPL-Caltech

At Psyche, the spacecraft will enter orbit and progressively move closer to the asteroid, using a suite of sensors to map its surface, measure its shape, mass, and gravity field, and determine its elemental composition. Observations through telescopes suggest Psyche is roughly 140 miles (226 kilometers) in diameter, or about the width of Massachusetts. But it’s likely not spherical in shape. Scientists describe its shape as more akin to a potato.

Potatoes come in lots of shapes, and researchers won’t know exactly what Psyche looks like until NASA’s asteroid explorer arrives in 2029. Psyche will be the first metallic, or M-type, asteroid visited by any spacecraft, and scientists are eager to study an object that’s largely made of metals—probably iron, nickel, and perhaps some rarer elements instead of rocky minerals.

With the Psyche spacecraft’s plasma thrusters back in action, these goals of NASA’s billion-dollar science mission remain achievable.

“The mission team’s dedication and systematic approach to this investigation exemplifies the best of NASA engineering,” said Bob Mase, Psyche project manager at JPL, in a statement. “Their thorough diagnosis and recovery, using the backup system, demonstrates the value of robust spacecraft design and exceptional teamwork.”

But there’s still a lingering concern that whatever problem caused the valve to malfunction in the primary fuel line might also eventually affect the same kind of valve in the backup line.

“We are doing a lot of good proactive work around that possible issue,” wrote Lindy Elkins-Tanton, Psyche’s principal investigator at Arizona State University, in a post on X.

SpaceX’s next Starship just blew up on its test stand in South Texas


SpaceX had high hopes for Starship in 2025, but it’s been one setback after another.

A fireball erupts around SpaceX’s Starship rocket in South Texas late Wednesday night. Credit: LabPadre

SpaceX’s next Starship rocket exploded during a ground test in South Texas late Wednesday, dealing another blow to a program already struggling to overcome three consecutive failures in recent months.

The late-night explosion at SpaceX’s rocket development complex in Starbase, Texas, destroyed the bullet-shaped upper stage that was slated to launch on the next Starship test flight. The powerful blast set off fires around SpaceX’s Massey’s Test Site, located a few miles from the company’s Starship factory and launch pads.

Live streaming video from NASASpaceflight.com and LabPadre, media organizations with cameras positioned around Starbase, showed the 15-story-tall rocket burst into flames shortly after 11:00 pm local time (12:00 am EDT; 04:00 UTC). Local residents as far as 30 miles away reported seeing and feeling the blast.

SpaceX confirmed the Starship, numbered Ship 36 in the company’s inventory, “experienced a major anomaly” on a test stand as the vehicle prepared to ignite its six Raptor engines for a static fire test. These hold-down test-firings are typically one of the final milestones in a Starship launch campaign before SpaceX moves the rocket to the launch pad.

The explosion occurred as SpaceX finished up loading super-cold methane and liquid oxygen propellants into Starship in preparation for the static fire test. The company said the area around the test site was evacuated of all personnel, and everyone was safe and accounted for after the incident. Firefighters from the Brownsville Fire Department were dispatched to the scene.

“Our Starbase team is actively working to safe the test site and the immediate surrounding area in conjunction with local officials,” SpaceX posted on X. “There are no hazards to residents in surrounding communities, and we ask that individuals do not attempt to approach the area while safing operations continue.”

Picking up the pieces

Earlier Wednesday, just hours before the late-night explosion at Starbase, an advisory released by the Federal Aviation Administration showed SpaceX had set June 29 as a tentative launch date for the next Starship test flight. That won’t happen now, and it’s anyone’s guess when SpaceX will have another Starship ready to fly.

Massey’s Test Site, named for a gun range that once occupied the property, is situated on a bend in the Rio Grande River, just a few hundred feet from the Mexican border. The test site is currently the only place where SpaceX can put Starships through proof testing and static fire tests before declaring the rockets are ready to fly.

The extent of the damage to ground equipment at Massey’s was not immediately clear, so it’s too soon to say how long the test site will be out of commission. For now, though, the explosion leaves SpaceX without a facility to support preflight testing on Starships.

The videos embedded below come from NASASpaceflight.com and LabPadre, showing multiple angles of the Starship blast.

The explosion at Massey’s is a reminder of SpaceX’s rocky path to get Starship to this point in its development. In 2020 and 2021, SpaceX lost several Starship prototypes to problems during ground and flight testing. The visual of Ship 36 going up in flames harkens back to those previous explosions, along with the fiery demise of a Falcon 9 rocket on its launch pad in 2016 under circumstances similar to Wednesday night’s incident.

SpaceX has now launched nine full-scale Starship rockets since April 2023, and before the explosion, the company hoped to launch the 10th test flight later this month. Starship’s track record has been dreadful so far this year, with the rocket’s three most recent test flights ending prematurely. These setbacks followed a triumphant 2024, when SpaceX made clear progress on each successive Starship suborbital test flight, culminating in the first catch of the rocket’s massive Super Heavy booster with giant robotic arms on the launch pad tower.

Stacked together, the Super Heavy booster stage and Starship upper stage stand more than 400 feet tall, creating the largest rocket ever built. SpaceX has already flown a reused Super Heavy booster, and the company has designed Starship itself to be recoverable and reusable, too.

After last year’s accomplishments, SpaceX appeared to be on track for a full orbital flight, an attempt to catch and recover Starship itself, and an important in-space refueling demonstration in 2025. The refueling demo has officially slipped into 2026, and it’s questionable whether SpaceX will make enough progress in the coming months to attempt recovery of a ship before the end of this year.

A Super Heavy booster and Starship upper stage are seen in March at SpaceX’s launch pad in South Texas, before the ship was stacked atop the booster for flight. The Super Heavy booster for the next Starship flight completed its static fire test earlier this month. Credit: Brandon Bell/Getty Images

Ambition meets reality

SpaceX debuted an upgraded Starship design, called Version 2 or Block 2, on a test flight in January. It’s been one setback after another since then.

The new Starship design is slightly taller than the version of Starship that SpaceX flew in 2023 and 2024. It has an improved heat shield to better withstand the extreme heat of atmospheric reentry. SpaceX also installed a new fuel feed line system to route methane fuel to the ship’s Raptor engines, and an improved propulsion avionics module controlling the vehicle’s valves and reading sensors.

Despite (or perhaps because of) all of these changes for Starship Version 2, SpaceX has been unable to replicate the successes it achieved with Starship in the last two years. Ships launched on test flights in January and March spun out of control minutes after liftoff, scattering debris over the sea and, in at least one case, onto a car in the Turks and Caicos Islands.

SpaceX engineers concluded the January failure was likely caused by intense vibrations that triggered fuel leaks and fires in the ship’s engine compartment, causing an early shutdown of the rocket’s engines. Engineers said the vibrations were likely in resonance with the vehicle’s natural frequency, intensifying the shaking beyond the levels SpaceX predicted.

The March flight failed in similar fashion, but SpaceX’s investigators determined the most probable root cause was a hardware failure in one of the ship’s engines, a different failure mode than two months before.

During SpaceX’s most recent Starship test flight last month, the rocket completed the ascent phase of the mission as planned, seemingly overcoming the problems that plagued the prior two launches. But soon after the Raptor engines shut down, a fuel leak caused the ship to begin tumbling in space, preventing the vehicle from completing a guided reentry to test the performance of new heat shield materials.

File photo of a Starship static fire in May at Massey’s Test Site.

SpaceX is working on a third-generation Starship design, called Version 3, that the company says could be ready to fly by the end of this year. The upgraded Starship Version 3 design will be able to lift heavier cargo (up to 200 metric tons) into orbit thanks to larger propellant tanks and more powerful Raptor engines. Version 3 will also have the ability to refuel in low-Earth orbit.

Version 3 will presumably have permanent fixes to the problems currently slowing SpaceX’s pace of Starship development. And there are myriad issues for SpaceX’s engineers to solve, from engine reliability and the ship’s resonant frequency, to beefing up the ship’s heat shield and fixing its balky payload bay door.

Once officials solve these problems, it will be time for SpaceX to bring a Starship from low-Earth orbit back to the ground. Then, there’s more cool stuff on the books, like orbital refueling and missions to the Moon in partnership with NASA’s Artemis program. NASA has contracts worth more than $4 billion with SpaceX to develop a human-rated Starship that can land astronauts on the Moon and launch them safely back into space.

The Trump administration’s proposed budget for NASA would cancel the Artemis program’s ultra-expensive Space Launch System rocket and Orion crew capsule after two more flights, leaving commercial heavy-lifters to take over launching astronauts from the Earth to the Moon. SpaceX’s Starship, already on contract with NASA as a human-rated lander, may eventually win more government contracts to fill the role of SLS and Orion under Trump’s proposed budget. Other rockets, such as Blue Origin’s New Glenn, are also well-positioned to play a larger role in human space exploration.

NASA’s official schedule for the first Artemis crew landing on the Moon puts the mission some time in 2027, using SLS and Orion to transport astronauts out to the vicinity of the Moon to meet up with SpaceX’s Starship lunar lander. After that mission, known as Artemis III, NASA would pivot to using commercial rockets from Elon Musk’s SpaceX and Jeff Bezos’ Blue Origin to replace the Space Launch System.

Meanwhile, SpaceX’s founder and CEO has his sights set on Mars. Last month, Musk told his employees he wants to launch the first Starships toward the Red Planet in late 2026, when the positions of Earth and Mars in the Solar System make a direct journey possible. Optimistically, he would like to send people to Mars on Starships beginning in 2028.

All of these missions are predicated on SpaceX mastering routine Starship launch operations, rapid reuse of the ship and booster, and cryogenic refueling in orbit, along with adapting systems such as life support, communications, and deep space navigation for an interplanetary journey.

The to-do list is long for SpaceX’s Starship program—too long for Mars landings to seem realistic any time in the next few years. NASA’s schedule for the Artemis III lunar landing mission in 2027 is also tight, and not only because of Starship’s delays. The development of new spacesuits for astronauts to wear on the Moon may also put the Artemis III schedule at risk. NASA’s SLS rocket and Orion spacecraft have had significant delays throughout their history, so it’s not a sure thing they will be ready in 2027.

While it’s too soon to know the precise impact of Wednesday night’s explosion, we can say with some confidence that the chances of Starship meeting these audacious schedules are lower today than they were yesterday.

Senate passes GENIUS Act—criticized as gifting Trump ample opportunity to grift

“Why—beyond the obvious benefit of gaining favor, directly or indirectly, with the Trump administration—did you select USD1, a newly launched, untested cryptocurrency with no track record?” the senators asked.

Responding, World Liberty Financial’s lawyers claimed MGX was simply investing in “legitimate financial innovation,” CBS News reported, noting a Trump family-affiliated entity owns a 60 percent stake in the company.

Trump has denied any wrongdoing in the MGX deal, ABC News reported. However, Warren fears the GENIUS Act will provide “even more opportunities to reward buyers of Trump’s coins with favors like tariff exemptions, pardons, and government appointments” if it becomes law.

Although House supporters of the bill have reportedly promised to push the bill through, so Trump can sign it into law by July, the GENIUS Act is likely to face hurdles. And resistance may come from not just Democrats with ongoing concerns about Trump’s and future presidents’ potential conflicts of interest—but also from Republicans who think passing the bill is pointless without additional market regulations to drive more stablecoin adoption.

Dems: Opportunities for Trump grifts are “mind-boggling”

Although 18 Democrats helped the GENIUS Act pass in the Senate, most Democrats opposed the bill over concerns about Trump’s potential conflicts of interest, PBS News reported.

Merkley remains one of the staunchest opponents of the GENIUS Act. In a statement, he alleged that the Senate passing the bill was essentially “rubberstamping Trump’s crypto corruption.”

According to Merkley, he and other Democrats pushed to remove the exemption from the GENIUS Act before the Senate vote—hoping to add “strong anti-corruption measures.” But Senate Republicans “repeatedly blocked” his efforts to hold votes on anti-corruption measures. Instead, they “rammed through this fatally flawed legislation without considering any amendments on the Senate floor—despite promises of an open amendment process and debate before the American people,” Merkley said.

Ultimately, it passed with the exemption intact, which Merkley considered “profoundly corrupt,” promising, “I will keep fighting to ban Trump-style crypto corruption to prevent the sale of government policy by elected federal officials in Congress and the White House.”

OpenAI weighs “nuclear option” of antitrust complaint against Microsoft

OpenAI executives have discussed filing an antitrust complaint with US regulators against Microsoft, the company’s largest investor, The Wall Street Journal reported Monday, marking a dramatic escalation in tensions between the two long-term AI partners. OpenAI, which develops ChatGPT, has reportedly considered seeking a federal regulatory review of the terms of its contract with Microsoft for potential antitrust law violations, according to people familiar with the matter.

The potential antitrust complaint would likely argue that Microsoft is using its dominant position in cloud services and contractual leverage to suppress competition, according to insiders who described it as a “nuclear option,” the WSJ reports.

The move could unravel one of the most important business partnerships in the AI industry—a relationship that started with a $1 billion investment by Microsoft in 2019 and has grown to include billions more in funding, along with Microsoft’s exclusive rights to host OpenAI models on its Azure cloud platform.

The friction centers on OpenAI’s efforts to transition from its current nonprofit structure into a public benefit corporation, a conversion that needs Microsoft’s approval to complete. The two companies have not been able to agree on details after months of negotiations, sources told Reuters. OpenAI’s existing for-profit arm would become a Delaware-based public benefit corporation under the proposed restructuring.

The companies are discussing revising the terms of Microsoft’s investment, including the future equity stake it will hold in OpenAI. According to The Information, OpenAI wants Microsoft to hold a 33 percent stake in a restructured unit in exchange for foregoing rights to future profits. The AI company also wants to modify existing clauses that give Microsoft exclusive rights to host OpenAI models in its cloud.

Paramount drops trailer for The Naked Gun reboot

Liam Neeson stars as Lt. Frank Drebin Jr. in The Naked Gun.

Thirty years after the last film in The Naked Gun crime-spoof comedy franchise, we’re finally getting a new installment, The Naked Gun, described as a “legacy sequel.” And it’s Liam Neeson stepping into Leslie Nielsen’s fumbling shoes, playing that character’s son. Judging by the official trailer, Neeson is up to the task, showcasing his screwball comedy chops.

(Some spoilers for the first three films in the franchise below.)

The original Naked Gun: From the Files of Police Squad! debuted in 1988, with Leslie Nielsen starring as Detective Frank Drebin, trying to foil an assassination attempt on Queen Elizabeth II during her visit to the US. It proved successful enough to launch two sequels. Naked Gun 2-1/2: The Smell of Fear (1991) found Drebin battling an evil plan to kidnap a prominent nuclear scientist. Naked Gun 33-1/3: The Final Insult (1994) found Drebin coming out of retirement and going undercover to take down a crime syndicate planning to blow up the Academy Awards.

The franchise rather lost steam after that, but by 2013, Paramount was planning a reboot starring Ed Helms as “Frank Drebin, no relation.” David Zucker, who produced the prior Naked Gun films and directed the first two, declined to be involved, feeling it could only be “inferior” to his originals. He was briefly involved in the 2017 rewrites, which featured Frank’s son as a secret agent rather than a policeman. That film never materialized either. The project was revived again in 2021 by Seth MacFarlane (without Zucker’s involvement), and Neeson was cast as Frank Drebin Jr.—a police lieutenant in this incarnation.

In addition to Neeson, the film stars Paul Walter Hauser as Captain Ed Hocken, Jr.—Hauser will also appear as Mole Man in the forthcoming Fantastic Four: First Steps—and Pamela Anderson as a sultry femme fatale named Beth. The cast also includes Kevin Durand, Danny Huston, Liza Koshy, Cody Rhodes, CCH Pounder, Busta Rhymes, and Eddy Yu.

Founder of 23andMe buys back company out of bankruptcy auction

TTAM’s winning offer requires judicial approval, and a court hearing to approve the bid is set for next week.

Several US states have filed objections or lawsuits with the court expressing concerns about the transfer of customers’ genetic data to a new company, though those may now be moot because of Wojcicki’s continued involvement.

An expert hired by the court to review data privacy concerns over a sale of 23andMe submitted a report on Wednesday that noted Wojcicki had been chief executive when a 2023 data breach compromised 7 million customer accounts. Litigation over the breach continues, although that liability remains with the bankruptcy estate to be paid off with the proceeds from the winning bid.

Wojcicki was once married to Google co-founder Sergey Brin. 23andMe went public in 2021 through a merger with a blank cheque vehicle sponsored by Richard Branson, quickly reaching a market cap of nearly $6 billion.

The company has been plagued by years of falling revenue as it was unable to grow beyond its genetic testing business, in which customers sent saliva samples in to be analyzed for medical conditions and family genealogy.

Wojcicki had bid 40 cents a share to acquire the company prior to the bankruptcy filing.

Shares of 23andMe, which now trade over the counter, have rocketed to $5.49 on the belief the company will stage a recovery after settling the litigation.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Trump’s FTC may impose merger condition that forbids advertising boycotts

FTC chair alleged “serious risk” from ad boycotts

After Musk’s purchase of Twitter, the social network lost advertisers for various reasons, including changes to content moderation and an incident in which Musk posted a favorable response to an antisemitic tweet and then told concerned advertisers to “go fuck yourself.”

FTC Chairman Andrew Ferguson said at a conference in April that “the risk of an advertiser boycott is a pretty serious risk to the free exchange of ideas.”

“If advertisers get into a back room and agree, ‘We aren’t going to put our stuff next to this guy or woman or his or her ideas,’ that is a form of concerted refusal to deal,” Ferguson said. “The antitrust laws condemn concerted refusals to deal. Now, of course, because of the First Amendment, we don’t have a categorical antitrust prohibition on boycotts. When a boycott ceases to be economic for purposes of the antitrust laws and becomes purely First Amendment activity, the courts have not been super clear—[it’s] sort of a ‘we know it when we see it’ type of thing.”

The FTC website says that any individual company acting on its own may “refuse to do business with another firm, but an agreement among competitors not to do business with targeted individuals or businesses may be an illegal boycott, especially if the group of competitors working together has market power.” The examples given on the FTC webpage are mostly about price competition and do not address the widespread practice of companies choosing where to place advertising based on concerns about their brands.

We contacted the FTC about the merger review today and will update this article if it provides any comment.

X’s ad lawsuit

X’s lawsuit targets a World Federation of Advertisers initiative called the Global Alliance for Responsible Media (GARM), a now-defunct program that Omnicom and Interpublic participated in. X itself was part of the GARM initiative, which shut down after X filed the lawsuit. X alleged that the defendants conspired “to collectively withhold billions of dollars in advertising revenue.”

The World Federation of Advertisers said in a court filing last month that GARM was founded “to bring clarity and transparency to disparate definitions and understandings in advertising and brand safety in the context of social media. For example, certain advertisers did not want platforms to advertise their brands alongside content that could negatively impact their brands.”

There’s another leak on the ISS, but NASA is not saying much about it

No one is certain. The best guess is that the seals on the hatch leading to the PrK module are, in some way, leaking. In this scenario, pressure from the station is feeding the leak inside the PrK module through these seals, leading to a stable pressure inside—making it appear as though the PrK module leaks are fully repaired.

At this point, NASA is monitoring the ongoing leak and preparing for any possibility. A senior industry source told Ars that the NASA leadership of the space station program is “worried” about the leak and its implications.

This is one reason the space agency delayed the launch of a commercial mission carrying four astronauts to the space station, Axiom-4, on Thursday.

“The postponement of Axiom Mission 4 provides additional time for NASA and Roscosmos to evaluate the situation and determine whether any additional troubleshooting is necessary,” NASA said in a statement. “A new launch date for the fourth private astronaut mission will be provided once available.”

One source indicated that the new tentative launch date is now June 18. However, this will depend on whatever resolution there is to the leak issue.

What’s the worst that could happen?

The worst-case scenario for the space station is that the ongoing leaks are a harbinger of a phenomenon known as “high cycle fatigue,” which affects metal, including aluminum. Consider that if you bend a metal clothes hanger once, it bends. But if you bend it back and forth multiple times, it will snap. This is because, as the metal fatigues, it hardens and eventually snaps. This happens suddenly and without warning, as was the case with an Aloha Airlines flight in 1988.

The concern is that some of these metal structures on board the station could fail quickly and catastrophically. Accordingly, in its previous assessments, NASA has classified the structural cracking issue on the space station as the highest level of concern on its 5×5 risk matrix, which it uses to gauge the likelihood and severity of risks to the space station.

In the meantime, the space agency has not been forthcoming with any additional information. Despite many questions from Ars Technica and other publications, NASA has not scheduled a press conference or said anything else publicly about the leaks beyond stating, “The crew aboard the International Space Station is safely conducting normal operations.”

Inside the firm turning eerie blank streaming ads into useful nonprofit messages

AdGood’s offerings also include a managed ad campaign service for nonprofits. AdGood doesn’t yet offer pixels, but Johns said developments like that are “in the works.”

Johns explained that while many nonprofits use services like Meta and Google AdWords for tracking ads, they’re “hitting plateaus” with their typical methods. He said there is nonprofit interest in reaching younger audiences, who often use CTV devices:

A lot of them have been looking for ways to get [into CTV ads], but, unfortunately, with minimum spend amounts, they’re just not able to access it.

Helping nonprofits make commercials

AdGood also sells a self-serve generative AI ad manager, which it offers via a partnership with Streamr.AI. The tool is designed to simplify the process of creating 30-second video ads that are “completely editable via a chat prompt,” according to Johns.

“It automatically generates all their targeting. They can update their targeting for whatever they want, and then they can swipe a credit card and essentially run that campaign. It goes into our approval queue, which typically takes 24 hours for us to approve because it needs to be deemed TV-quality,” he explained.

The executive said AdGood charges nonprofits a $7 CPM and a $250 flat fee for the service. He added:

Think about a small nonprofit in a local community, for instance, my son’s special needs baseball team. I can get together with five other parents, easily pull together a campaign, and run it in our local town. We get seven kids to show up, and it changes their lives. We’re talking about $250 having a massive impact in a local market.

Looking ahead, Johns said he’d like to see AdGood’s platform and team grow to be able to give every customer “a certain allocation of inventory, whether it’s 50,000 impressions a month or 100,000 a month.”

For some, streaming ads are rarely a good thing. But when those ads can help important causes and replace odd blank ad spaces that make us question our own existence, it brings new meaning to the idea of a “good” commercial.

Why is digital sovereignty important right now?

Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.

Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.

The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.

But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.

Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.

Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America, for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.

As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.

What does the digital sovereignty landscape look like today?

Much has changed since this time last year. Unknowns remain, but much of what was unclear a year ago is now starting to solidify. Terminology is clearer: we now talk about classification and localization, for example, rather than generic concepts.

We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.

We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability are at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US Cloud Act: essentially, can foreign governments see my data?

This year, however, availability is rising in prominence due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but it is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.

Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.

How are cloud providers responding?

Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data, if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.

We see hyperscaler progress where they offer technology to be locally managed by a third party rather than by themselves: for example, Google’s partnership with Thales, or Microsoft’s with Orange, both in France (Microsoft has a similar arrangement in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve the problem of US overreach, which remains a core issue.

Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE, for example, offer solutions that can be deployed and managed locally, while Broadcom/VMware and Red Hat provide technologies that locally situated private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.

What can enterprise organizations do about it?

First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.

If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front which data, applications, and processes need to be treated as sovereign, and to define an architecture that supports them.

This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.

It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.

Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.

Organizations shouldn’t assume that everything cloud-based needs to be sovereign; they should build strategies and policies based on data classification, prioritization, and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space and avoids turning sovereignty into yet another problem that solves nothing.

Where to start? Look after your own organization first

Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.

Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.

Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritize your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.

Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.

Meta beefs up disappointing AI division with $15 billion Scale AI investment

Meta has invested heavily in generative AI, with the majority of its planned $72 billion in capital expenditure this year earmarked for data centers and servers. The deal underlines the high price AI companies are willing to pay for data that can be used to train AI models.

Zuckerberg pledged last year that his company’s models would outstrip rivals’ efforts in 2025, but Meta’s most recent release, Llama 4, has underperformed on various independent reasoning and coding benchmarks.

The long-term goal of researchers at Meta “has always been to reach human intelligence and go beyond it,” said Yann LeCun, the company’s chief AI scientist, at the VivaTech conference in Paris this week.

Building artificial “general” intelligence—AI technologies that have human-level intelligence—is a popular goal for many AI companies. An increasing number of Silicon Valley groups are also seeking to reach “superintelligence,” a hypothetical scenario where AI systems surpass human intelligence.

The core of Scale’s business has been data-labeling, a manual process of ensuring images and text are accurately labeled and categorized before they are used to train AI models.

Wang has forged relationships with Silicon Valley’s biggest investors and technologists, including OpenAI’s Sam Altman. Scale AI’s early customers were autonomous vehicle companies, but the bulk of its expected $2 billion in revenues this year will come from labeling the data used to train the massive AI models built by OpenAI and others.

The deal will result in a substantial payday for Scale’s early venture capital investors, including Accel, Tiger Global Management, and Index Ventures. Tiger’s $200 million investment is worth more than $1 billion at the company’s new valuation, according to a person with knowledge of the matter.

Additional reporting by Tabby Kinder in San Francisco

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
