Author name: DJ Henderson


Ford decides to run its Le Mans program in-house, racing in 2027

Formula 1 might be riding high these days on a wave of interest not seen since the 1960s, but the Drive to Survive effect has been felt elsewhere in the world of motorsport. Endurance races like the 24 Hours of Le Mans or the Rolex 24 at Daytona have seen record crowds over the last few years, and a large part of that is down to the sports prototype class, exemplified by cars from the likes of Ferrari and Porsche. And soon, we can add Ford to the list.

Currently, eight different manufacturers are competing against each other in the World Endurance Championship’s Hypercar class: Alpine, Aston Martin, BMW, Cadillac, Ferrari, Peugeot, Porsche, and Toyota. More are on the way—Genesis arrives next year, and earlier this year, Ford announced that it, too, would join the fray in 2027. Today, the Blue Oval revealed more details about the project.

What’s a hypercar?

Compared to the road car-derived machines that race in the GT3 category, the vehicles that contest for overall victory in the Hypercar class are purpose-built prototypes, just for racing. Because endurance racing often has to be needlessly complicated for the sake of being complicated, the Hypercar class is actually made up of a mix of vehicles designed to two different technical rule sets that are performance balanced to create a level playing field.

You can find a lengthy explanation of the differences between the two sets of technical regulations (called LMH and LMDh) in our previous coverage, but briefly, LMH cars are designed entirely by a manufacturer and can but don’t have to be hybrids—like the V12-only Aston Martin Valkyrie.

LMDh cars, by contrast, must use a carbon-fiber spine from one of four approved race car makers, and all must use the same spec transmission, hybrid battery, and electric motor, with the car companies designing their own bodywork and internal combustion engine. LMH has more technical freedom—you can mount that electric motor to the front axle, for example—but it’s also a more expensive way to go.



The DHS has been quietly harvesting DNA from Americans for years


The DNA of nearly 2,000 US citizens has been entered into an FBI crime database.

For years, Customs and Border Protection agents have been quietly harvesting DNA from American citizens, including minors, and funneling the samples into an FBI crime database, government data shows. This expansion of genetic surveillance was never authorized by Congress for citizens, children, or civil detainees.

According to newly released government data analyzed by Georgetown Law’s Center on Privacy & Technology, the Department of Homeland Security, which oversees CBP, collected the DNA of nearly 2,000 US citizens between 2020 and 2024 and had it sent to CODIS, the FBI’s nationwide DNA database used in criminal investigations. An estimated 95 were minors, some as young as 14. The entries also include travelers never charged with a crime and dozens of cases where agents left the “charges” field blank. In other files, officers invoked civil penalties as justification for swabs that federal law reserves for criminal arrests.

The findings appear to point to a program running outside the bounds of statute or oversight, experts say, with CBP officers exercising broad discretion to capture genetic material from Americans and have it funneled into a law-enforcement database designed in part for convicted offenders. Critics warn that anyone added to the database could endure heightened scrutiny by US law enforcement for life.

“Those spreadsheets tell a chilling story,” Stevie Glaberson, director of research and advocacy at Georgetown’s Center on Privacy & Technology, tells WIRED. “They show DNA taken from people as young as 4 and as old as 93—and, as our new analysis found, they also show CBP flagrantly violating the law by taking DNA from citizens without justification.”

DHS did not respond to a request for comment.

For more than two decades, the FBI’s Combined DNA Index System, or CODIS, has been billed as a tool for violent crime investigations. But under both recent policy changes and the Trump administration’s immigration agenda, the system has become a catchall repository for genetic material collected far outside the criminal justice system.

One of the sharpest revelations came from DHS data released earlier this year showing that CBP and Immigration and Customs Enforcement have been systematically funneling cheek swabs from immigrants—and, in many cases, US citizens—into CODIS. What was once a program aimed at convicted offenders now sweeps in children at the border, families questioned at airports, and people held on civil—not criminal—grounds. WIRED previously reported that DNA from minors as young as 4 had ended up in the FBI’s database, alongside elderly people in their 90s, with little indication of how or why the samples were taken.

The scale is staggering. According to Georgetown researchers, DHS has contributed roughly 2.6 million profiles to CODIS since 2020—far above earlier projections and a surge that has reshaped the database. By December 2024, CODIS’s “detainee” index contained over 2.3 million profiles; by April 2025, the figure had already climbed to more than 2.6 million. Nearly all of these samples—97 percent—were collected under civil, not criminal, authority. At the current pace, according to Georgetown Law’s estimates, which are based on DHS projections, Homeland Security files alone could account for one-third of CODIS by 2034.

The expansion has been driven by specific legal and bureaucratic levers. Foremost was an April 2020 Justice Department rule that revoked a long-standing waiver allowing DHS to skip DNA collection from immigration detainees, effectively green-lighting mass sampling. Later that summer, the FBI signed off on rules that let police booking stations run arrestee cheek swabs through Rapid DNA machines—automated devices that can spit out CODIS-ready profiles in under two hours.

The strain of the changes became apparent in subsequent years. Former FBI director Christopher Wray warned during Senate testimony in 2023 that the flood of DNA samples from DHS threatened to overwhelm the bureau’s systems. The 2020 rule change, he said, had pushed the FBI from a historic average of a few thousand monthly submissions to 92,000 per month—over 10 times its traditional intake. The surge, he cautioned, had created a backlog of roughly 650,000 unprocessed kits, raising the risk that people detained by DHS could be released before DNA checks produced investigative leads.

Under Trump’s renewed executive order on border enforcement, signed in January 2025, DHS agencies were instructed to deploy “any available technologies” to verify family ties and identity, a directive that explicitly covers genetic testing. This month, federal officials announced they were soliciting new bids to install Rapid DNA at local booking facilities around the country, with combined awards of up to $3 million available.

“The Department of Homeland Security has been piloting a secret DNA collection program of American citizens since 2020. Now, the training wheels have come off,” said Anthony Enriquez, vice president of advocacy at Robert F. Kennedy Human Rights. “In 2025, Congress handed DHS a $178 billion check, making it the nation’s costliest law enforcement agency, even as the president gutted its civil rights watchdogs and the Supreme Court repeatedly signed off on unconstitutional tactics.”

Oversight bodies and lawmakers have raised alarms about the program. As early as 2021, the DHS inspector general found the department lacked central oversight of DNA collection and that years of noncompliance could undermine public safety—echoing an earlier rebuke from the Office of Special Counsel, which called CBP’s failures an “unacceptable dereliction.”

US Senator Ron Wyden (D-Ore.) more recently pressed DHS and DOJ to explain why children’s DNA is being captured and whether CODIS has any mechanism to reject improperly obtained samples. The program, he said, was never intended to collect and permanently retain the DNA of all noncitizens, and he warned that the children are likely to be “treated by law enforcement as suspects for every investigation of every future crime, indefinitely.”

Rights advocates allege that CBP’s DNA collection program has morphed into a sweeping genetic surveillance regime, with samples from migrants and even US citizens fed into criminal databases absent transparency, legal safeguards, or limits on retention. Georgetown’s privacy center points out that once DHS creates and uploads a CODIS profile, the government retains the physical DNA sample indefinitely, with no procedure to revisit or remove profiles when the legality of the detention is in doubt.

In parallel, Georgetown and allied groups have sued DHS over its refusal to fully release records about the program, highlighting how little the public knows about how DNA is being used, stored, or shared once it enters CODIS.

Taken together, these revelations may suggest a quiet repurposing of CODIS. A system long described as a forensic breakthrough is being remade into a surveillance archive—sweeping up immigrants, travelers, and US citizens alike, with few checks on the agents deciding whose DNA ends up in the federal government’s most intimate database.

“There’s much we still don’t know about DHS’s DNA collection activities,” Georgetown’s Glaberson says. “We’ve had to sue the agencies just to get them to do their statutory duty, and even then they’ve flouted court orders. The public has a right to know what its government is up to, and we’ll keep fighting to bring this program into the light.”

This story originally appeared on wired.com.




Baby Steps is the most gloriously frustrating game I’ve ever struggled through


A real “walking simulator”

QWOP meets Death Stranding meets Getting Over It to form a wonderfully surreal, unique game.

Watch out for that first step, it’s a doozy! Credit: Devolver Digital

There’s an old saying that life is not about how many times you fall down but how many times you get back up. In my roughly 13 hours of walking through the surreal mountain wilderness of Baby Steps, I’d conservatively estimate I easily fell down 1,000 times.

If so, I got up 1,001 times, which is the entire point.

When I say “fell down” here, I’m not being metaphorical. In Baby Steps, the only real antagonist is terrain that threatens to send your pudgy, middle-aged, long-underwear-clad avatar tumbling to the ground (or down a cliff) like a rag doll after the slightest misstep. You pilot this avatar using an intentionally touchy and cumbersome control system where each individual leg is tied to a shoulder trigger on your controller.

Unlike the majority of 3D games, where you simply tilt the control stick and watch your character dutifully run, each step here means manually lifting one foot, leaning carefully in the direction you want to go, and then putting that foot down in a spot that maintains your overall balance. It’s like a slightly more forgiving version of the popular ’00s Flash game QWOP (which was also made by Baby Steps co-developer Bennett Foddy), except instead of sprinting on a 2D track, you take your time carefully planning each footfall on a methodical 3D hike.

Keep wiggling that foot until you find a safe place to put it. Credit: Devolver Digital

At first, you’ll stumble like a drunken toddler, mashing the shoulder buttons and tilting the control stick wildly just to inch forward. After a bit of trial and error, though, you’ll work yourself into a gentle rhythm—press the trigger, tilt the controller, let go while recentering the controller, press the other trigger, repeat thousands of times. You never quite break into a run, but you can fall into a zen pattern of marching methodically forward, like a Death Stranding courier who has to actually focus on each and every step.

As you make your halting progress up the mountain, you’ll infrequently stumble on other hikers who seem to lord their comfort and facility with the terrain over you in manic, surreal, and often hilarious cut scenes. I don’t want to even lightly spoil any of the truly gonzo moments in this extremely self-aware meta-narrative, but I will say that I found your character’s grand arc through the game to be surprisingly touching, often in some extremely subtle ways.

Does this game hate me?

Just as you feel like you’re finally getting the hang of basic hiking, Baby Steps starts ramping up the terrain difficulty in a way that can feel downright trolly at times. Gentle paths of packed dirt and rock start to be replaced with narrow planks and rickety wooden bridges spanning terrifying gaps. Gentle undulating hills are replaced with sheer cliff faces that you sidle up and across with the tiniest of toe holds to precariously balance on. Firm surfaces are slowly replaced with slippery mud, sand, snow, and ice that force you to alter your rhythm and tread extremely lightly just to make incremental progress.

Grabbing that fetching hat means risking an extremely punishing fall.

And any hard-earned progress can feel incredibly fragile in Baby Steps. Literally one false step can send you sliding down a hill or tumbling down a cliff face in a way that sets you back anywhere from mere minutes to sizable chunks of an hour. There’s no “reset from checkpoint” menu option or save scumming that can limit the damage, either. When you fall in Baby Steps, it can be a very long way down.

This extremely punishing structure won’t be a surprise to anyone who has played Getting Over It With Bennett Foddy, where a single mistake can send you all the way back to the beginning of the game. Baby Steps doesn’t go quite that hard, giving players occasional major checkpoints and large, flat plains that prevent you from falling back too far. Still, this is a game that is more than happy to force you to pay for even small mistakes with huge portions of your only truly irreplaceable resource: time.

On more than one occasion during my playthrough, I audibly cursed at my monitor and quit the game in a huff rather than facing the prospect of spending ten minutes retracing my steps after a particularly damaging fall. Invariably, though, I’d come back a bit later more determined than ever to learn from my mistakes, which I usually did quickly with the benefits of time and calm on my side.

It’s frequently not entirely clear where you’re supposed to go in Baby Steps. Credit: Devolver Digital

Baby Steps is also a game that’s happy to let you wander aimlessly. There’s no in-game map to consult, and any paths and landmarks that could point you in the “intended” way up the mountain are often intentionally confusing or obscured. It can be extremely unclear which parts of the terrain are meant to be impossibly steep and which are merely designed as difficult but plausible shortcuts that simply require pinpoint timing and foot placement. But the terrain is also designed so that almost every near-impossible barrier can be avoided altogether if you’re patient and observant enough to find a way around it.

And if you wander even slightly off the lightly beaten path, you’ll stumble on many intricately designed and completely optional points of interest, from imposing architectural towers to foreboding natural outcroppings to a miniature city made of boxes. There’s no explicit in-game reward for almost all of these random digressions, and your fellow cut-scene hikers will frequently explicitly warn you that there’s no point in climbing some structure or another. Your only reward is the (often marvelous) view from the top—and the satisfaction of saying that you conquered something you didn’t need to.

Are we having fun yet?

So was playing Baby Steps any fun? Honestly, that’s not the first word I’d use to describe the experience.

To be sure, there’s a lot of humor built into the intentionally punishing designs of some sections, so much so that I often had to laugh even as I fell down yet another slippery hill that erased a huge chunk of my progress. And the promise of more wild cut scenes serves as a pretty fun and compelling carrot to get you through some of the game’s toughest sections.

I’ve earned this moment of zen. Credit: Devolver Digital

More than “fun,” though, I’d say my time with Baby Steps felt meaningful in a way few games do. Amid all the trolly humor and intentionally obtuse design decisions is a game whose very structure forces you to consider the value of perseverance and commitment.

This is a game that stands proudly against a lot of modern game design trends. It won’t loudly and explicitly point you to the next checkpoint with a huge on-screen arrow. You can’t inexorably grind out stat points in Baby Steps until your character is powerful enough to beat the toughest boss easily. You can’t restart a Baby Steps run and hope for a lucky randomized seed that will get you past a difficult in-game wall.

Baby Steps doesn’t hand you anything. Your abilities and inventory are the same at the game’s start as they are at the end. Any progress you make is defined solely by your mastery of the obtuse movement system and your slowly increasing knowledge of how to safely traverse ever more treacherous terrain.

It’s a structure that can feel punishing, unforgiving, tedious, and enraging in turns. But it’s also a structure that leads to moments of the most genuinely satisfying sense of achievement I can remember having in modern gaming.

It’s about a miles-long journey starting with a single, halting step. It’s about putting one foot in front of the other until you can’t anymore. It’s about climbing the mountain because it’s there. It’s about falling down 1,000 times and getting up 1,001 times.

What else is there in the end?


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.



Judge lets construction on an offshore wind farm resume

That did not, however, stop the administration from trying again, this time targeting a development called Revolution Wind, located a bit further north along the Atlantic coast. This time, however, the developer quickly sued, leading to Monday’s ruling. According to Reuters, after a two-hour court hearing at the District Court of DC, Judge Royce Lamberth termed the administration’s actions “the height of arbitrary and capricious” and issued a preliminary injunction against the hold on Revolution Wind’s construction. As a result, Orsted can restart work immediately.

The decision provides a strong indication of how Lamberth is likely to rule if the government pursues a full trial on the case. And while the Trump administration could appeal, it’s unlikely to see this injunction lifted unless it takes the case all the way to the Supreme Court. Given that Revolution Wind was already 80 percent complete, the case may become moot before it gets that far.



EU investigates Apple, Google, and Microsoft over handling of online scams

The EU is set to scrutinize whether Apple, Google, and Microsoft are failing to adequately police financial fraud online, as it steps up oversight of how Big Tech operates.

The EU’s tech chief Henna Virkkunen told the Financial Times that on Tuesday, the bloc’s regulators would send formal requests for information to the three US Big Tech groups, as well as global accommodation platform Booking Holdings, under powers granted by the Digital Services Act to tackle financial scams.

“We see that more and more criminal actions are taking place online,” Virkkunen said. “We have to make sure that online platforms really take all their efforts to detect and prevent that kind of illegal content.”

The move, which could later lead to a formal investigation and potential fines against the companies, comes amid transatlantic tensions over the EU’s digital rulebook. US President Donald Trump has threatened to punish countries that “discriminate” against US companies with higher tariffs.

Virkkunen stressed the commission looked at the operations of individual companies, rather than where they were based. She will scrutinize how Apple and Google are handling fake applications in their app stores, such as fake banking apps.

She said regulators would also look at fake search results in the search engines of Google and Microsoft’s Bing. The bloc wants to have more information about the approach Booking Holdings, whose biggest subsidiary Booking.com is based in Amsterdam, is taking to fake accommodation listings. It is the only Europe-based company among the four set to be scrutinized.



US intel officials “concerned” China will soon master reusable launch


“They have to have on-orbit refueling because they don’t access space as frequently as we do.”

File photo of a reusable Falcon 9 booster moments before landing on a recent flight at Cape Canaveral Space Force Station, Florida. Credit: SpaceX

SpaceX scored its 500th landing of a Falcon 9 first stage booster on an otherwise routine flight earlier this month, sending 28 Starlink communications satellites into orbit. Barring any unforeseen problems, SpaceX will mark the 500th re-flight of a Falcon first stage later this year.

A handful of other US companies, including Blue Origin, Rocket Lab, Relativity Space, and Stoke Space, are on the way to replicating or building on SpaceX’s achievements in recycling rocket parts. These launch providers are racing a medley of Chinese rocket builders to become the second company to land and reuse a first stage booster.

But it will be many years—perhaps a decade or longer—until anyone else matches the kinds of numbers SpaceX is racking up in the realm of reusable rockets. SpaceX’s dominance in this field is one of the most important advantages the United States has over China as competition between the two nations extends into space, US Space Force officials said Monday.

“It’s concerning how fast they’re going,” said Brig. Gen. Brian Sidari, the Space Force’s deputy chief of space operations for intelligence. “I’m concerned about when the Chinese figure out how to do reusable lift that allows them to put more capability on orbit at a quicker cadence than currently exists.”

Taking advantage

China has used 14 different types of rockets on its 56 orbital-class missions this year, and none have flown more than 11 times. Eight US rocket types have cumulatively flown 142 times, with 120 of those using SpaceX’s workhorse Falcon 9. Without a reusable rocket, China must maintain more rocket companies to sustain a launch rate of just one-third to one-half that of the United States.

This contrasts with the situation just four years ago, when China outpaced the United States in orbital rocket launches. The growth in US launches has been a direct result of SpaceX’s improvements to launch at a higher rate, an achievement primarily driven by the recovery and reuse of Falcon 9 boosters and payload fairings. Last month, SpaceX flew one of its Falcon 9 boosters for the 30th time, and in March it set a record of nine days for the shortest turnaround between flights of the same booster.

“They’ve put more satellites on orbit,” Sidari said, referring to China. “They still do not compare to the US, but it is concerning once they figure out that reusable lift. The other one is the megaconstellations. They’ve seen how the megaconstellations provide capability to the US joint force and the West, and they’re mimicking it. So, that does concern me, how fast they’re going, but we’ll see. It’s easier said than done. They do have to figure it out, and they do have some challenges that we haven’t dealt with.”

One of those challenges is China’s continued reliance on expendable rockets. This has made it more important for China to make “game-changing” advancements in other areas, according to Chief Master Sgt. Ron Lerch, the Space Force’s senior enlisted advisor for intelligence.

Lerch pointed to the recent refueling of a Chinese satellite in geosynchronous orbit, more than 22,000 miles (nearly 36,000 kilometers) over the equator. China’s Shijian-21 and Shijian-25 satellites, known as SJ-21 and SJ-25 for short, came together on July 2 and have remained together ever since, according to open source orbital tracking data.

No one has refueled a spacecraft so far from Earth before. SJ-25 appears to be the refueler for SJ-21, a Chinese craft capable of latching onto other satellites and towing them to different orbits. Chinese officials say SJ-21 is testing “space debris mitigation” techniques, but US officials have raised concerns that China is testing a counter-space weapon that could sidle up to an American or allied satellite and take control of it.

Lerch said satellite refueling is more important to China than it is to the United States. With refueling, China can achieve a different kind of reuse in space while the government waits for reusable rockets to enter service.

“They have to have on-orbit refueling as a capability because they don’t access space as frequently as we do,” Lerch said Monday at the Air Force Association’s Air, Space, and Cyber Conference. “When it comes to replenishing our toolkit, getting more capability (on orbit) and reconstitution, having reusable launch is what affords us that ability, and the Chinese don’t have that. So, pursuing things like refueling on orbit, it is game-changing for them.”

The Nebula 1 rocket from China’s Deep Blue Aerospace just before attempting to land on a vertical takeoff, vertical landing test flight last year. Credit: Deep Blue Aerospace

SpaceX’s rapid-fire cadence is pivotal for a number of US national security programs. The Pentagon uses SpaceX’s Starlink satellites, which take up most of the Falcon 9 launch capacity, for commercial-grade global connectivity. SpaceX’s Starshield satellite platform, derived from the Starlink design, has launched in stacks of up to 22 spacecraft on a single Falcon 9 to deploy a constellation of hundreds of all-seeing spy satellites for the National Reconnaissance Office. The most recent batch of these Starshield satellites launched Monday.

Cheaper, readily available launch services will also be critical to the Pentagon’s aspirations to construct a missile shield to defend against attacks on the US homeland. Sensors and interceptors for the military’s planned Golden Dome missile defense system will be scattered throughout low-Earth orbit.

SpaceX’s inventory of Falcon 9 rockets has enabled the Space Force to move closer to realizing on-demand launch services. On two occasions within the last year, the Space Force asked SpaceX to launch a GPS navigation satellite with just a few months of lead time to prepare for the mission. With a fleet of reusable rockets at the ready, SpaceX delivered.

Meanwhile, China recently started deploying its own satellite megaconstellations. Chinese officials claim these new satellite networks will be used for Internet connectivity. That may be so, but Pentagon officials worry China can use them for other purposes, just as the Space Force is doing with Starlink, Starshield, and other programs.

Copycats in space

Lerch mentioned two other recent Chinese actions in space that have his attention. One is the launch of five Tongxin Jishu Shiyan (TJS) satellites, or what China calls communication technology test satellites, into geosynchronous orbit since January, something Lerch called “highly unusual.” Chinese authorities released (rather interesting) patches for four of these TJS satellites, suggesting they are part of a family of spacecraft.

“More importantly, these spacecraft sitting at GEO (geosynchronous orbit) are not supposed to be sliding all around the GEO belt,” Lerch said. “But the history of these experimental spacecraft have shown that that’s exactly what they do, which is very uncharacteristic for a system that’s supposed to be providing satellite communications.”

US officials believe China uses at least some of the TJS satellites for missile warning or spy missions. TJS satellites in a reconnaissance role might carry enormous umbrella-like reflectors to try to pick up communication signals transmitted by foreign forces, such as those of the United States.

A modified Long March 7 rocket carrying the Yaogan 45 satellite lifts off from the Wenchang Space Launch Site on September 9, 2025, in Wenchang, Hainan Province of China. Credit: Luo Yunfei/China News Service/VCG via Getty Images

China also launched a spy satellite called Yaogan 45 into a peculiar orbit earlier this month. (Yaogan is a cover name for China’s military spy satellites.) Yaogan 45 is a remote sensing platform, Lerch said, but it’s flying much higher than a typical Earth-imaging satellite. Instead of orbiting a few hundred miles above the Earth, Yaogan 45 circles at an altitude of some 4,660 miles (7,500 kilometers).

“That, alone, is very interesting,” Lerch said.

But US intelligence officials believe there’s more to the story. China launched the country’s first two communications satellites into a so-called medium-Earth orbit, or MEO, last year. These satellites are the first in a network called Smart Skynet.

“It looks like a year ago they started to put the infrastructure at MEO to be able to move around data, and then a year later, the Chinese are now putting remote sensing capability at MEO as well,” Lerch said. “That’s interesting, and that starts to paint a picture that they value remote sensing to the point where they want resiliency in layers of it.”

China launched a satellite named Yaogan 41 into geosynchronous orbit in 2023, carrying a sharp-eyed telescope sensitive enough to track car-sized objects on the ground and at sea. From its perch in geosynchronous orbit, Yaogan 41 will provide China’s military with a continuous view of the Indo-Pacific region. A single satellite in low-Earth orbit offers only fleeting views.

Some of this may sound familiar if you follow what the US military and the National Reconnaissance Office are doing with their satellites.

“Our military power has served as a bit of an open book, and adversaries have watched and observed us for years,” said Lt. Gen. Max Pearson, the Air Force’s deputy chief of staff for intelligence.

China’s military has “observed how we fight, the techniques we use, the weapons systems we have,” Pearson said. “When you combine that with intellectual property theft that has fueled a lot of their modernization, they have deliberately developed and modernized to counter our American way of war.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

US intel officials “concerned” China will soon master reusable launch

h1-b-and-the-$100k-fee

H1-B And The $100k Fee

The Trump Administration is attempting to put a $100k fee on future H1-B applications, including those that are exempt from the lottery and cap, unless of course they choose to waive it for you. I say attempting because Trump’s legal ability to do this appears dubious.

This post mostly covers the question of whether this is a reasonable policy poorly implemented, or a terrible policy poorly implemented.

Details offered have been contradictory, with many very confused about what is happening, including those inside the administration. The wording of the Executive Order alarmingly suggested that any H1-B visa holders outside the country even one day later would be subject to the fee, causing mad scrambles until this was ‘clarified.’ There was chaos.

Those so-called ‘clarifications’ citing ‘facts’ gaslit the public, changing key aspects of the policy, and I am still not confident about what exactly the Trump administration will actually do here.

Trump announced he is going to charge a $100,000 fee for H-1B visas, echoing his suspension of such visas back in 2020 (although a court overruled him then, and is likely to overrule the new fee as well). It remains remarkable that some, such as Jason and the All-In Podcast, believed or claimed to believe that Trump would be a changed man on this.

But what did this announcement actually mean? Annual or one-time? Existing ones or only new ones? Lottery only or also others? On each re-entry to the country? What the hell is actually happening?

I put it that way because, in addition to the constant threat that Trump will alter the deal and the suggestion that you pray he does not alter it any further, there was a period where no one could even be confident what the announced policy was, and the written text of the executive order did not line up with what administration officials were saying. It is entirely plausible that they themselves did not know which policy they were implementing, or different people in the administration had different ideas about what it was.

It would indeed be wise to not trust the exact text of the Executive Order, on its own, to be reflective of future policy in such a spot. Nor would it be wise to trust other statements that contradict the order. It is chaos, and causes expensive chaos.

The exact text is full of paranoid zero-sum logic. It treats it as a problem that we have 2.5 million foreign STEM workers in America, rather than the obvious blessing that it is. It actively complains that the unemployment rate in ‘computer occupations’ has risen from 1.98% to (gasp!) 3.02%. Even if we think that has zero to do with AI, or the business cycle or interest rates, isn’t that pretty low? Instead, ‘a company hired H1-B workers and also laid off other workers’ is considered evidence of widespread abuse of the system, when there is no reason to presume these actions are related, or that these firms reduced the number of American jobs.

As I read the document, in 1a it says entry for such workers is restricted ‘except for those aliens whose petitions are accompanied or supplemented by a payment of $100k,’ which certainly sounds like it applies to existing H1-Bs. Then in section 3 it clarifies that this restriction applies after the date of this proclamation – so actual zero days of warning – and declines another clear opportunity to clarify that this does not apply to existing H1-B visas.

I don’t know why anyone reading this document would believe that this would not apply to existing H1-B visa holders attempting re-entry. GPT-5 Pro agrees with this and my other interpretations (which I made on my own first).

Separately, 1b then says they won’t consider applications that don’t come with a payment; note also that the payment looks to be due on application, not on acceptance. Imagine paying $100k and then getting your application refused.

There is then the ability in 1c for the Secretary of Homeland Security to waive the fee, setting up an obvious opportunity for playing favorites and for outright corruption, the government picking winners and losers. Why pay $100k to the government when you can instead pay $50k to someone else? What happens when this decision is then used as widespread leverage?

Justin Wolfers: Critical part of the President’s new $100,000 charge for H1-B visas: The Administration can also offer a $100,000 discount to any person, company, or industry that it wants. Replacing rules with arbitrary discretion.

Want visas? You know who to call and who to flatter.

Here’s some people being understandably very confused.

Sheel Mohnot (7.6m views): $100k fee for H-1B’s is reasonable imo [link goes to Bloomberg story that said it was per application].

Generates revenue & reduces abuse for the H-1B program and talented folks can still come via other programs.

Sheel Mohnot (8.1m views): I initially thought the H-1B reform was reasonable but it’s $100k per year, not per visa.

That is too much, will definitely have a negative effect on innovation in the US. It’s basically only for people who make >$400k now.

Luba Lesiva: It’s per grant and per renewal – but are renewals actually annual? I thought it was every 3 years

Sheel Mohnot: Lutnick says per year but the text says per grant and wouldn’t apply to renewals.

Garrett Jones: They keep Lutnick out of the loop.

Daniel: It’s fun that this executive order was so poorly written that members of the administration are publicly disagreeing about what it means.

Trump EOs are like koans. Meditate on them and you will suddenly achieve enlightenment.

Here’s some people acting highly sensibly in response to what was happening, even if they had closely read the Executive Order:

Gabbar Singh: A flight from US to India. People boarded the flight. Found out about the H1B news. Immediately disembarked fearing they won’t be allowed re-entry.

Quoted Text Message: Experiencing immediate effect of the H1B rukes. Boarded Emirates flight to Dubai/ Mumbai. Flight departure time 5.05 pm. Was held up because a passenger or family disembarked and they had to remove checked in luggage. Then looked ready to leave and another passenger disembarks. Another 20 minutes and saw about 10–15 passengers disembarking. Worried about reentry. Flight still on ground after 2 hours. Announcement from cabin crew saying if you wish to disembark please do so now. Cabin is like an indian train. Crazy.

Rohan Paul: From Microsoft to Facebook, all tech majors ask H-1B visa holders to return back to the US in less than 24 hours.

Starting tomorrow (Sunday), H1B holders can’t reenter the US without paying $100K.

The number of last-minute flight bookings to the US have seen massive spike.

4chan users are blocking India–US flights by holding tickets at checkout so Indian H1B holders can’t book before the deadline. 😯

Typed Female: the people on here deriving joy from these type of posts are very sick

James Blunt: Very sad.

Giving these travelers even a grace period until October 1st would have been the human thing to do.

Instead, people are disembarking from flights out of fear they won’t be let back into the country they’ve lived, worked, and paid taxes in for years.

Cruelty isn’t policy, it’s just cruelty.

Any remotely competent drafter of the Executive Order, or anyone with access to a good LLM to ask obvious questions about likely response, would have known the chaos and harm this would cause. One can only assume they knew and did it anyway.

The White House later attempted to clear this up as a one-time fee on new visas only.

Aaron Reichlin-Melnick: Oh my GOD. This is the first official confirmation that the new H-1B entry ban does not apply to people with current visas, (despite the text not containing ANY exception for people with current visas), and it’s not even something on an official website, it’s a tweet!

Incredible level of incompetence to write a proclamation THIS unclear and then not release official guidance until 24 hours later!

It is not only incompetence, it is very clearly gaslighting everyone involved: saying the clarifications were unnecessary when not only were they needed, they contradict the language in the original document.

Rapid Response 47 (2.1m views, an official Twitter account, September 20, 2:58pm): Corporate lawyers and others with agendas are creating a lot of FAKE NEWS around President Trump’s H-1B Proclamation, but these are FACTS:

  1. The Proclamation does not apply to anyone who has a current visa.

  2. The Proclamation only applies to future applicants in the February lottery who are currently outside the U.S. It does not apply to anyone who participated in the 2025 lottery.

  3. The Proclamation does not impact the ability of any current visa holder to travel to/from the U.S.

That post has the following highly accurate community note:

Karoline Leavitt (White House Press Secretary, via Twitter, September 20, 5:11pm):

To be clear:

  1. This is NOT an annual fee. It’s a one-time fee that applies only to the petition.

  2. Those who already hold H-1B visas and are currently outside of the country right now will NOT be charged $100,000 to re-enter. H-1B visa holders can leave and re-enter the country to the same extent as they normally would; whatever ability they have to do that is not impacted by yesterday’s proclamation.

  3. This applies only to new visas, not renewals, and not current visa holders. It will first apply in the next upcoming lottery cycle.

CBP (Official Account, on Twitter, 5:22pm): Let’s set the record straight: President Trump’s updated H-1B visa requirement applies only to new, prospective petitions that have not yet been filed. Petitions submitted prior to September 21, 2025 are not affected. Any reports claiming otherwise are flat-out wrong and should be ignored.

[Mirrored by USCIS at 5:35pm]

None of these three seem, to me, to have been described in the Executive Order. Nor would I trust Leavitt’s statements here to hold true, where they contradict the order.

Shakeel Hashim: Even if this is true, the fact they did not think to clarify this at the time of the announcement is a nice demonstration of this administration’s incompetence.

There’s also the issue that this is unlikely to stand up in court when challenged.

Meanwhile, here’s the biggest voice of support, who as of the next day, the 21st, still believed this was a yearly tax. Even the cofounder of Netflix is deeply confused.

Reed Hastings (CEO Powder, Co-Founder Netflix): I’ve worked on H1-B politics for 30 years. Trump’s $100k per year tax is a great solution. It will mean H1-B is used just for very high value jobs, which will mean no lottery needed, and more certainty for those jobs.

The issued FAQ came out on the 21st, two days after some questions had indeed been highly frequently asked. Is this FAQ binding? Should we believe it? Unclear.

Here’s what it says:

  1. $100k is a one-time new application payment.

  2. This does not apply to already filed petitions.

  3. This does apply to cap-exempt applications, including national labs, nonprofit research organizations and research universities, and there is no mention here of an exemption for hospitals.

  4. In the future the prevailing wage level will be raised.

  5. In the future there will be a rule to prioritize high-skilled, high-paid aliens in the H1-B lottery over those at lower wage levels (!).

  6. This does not do anything else, such as constrain movement.

If we are going to ‘prioritize high-skilled, high-paid aliens’ in the lottery, that’s great, lose the lottery entirely and pick by salary even, or replace the cap with a minimum salary and see what happens. But then why do we need the fee?

If they had put out this FAQ as the first thing, and it matched the language in the Executive Order, a lot of tsouris could have been avoided, and we could have a reasonable debate on whether the new policy makes sense, although it still is missing key clarifications, especially whether it applies to shifts out of J-1 or L visas.

Instead, the Executive Order was worded to cause panic, in ways that were either highly incompetent, malicious and intentional, or both. Implementation is already, at best, a giant unforced clusterfuck. If this was not going to apply to previously submitted petitions, there was absolutely zero reason to have this take effect with zero notice.

There is no reason to issue an order saying one (quite terrible) set of things only to ‘clarify’ into a different set of things a day later, in a way that still leaves everyone paranoid at best. There is now once again a permanent increase in uncertainty and resulting costs and frictions, especially since we have no reason to presume they won’t simply enforce illegal provisions anyway.

If this drives down the number of H1-B visas a lot, that would be quite bad, including to the deficit because the average H1-B visa holder contributes ~$40k more in taxes than they use in benefits, and creates a lot of economic value beyond that.

Could this still be a good idea, or close enough to become a good idea, if done well?

Maybe. If it was actually done well and everyone had certainty. At least, it could be better than the previous situation. The Reed Hastings theory isn’t crazy, and our tax revenue has to come from somewhere.

Caleb Watney (IFP): Quick H1-B takes:

Current effort is (probably) not legally viable without Congress.

There’s a WORLD of difference between applying a fee to applicants who are stuck in the lottery (capped employers) and to those who aren’t (research nonprofits and universities). If we apply this rule to the latter, we will dramatically reduce the number of international scientists working in the U.S.

In theory, a fee just on *capped* employers can help differentiate between high-value and low-value applications, but $100k seems too high (especially if it’s per year).

A major downside of a fee vs a compensation rank is that it creates a distortion for skilled talent choosing between the US and other countries. If you have the choice between a 300k offer in London vs a 200k offer in New York…

I would be willing to bite the bullet and say that every non-fraudulent H1-B visa issued under the old policy was a good H1-B visa worth issuing, and we should have raised or eliminated the cap, but one can have a more measured view and the system wasn’t working as designed or intended.

Jeremy Neufeld: The H-1B has huge problems and high-skilled immigration supporters shouldn’t pretend it doesn’t.

It’s been used far too much for middling talent, wage arbitrage, and downright fraud.

Use for middling talent and wage arbitrage (at least below some reasonable minimum that still counts as middling here) is not something I would have a problem with if we could issue unlimited visas, but I understand that others do have a problem with it, and also that there is no willingness to issue unlimited visas.

The bigger problem was the crowding out via the lottery. Given the cap in slots, every mediocre talent was taking up a slot that could have gone to better talent.

This meant that the best talent would often get turned down. It also meant that no one could rely or plan upon an H-1B. If I need to fill a position, why would I choose someone with a greater than 70% chance of being turned down?
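The lottery odds can be sketched with rough numbers. The registration count below is purely an assumption for illustration (actual annual totals have varied widely); the point is that the cap alone implies rejection rates in the range described above:

```python
# Back-of-envelope H-1B lottery odds. The registration count is an
# assumption for illustration, not an official figure.
registrations = 400_000          # assumed annual lottery registrations
cap = 85_000                     # statutory annual cap on slots
p_selected = cap / registrations
p_rejected = 1 - p_selected      # chance a given applicant is turned down
print(f"chance of being turned down: {p_rejected:.0%}")
```

Under these assumed numbers, nearly four out of five applicants lose the lottery, which is why no employer could plan around an H-1B hire.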

Whereas if you impose a price per visa, you can clear the market, and ensure we use it for the highest value cases.

If the price and implementation were chosen wisely, this would be good on the margin. Not a first best solution, because there should not be a limit on the number of H-1B visas, but a second or third best solution. This would have three big advantages:

  1. The H1-B visas go where they are most valuable.

  2. The government will want to issue more visas to make more money, or at least a lot of the pressure against the H1-B would go away.

  3. This provides a costly signal that no, you couldn’t simply hire an American, and if you’re not earning well over $100k you can feel very safe.

The central argument against the H1-B is that you’re taking a job away from an American to save money. A willingness to pay $100k is strong evidence that there really was a problem finding a qualified domestic applicant for the job.

You still have to choose the right price. When I asked GPT-5 Pro, it estimated that the market-clearing rate was only about a $30k one-time fee per visa, and thus that a $100k fee, even one-time, would collapse demand to below supply.

I press X to doubt that. Half of that is covered purely by savings on search costs, and there is a lot of talk that annual salaries for such workers are often $30k or more lower than American workers already.

I think we could, if implemented well, charge $100k one time, raise $2 billion or so per year, and still clear the market. Don’t underestimate search and uncertainty costs. There is a market at Manifold, as well as another on whether the $100k will actually get charged, and this market on net fees collected (although it includes a bunch of Can’t Happen options, so ignore the headline number).
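As a sanity check on the revenue math: one illustrative way to get to roughly $2 billion a year is about a quarter of the cap still clearing at full price. The uptake fraction here is entirely an assumption chosen for illustration:

```python
# Back-of-envelope revenue from a one-time $100k fee on new H-1B visas.
# The uptake fraction is an assumption, not a prediction.
fee = 100_000
annual_cap = 85_000
assumed_uptake = 0.25                          # fraction of slots still filled at this price
visas_issued = int(annual_cap * assumed_uptake)
revenue = visas_issued * fee
print(f"{visas_issued:,} fee-paying visas -> ${revenue / 1e9:.2f}B per year")
```

Vary the uptake assumption and the revenue scales linearly; at full uptake the same fee would raise $8.5 billion, which is one reason the true market-clearing price matters so much.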

I say second or third best because another solution is to award H1-Bs by salary.

Ege Erdil: some people struggle to understand the nuanced position of

– trump’s H-1B order is illegal & will be struck down

– but broadly trump’s attempts to reform the program are good

– a policy with $100k/yr fees is probably bad, a policy with $100k one-time fee would probably be good.

Maxwell Tabarrok: A massive tax on skilled foreign labor would not be good even if it corrects somewhat for the original sin of setting H1-B up as a lottery.

Just allocate the visas based on the wage offered to applicants.

Awarding by salary solves a lot of problems. It mostly allocates to the highest value positions. It invalidates the ‘put in tons of applications’ strategy big companies use. It invalidates the ‘bring them in to undercut American workers’ argument. It takes away the uncertainty in the lottery. IFP estimates this would raise the value of the H1-B program by 48% versus baseline.

Indeed, the Trump Administration appears to be intent on implementing this change, and prioritizing high skilled and highly paid applications in the lottery. Which is great, but again it makes the fee unnecessary.

There is a study that claims that winning the H1-B lottery is amazingly great for startups, so amazingly great it seems impossible.

Alex Tabarrok: The US offers a limited number of H1-B visas annually, these are temporary 3-6 year visas that allow firms to hire high-skill workers. In many years, the demand exceeds the supply which is capped at 85,000 and in these years USCIS randomly selects which visas to approve. The random selection is key to a new NBER paper by Dimmock, Huang and Weisbenner (published here). What’s the effect on a firm of getting lucky and winning the lottery?

The Paper: We find that a firm’s win rate in the H-1B visa lottery is strongly related to the firm’s outcomes over the following three years. Relative to ex ante similar firms that also applied for H-1B visas, firms with higher win rates in the lottery are more likely to receive additional external funding and have an IPO or be acquired.

Firms with higher win rates also become more likely to secure funding from high-reputation VCs, and receive more patents and more patent citations. Overall, the results show that access to skilled foreign workers has a strong positive effect on firm-level measures of success.

Alex Tabarrok: Overall, getting (approximately) one extra high-skilled worker causes a 23% increase in the probability of a successful IPO within five years (a 1.5 percentage point increase in the baseline probability of 6.6%). That’s a huge effect.

Remember, these startups have access to a labor pool of 160 million workers. For most firms, the next best worker can’t be appreciably different than the first-best worker. But for the 2000 or so tech-startups the authors examine, the difference between the world’s best and the US best is huge. Put differently on some margins the US is starved for talent.
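Tabarrok’s 23% figure is just the relative increase implied by the paper’s reported numbers:

```python
# Relative effect size implied by the paper's reported numbers (quoted above).
baseline_ipo_prob = 0.066          # baseline probability of a successful IPO
increase_pp = 0.015                # reported percentage-point increase
relative_increase = increase_pp / baseline_ipo_prob
print(f"{relative_increase:.0%}")  # roughly a 23% relative increase
```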

Roon: this seems hard to believe – how does one employee measurably improve the odds of IPO? but then this randomization is basically the highest standard of evidence.

Alec Stapp: had the same thought, but the paper points out that this would include direct effects and indirect effects, like increasing the likelihood of obtaining VC funding. Also noteworthy that they find a similar effect on successful exits.

Max Del Blanco: Sounds like it would be worth 100k!

Charles: There’s just no way this effect size is real tbh. When I see an effect this size I assume there’s a confounder I’m unaware of.

This is a double digit percentage increase in firm value, worth vastly more than $100k for almost any startup that is hiring. It should be easy to reclaim the $100k via fundraising more money on better terms, including in advance since VCs can be forward looking, even for relatively low-value startups, and recruitment will be vastly easier without a lottery in the way. For any companies in the AI space, valuations even at seed are now often eight or nine figures, so it is disingenuous to say this is not affordable.

Assume the lottery is fully random, and suppose the effect size were somehow real. How would we explain it?

My guess is that this would be because a startup, having to move quickly, would mostly only try to use the H1-B lottery when given access to exceptional talent, or for a role they flat out cannot otherwise fill. Roles at startups are not fungible and things are moving quickly, so the marginal difference is very high.

This would in turn suggest that the change is, if priced and implemented correctly, very good policy for the most important startups, and yes they would find a way to pay the $100k. However, like many others I doubt the study’s findings because the effect size seems too big.

A $100k annual fee, if that somehow happened and survived contact with the courts? Yeah, that would probably have done it, and would clearly have been intentional.

AP (at the time Lutnick thought the fee was annual): Lutnick said the change will likely result in far fewer H-1B visas than the 85,000 annual cap allows because “it’s just not economic anymore.”

Even then, I would have taken the over on the number of visas that get issued, versus others’ expectations or those of GPT-5. Yes, $100k per year is quite a lot, but getting the right employee can be extremely valuable.

In many cases you really can’t find an American to do the job at a price you can socially pay, and there are a lot of high value positions where given the current lottery you wouldn’t bother trying for an H1-B at all.

Consider radiologists, where open positions have often moved into the high six figures, and there are many qualified applicants overseas that can’t otherwise get paid anything like that.

Consider AI as well. A traditional ‘scrappy’ startup can’t pay $100k per year, but when seed rounds are going for tens or hundreds of millions, and good engineers are getting paid hundreds of thousands on the regular, then suddenly yes, yes you can pay.

I think the argument ‘all startups are scrappy and can’t afford such fees’ simply flies in the face of current valuations in the AI industry, where even seed stage raises often have valuations in the tens to hundreds of millions.

The ‘socially pay’ thing matters a lot. You can easily get into a pickle where the ‘standard’ pay for something is, let’s say, $100k, but the price to get a new hire is $200k.

If you pay the new hire $200k and anyone finds out, then your existing $100k employees will go apocalyptic unless you bump them up to $200k too. Relative pay has to largely match social status within the firm.

Whereas in this case, you’d be able to (for example) pay $100k in salary to an immigrant happy to take it, plus a $100k annual visa fee, without destroying the social order. It also gives you leverage over the employee.

If you change to a one-time fee, the employer doesn’t get to amortize it that much, but it is a lot less onerous. Is the fee too high even at a one time payment?

One objection here and elsewhere is ‘this is illegal and won’t survive in court’ but for now let’s do the thought experiment where it survives, perhaps by act of Congress.

Jeremy Neufeld: That doesn’t make the $100k fee a good solution.

  1. Outsourcers can just avoid the fee by bringing their people in on L visas and then enter them in the lottery.

  2. US companies face a competitive disadvantage in recruiting real talent from abroad since they’ll have to lower their compensation offers to cover the $100k.

  3. Research universities will recruit fewer foreign-trained scientists.

  4. It’s likely to get overturned in court so the long term effect is just signaling uncertainty and unpredictability to talent.

Another issue is that they are applying the fee to cap-exempt organizations, which seems obviously foolish.

This new FAQ from the White House makes it clear the $100k fee does apply to cap-exempt organizations.

That includes national labs and other government R&D, nonprofit research orgs, and research universities.

Big threat to US scientific leadership.

Nothing wrong in principle with tacking on a large fee to cap-subject H-1Bs to prioritize top talent but it needs a broader base (no big loophole for L to H-1B changes) and a lower rate.

(Although for better or for worse, Congress needs to do it.)

But the fee on cap-exempt H-1Bs is just stupid.

Presumably they won’t allow the L visa loophole or other forms of ‘already in the country,’ and will refuse to issue related visas without fees. Padme asks: they’re not so literally foolish as to issue such visas but stop the workers at the border anyway, and certainly not to let them take up lottery slots, are they? Are they?

On compensation, yes, presumably they will offer somewhat lower compensation than they would have otherwise, but they can also offer a much higher chance of a visa. It’s not obvious where this turns into a net win; note that it costs a lot more than $100k to pay an employee $100k in salary, and recall the social dynamics discussed above. I’m not convinced the hit here will be all that big.

I certainly would bet against hyperbolic claims like this one from David Bier at Cato, who predicts that this will ‘effectively end’ the H-1B visa category.

David Bier: This fee would effectively end the H‑1B visa category by making it prohibitive for most businesses to hire H‑1B workers. This would force leading technology companies out of the United States, reduce demand for US workers, reduce innovation, have severe second-order economic effects, and lower the supply of goods and services in everything from IT and education to manufacturing and medicine.

Research universities will recruit a lot fewer foreign-trained scientists if and only if both of the following are true:

  1. The administration does not issue waivers for research scientists, despite this being clearly in the public interest.

  2. The perceived marginal value of the research scientist is not that high, such that universities decline to pay the application fee.

That does seem likely to happen often. It also seems like a likely point of administration leverage, as they are constantly looking for leverage over universities.

America intentionally caps the number of doctors we train. The last thing we want to do is make our intentionally created doctor shortage even worse. A lot of people warned that this will go very badly, and it looks like Trump is likely to waive the fee for doctors.

Note that according to GPT-5 Pro, residencies mostly don’t currently use H-1B; rather, they mostly use J-1. The real change would be if they cut off shifting to H-1B, which would prevent us from retaining those residents once they finish. Which would still be a very bad outcome, if the rules were sustained for that long. That would in the long term be far worse than having our medical schools expand. This is one of the places where yes, Americans very much want these jobs and could become qualified for them.

Of course the right answer here was always to open more slots and train more doctors.

Our loss may partly be the UK’s gain.

Alec Stapp: My feed is full of smart people in the UK pouncing on the opportunity to poach more global talent for their own country.

We are making ourselves weaker and poorer by turning away scientists, engineers, and technologists who want to contribute to the US.

Alex Cheema: If you’re a talented engineer affected by the H-1B changes, come build with us in London @exolabs

– SF-level comp (270K-360K base + equity)

– Best talent from Europe

– Hardcore build culture

– Build something important with massive distribution

Email jobs at exolabs dot net


a-history-of-the-internet,-part-3:-the-rise-of-the-user

A history of the Internet, part 3: The rise of the user


the best of times, the worst of times

The reins of the Internet are handed over to ordinary users—with uneven results.

Everybody get together. Credit: D3Damon/Getty Images


Welcome to the final article in our three-part series on the history of the Internet. If you haven’t already, catch up with part one and part two.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. It later evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol. By the late 1980s, a small group of academics and a few curious consumers connected to each other on the Internet, which was still mostly text-based.

In 1991, Tim Berners-Lee invented the World Wide Web, an Internet-based hypertext system designed for graphical interfaces. At first, it ran only on the expensive NeXT workstation. But when Berners-Lee published the web’s protocols and made them available for free, people built web browsers for many different operating systems. The most popular of these was Mosaic, written by Marc Andreessen, who formed a company to create its successor, Netscape. Microsoft responded with Internet Explorer, and the browser wars were on.

The web grew exponentially, and so did the hype surrounding it. It peaked in early 2001, right before the dotcom collapse that left most web-based companies nearly or completely bankrupt. Some people interpreted this crash as proof that the consumer Internet was just a fad. Others had different ideas.

Larry Page and Sergey Brin met each other at a graduate student orientation at Stanford in 1996. Both were studying for their PhDs in computer science, and both were interested in analyzing large sets of data. Because the web was growing so rapidly, they decided to start a project to improve the way people found information on the Internet.

They weren’t the first to try this. Hand-curated sites like Yahoo had already given way to more algorithmic search engines like AltaVista and Excite, which both started in 1995. These sites attempted to find relevant webpages by analyzing the words on every page.

Page and Brin’s technique was different. Their “BackRub” software created a map of all the links that pages had to each other. Pages on a given subject that had many incoming links from other sites were given a higher ranking for that keyword. Higher-ranked pages could then contribute a larger score to any pages they linked to. In a sense, this was like a crowdsourcing of search: When people put “This is a good place to read about alligators” on a popular site and added a link to a page about alligators, it did a better job of determining that page’s relevance than simply counting the number of times the word appeared on a page.

Step 1 of the simplified BackRub algorithm. It also stores the position of each word on a page, so it can make a further subset for multiple words that appear next to each other. Credit: Jeremy Reimer
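The iterative idea behind this link analysis can be sketched in a few lines of Python. This is a toy illustration with a made-up four-page graph and a standard damping factor, not Page and Brin’s actual code:

```python
# Toy link graph: each page lists the pages it links to.
links = {
    "alligator-facts": ["zoo-home"],
    "zoo-home": ["alligator-facts", "ticket-info"],
    "ticket-info": ["zoo-home"],
    "spam-page": ["spam-page"],  # links only to itself
}

def rank(links, damping=0.85, iterations=50):
    """Repeatedly share each page's score across its outgoing links."""
    n = len(links)
    scores = {page: 1.0 / n for page in links}
    for _ in range(iterations):
        new = {page: (1 - damping) / n for page in links}
        for page, outgoing in links.items():
            share = scores[page] / len(outgoing)
            for target in outgoing:
                new[target] += damping * share
        scores = new
    return scores

scores = rank(links)
```

Running `rank` on this graph gives “zoo-home,” which has two incoming links, a higher score than the pages that merely link to it, and higher-ranked pages pass a larger share along to whatever they link to.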

Creating a connected map of the entire World Wide Web with indexes for every word took a lot of computing power. The pair filled their dorm rooms with any computers they could find, paid for by a $10,000 grant from the Stanford Digital Libraries Project. Many were cobbled together from spare parts, including one with a case made from imitation LEGO bricks. Their web scraping project was so bandwidth-intensive that it briefly disrupted the university’s internal network. Because neither of them had design skills, they coded the simplest possible “home page” in HTML.

In August 1996, BackRub was made available as a link from Stanford’s website. A year later, Page and Brin rebranded the site as “Google.” The name was an accidental misspelling of googol, a term coined by a mathematician’s young nephew to describe a 1 with 100 zeros after it. Even back then, the pair was thinking big.

Google.com as it appeared in 1998. Credit: Jeremy Reimer

By mid-1998, their prototype was getting over 10,000 searches a day. Page and Brin realized they might be onto something big. It was nearing the height of the dotcom mania, so they went looking for some venture capital to start a new company.

But at the time, search engines were considered passé. The new hotness was portals, sites that had some search functionality but leaned heavily into sponsored content. After all, that’s where the big money was. Page and Brin tried to sell the technology to AltaVista for $1 million, but its parent company passed. Excite also turned them down, as did Yahoo.

Frustrated, they decided to hunker down and keep improving their product. Brin created a colorful logo using the free GIMP paint program, and they added a summary snippet to each result. Eventually, the pair received $100,000 from angel investor Andy Bechtolsheim, who had co-founded Sun Microsystems. That was enough to get the company off the ground.

Page and Brin were careful with their money, even after they received millions more from venture capital firms. They preferred cheap commodity PC hardware and the free Linux operating system as they expanded their system. For marketing, they relied mostly on word of mouth. This allowed Google to survive the dotcom crash that crippled its competitors.

Still, the company eventually had to find a source of income. The founders were concerned that if search results were influenced by advertising, it could lower the usefulness and accuracy of the search. They compromised by adding short, text-based ads that were clearly labeled as “Sponsored Links.” To cut costs, they created a form so that advertisers could submit their own ads and see them appear in minutes. They even added a ranking system so that more popular ads would rise to the top.

The combination of a superior product with less intrusive ads propelled Google to dizzying heights. In 2024, the company collected over $350 billion in revenue, with $112 billion of that as profit.

Information wants to be free

The web was, at first, all about text and the occasional image. In 1997, Netscape added the ability to embed small music files in the MIDI sound format that would play when a webpage was loaded. Because the songs only encoded notes, they sounded tinny and annoying on most computers. Good audio or songs with vocals required files that were too large to download over the Internet.

But this all changed with a new file format. In 1993, researchers at the Fraunhofer Institute developed a compression technique that eliminated portions of audio that human ears couldn’t detect. Suzanne Vega’s song “Tom’s Diner” was used as the first test of the new MP3 standard.
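The core idea, vastly simplified from MP3’s actual psychoacoustic model, is to transform audio into frequency components and discard the ones too quiet to hear. The sketch below is a toy illustration: the signal, the threshold, and the naive transform are all invented for demonstration, not Fraunhofer’s algorithm:

```python
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform, returning per-bin magnitudes."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        im = sum(-x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        mags.append(math.hypot(re, im) / n)
    return mags

n = 64
# A loud tone at frequency bin 4 plus a much quieter one at bin 11.
signal = [math.sin(2 * math.pi * 4 * t / n) + 0.01 * math.sin(2 * math.pi * 11 * t / n)
          for t in range(n)]

mags = dft_magnitudes(signal)
# Keep only components loud enough to matter; drop everything else.
threshold = 0.05 * max(mags)
kept = [k for k, m in enumerate(mags) if m > threshold]
```

The quiet tone at bin 11 is discarded, and only the loud tone survives; real MP3 encoders make this decision with a model of human hearing (masking, frequency sensitivity) rather than a fixed cutoff.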

Now, computers could play back reasonably high-quality songs from small files using software decoders. WinPlay3 was the first, but Winamp, released in 1997, became the most popular. People started putting links to MP3 files on their personal websites. Then, in 1999, Shawn Fanning released a beta of a product he called Napster. This was a desktop application that relied on the Internet to let people share their MP3 collections and search everyone else’s.

Napster as it would have appeared in 1999. Credit: Jeremy Reimer

Napster almost immediately ran into legal challenges from the Recording Industry Association of America (RIAA). It sparked a debate about sharing things over the Internet that persists to this day. Some artists agreed with the RIAA that downloading MP3 files should be illegal, while others (many of whom had been financially harmed by their own record labels) welcomed a new age of digital distribution. Napster lost its case against the RIAA and shut down its service in 2001. This didn’t stop people from sharing files, but replacement tools like eDonkey 2000, LimeWire, Kazaa, and BearShare lived in a legal gray area.

In the end, it was Apple that figured out a middle ground that worked for both sides. In 2003, two years after launching its iPod music player, Apple announced the Internet-only iTunes Store. Steve Jobs had signed deals with all five major record labels to allow legal purchasing of individual songs for 99 cents each, or full albums for $10. The songs initially carried Apple’s FairPlay copy protection, which the store phased out by 2009. By 2010, the iTunes Store was the largest music vendor in the world.

iTunes 4.1, released in 2003. This was the first version for Windows and introduced the iTunes Store to a wider world. Credit: Jeremy Reimer

The Web turns 2.0

Tim Berners-Lee’s original vision for the web was simply to deliver and display information. It was like a library, but with hypertext links. But it didn’t take long for people to start experimenting with information flowing the other way. In 1994, Netscape 0.9 added new HTML tags like FORM and INPUT that let users enter text and, using a “Submit” button, send it back to the web server.

Early web servers didn’t know what to do with this text. But programmers developed extensions that let a server run programs in the background. The standardized “Common Gateway Interface” (CGI) made it possible for a “Submit” button to trigger a program (usually in a /cgi-bin/ directory) that could do something interesting with the submission, like talking to a database. CGI scripts could even generate new webpages dynamically and send them back to the user.
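A CGI program was simply an executable: the server handed it the form submission (on standard input, for a POST) and sent whatever the program printed back to the browser. A hypothetical guestbook handler conveys the flavor; this is a sketch of the mechanism, not any real server’s code:

```python
#!/usr/bin/env python3
# Minimal CGI-style handler: the server pipes the URL-encoded form body
# in, and expects a Content-Type header, a blank line, then the page.
from urllib.parse import parse_qs

def handle(form_body):
    """Parse a form submission and generate a dynamic HTML response."""
    fields = parse_qs(form_body)
    name = fields.get("name", ["stranger"])[0]
    return (
        "Content-Type: text/html\r\n"
        "\r\n"
        f"<html><body><h1>Thanks for signing, {name}!</h1></body></html>"
    )

if __name__ == "__main__":
    # A real server would pipe the POST body in; simulate one submission.
    print(handle("name=Alice&color=green"))
```

Every “Submit” click re-ran a program like this on the server, which is what made pages dynamic rather than static files.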

This intelligent two-way interaction changed the web forever. It enabled things like logging into an account on a website, web-based forums, and even uploading files directly to a web server. Suddenly, a website wasn’t just a page that you looked at. It could be a community where groups of interested people could interact with each other, sharing both text and images.

Dynamic webpages led to the rise of blogging, first as an experiment (some, like Justin Hall’s and Dave Winer’s, are still around today) and then as something anyone could do in their spare time. Websites in general became easier to create with sites like GeoCities and Angelfire, which let people build their own personal dream house on the web for free. A community-run dynamic linking site, webring.org, connected similar websites together, encouraging exploration.

Webring.org was a free, community-run service that allowed dynamically updated webrings. Credit: Jeremy Reimer

One of the best things to come out of Web 2.0 was Wikipedia. It arose as a side project of Nupedia, an online encyclopedia founded by Jimmy Wales, with articles written by volunteers who were subject matter experts. This process was slow, and the site only had 21 articles in its first year. Wikipedia, in contrast, allowed anyone to contribute and review articles, so it quickly outpaced its predecessor. At first, people were skeptical about letting random Internet users edit articles. But thanks to an army of volunteer editors and a set of tools to quickly fix vandalism, the site flourished. Wikipedia far surpassed works like the Encyclopedia Britannica in sheer numbers of articles while maintaining roughly equivalent accuracy.

Not every Internet innovation lived on a webpage. In 1988, Jarkko Oikarinen created a program called Internet Relay Chat (IRC), which allowed real-time messaging between individuals and groups. IRC clients for Windows and Macintosh were popular among nerds, but friendlier applications like PowWow (1994), ICQ (1996), and AIM (1997) brought messaging to the masses. Even Microsoft got in on the act with MSN Messenger in 1999. For a few years, this messaging culture was an important part of daily life at home, school, and work.

A digital recreation of MSN Messenger from 2001. Sadly, Microsoft shut down the servers in 2014. Credit: Jeremy Reimer

Animation, games, and video

While the web was evolving quickly, the slow speeds of dial-up modems limited the size of files you could upload to a website. Static images were the norm. Animation only appeared in heavily compressed GIF files with a few frames each.

But a new technology blasted past these limitations and unleashed a torrent of creativity on the web. In 1995, Macromedia released Shockwave Player, an add-on for Netscape Navigator. Paired with Macromedia’s Director authoring software, it allowed artists to create animations based on vector drawings. These were small enough to embed inside webpages.

Websites popped up to support this new content. Newgrounds.com, which began in 1995 as a Neo-Geo fan site, started collecting the best animations. Because Director was designed to create interactive multimedia for CD-ROM projects, it also supported keyboard and mouse input and had basic scripting. This meant that people could make simple games that ran in Shockwave. Newgrounds eagerly showcased these as well, giving many aspiring artists and game designers an entry point into their careers. Super Meat Boy, for example, was first prototyped on Newgrounds.

Newgrounds as it would have appeared circa 2003. Credit: Jeremy Reimer

Putting actual video on the web seemed like something from the far future. But the future arrived quickly. After the dotcom crash of 2001, there were many unemployed web programmers with a lot of time on their hands to experiment with their personal projects. The arrival of broadband with cable modems and digital subscriber lines (DSL), combined with the new MPEG-4 compression standard, made a lot of formerly impossible things possible.

In early 2005, Chad Hurley, Steve Chen, and Jawed Karim launched YouTube.com. Initially, it was meant to be an online dating site, but that service failed. The site, however, had great technology for uploading and playing videos. It used Macromedia’s Flash, a new technology so similar to Shockwave that the company marketed it as Shockwave Flash. YouTube allowed anybody to upload videos up to ten minutes in length for free. It became so popular that Google bought it a year later for $1.65 billion.

All these technologies combined to provide ordinary people with the opportunity, however brief, to make an impact on popular culture. An early example was the All Your Base phenomenon. An animated GIF of an obscure, mistranslated Sega Genesis game inspired indie musicians The Laziest Men On Mars to create a song and distribute it as an MP3. The popular humor site somethingawful.com picked it up, and users in the Photoshop Friday forum thread created a series of humorous images to go along with the song. Then in 2001, the user Bad_CRC took the song and the best of the images and put them together in an animation they shared on Newgrounds. The video gained such wide popularity that it was reported on by USA Today.

You have no chance to survive make your time.

Media goes social

In the early 2000s, most websites were either blogs or forums—and frequently both. Forums had multiple discussion boards, both general and specific. They often leaned into a specific hobby or interest, and anyone with that interest could join. There were also a handful of dating websites, like kiss.com (1994), match.com (1995), and eHarmony.com (2000), that specifically tried to connect people who might have a romantic interest in each other.

The Swedish Lunarstorm was one of the first social media websites. Credit: Jeremy Reimer

The road to social media was a hazy and confusing merging of these two types of websites. Classmates.com (1995) served as a way to reconnect with former school chums, and the following year, the Swedish site lunarstorm.com opened with this mission:

Everyone has their own website called Krypin. Each babe [this word is an accurate translation] has their own Krypin where she or he introduces themselves, posts their diaries and their favorite files, which can be anything from photos and their own songs to poems and other fun stuff. Every LunarStormer also has their own guestbook where you can write if you don’t really dare send a LunarEmail or complete a Friend Request.

In 1997, sixdegrees.com opened, based on the truism that everyone on earth is connected with six or fewer degrees of separation. Its About page said, “Our free networking services let you find the people you want to know through the people you already know.”
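The “degrees of separation” idea maps directly onto a breadth-first search over a friendship graph. Here is a toy sketch with invented names, not sixdegrees.com’s actual code:

```python
from collections import deque

# A toy friendship graph (symmetric: friendship goes both ways).
friends = {
    "ann": {"bob", "carol"},
    "bob": {"ann", "dave"},
    "carol": {"ann"},
    "dave": {"bob", "erin"},
    "erin": {"dave"},
}

def degrees(graph, start, target):
    """Return the number of friendship hops from start to target, or None."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, hops = queue.popleft()
        if person == target:
            return hops
        for friend in graph[person]:
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return None  # not connected at all
```

Finding “the people you want to know through the people you already know” is just this search stopped at a small hop count.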

By the time friendster.com opened its doors in 2002, the concept of “friending” someone online was already well established, although it was still a niche activity. LinkedIn.com, launched the following year, used the excuse of business networking to encourage this behavior. But it was MySpace.com (2003) that was the first to gain significant traction.

MySpace was initially a Friendster clone written in just ten days by employees at eUniverse, an Internet marketing startup founded by Brad Greenspan. It became the company’s most successful product. MySpace combined the website-building ability of sites like GeoCities with social networking features. It took off incredibly quickly: in just three years, it surpassed Google as the most visited website in the United States. Hype around MySpace reached such a crescendo that Rupert Murdoch purchased it in 2005 for $580 million.

But a newcomer to the social media scene was about to destroy MySpace. Just as Google crushed its competitors, this startup won by providing a simpler, more functional, and less intrusive product. TheFacebook.com began as Mark Zuckerberg and his college roommates’ attempt to replace their college’s online directory. Zuckerberg’s first student website, “Facemash,” had been created by breaking into Harvard’s network, and its sole feature was to provide “Hot or Not” comparisons of student photos. Facebook quickly spread to other universities, and in 2006 (after dropping the “the”), it was opened to the rest of the world.

“The” Facebook as it appeared in 2004. Credit: Jeremy Reimer

Facebook won the social networking wars by focusing on the rapid delivery of new features. The company’s slogan, “Move fast and break things,” encouraged this strategy. The most prominent feature, added in 2006, was the News Feed. It generated a list of posts, selected out of thousands of potential updates for each user based on who they followed and liked, and showed it on their front page. Combined with a technique called “infinite scrolling,” first developed for Microsoft’s image search (a precursor to Bing’s) by Hugh E. Williams in 2005, it changed the way the web worked forever.
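Conceptually, the News Feed’s selection step is a scoring pass over candidate posts. The sketch below uses made-up weights purely for illustration; Facebook’s real ranking (known in its early years as EdgeRank) weighed far more signals:

```python
def build_feed(posts, followed, liked_pages, feed_length=10):
    """Pick the top posts for one user from a pool of candidates.

    Each post is a dict like {"author": ..., "age_hours": ...}.
    The weights below are invented for illustration only.
    """
    def score(post):
        s = 0.0
        if post["author"] in followed:
            s += 2.0  # posts from people you follow matter most
        if post["author"] in liked_pages:
            s += 1.0  # pages you liked come next
        s /= 1.0 + post["age_hours"]  # newer posts decay less
        return s

    ranked = sorted(posts, key=score, reverse=True)
    return ranked[:feed_length]
```

The key shift was that the user no longer chose what to read; a scoring function did, and whatever raised a post’s score (including paid promotion) raised its visibility.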

The algorithmically generated News Feed created new opportunities for Facebook to make money. For example, businesses could pay to boost posts, which made them appear in news feeds more often. This blurred the line between posts and ads.

Facebook was also successful in identifying up-and-coming social media sites and buying them out before they could pose a threat. This was made easier thanks to Onavo, a VPN app that monitored its users’ activities and resold the data. Facebook acquired Onavo in 2013 and shut it down in 2019 amid continued controversy over its use of private data.

Social media transformed the Internet, drawing in millions of new users and starting a consolidation of website-visiting habits that continues to this day. But something else was about to happen that would shake the Internet to its core.

Don’t you people have phones?

For years, power users had experimented with getting the Internet on their handheld devices. IBM’s Simon phone, which came out in 1994, had both phone and PDA features. It could send and receive email. The Nokia 9000 Communicator, released in 1996, even had a primitive text-based web browser.

Later phones such as the BlackBerry 850 (1999), the Nokia 9210 (2001), and the Palm Treo (2002) added keyboards, color screens, and faster processors. In 1999, the Wireless Application Protocol (WAP) was released, which allowed mobile phones to receive and display simplified, phone-friendly pages written in WML instead of the standard HTML markup language.

Browsing the web on phones was possible before modern smartphones, but it wasn’t easy. Credit: James Cridland (Flickr)

But despite their popularity with business users, these phones never broke into the mainstream. That all changed in 2007, when Steve Jobs got on stage and announced the iPhone. Now, every webpage could be viewed natively in the phone’s browser, and zooming into a section was as easy as pinching or double-tapping. The one exception was Flash, but the new HTML5 standard promised to standardize advanced web features like animation and video playback.

Google quickly changed its Android prototype from a Blackberry clone to something more closely resembling the iPhone. Android’s open licensing structure allowed companies around the world to produce inexpensive smartphones. Even mid-range phones were still much cheaper than computers. This technology allowed, for the first time, the entire world to become connected through the Internet.

The exploding market of phone users also propelled the massive growth of social media companies like Facebook and Twitter. It was a lot easier now to snap a picture of a live event with your phone and post it instantly to the world. Optimists pointed to the remarkable events of the Arab Spring protests as proof that the Internet could help spread democracy and freedom. But governments around the world were just as eager to use these new tools, except their goals leaned more toward control and crushing dissent.

The backlash

Technology has always been a double-edged sword. But in recent years, public opinion about the Internet has shifted from being mostly positive to increasingly negative.

The combination of mobile phones, social media algorithms, and infinite scrolling led to the phenomenon of “doomscrolling,” where people spend hours every day reading “news” that is tuned for maximum engagement by provoking as many people as possible. The emotional toll caused by doomscrolling has been shown to cause real harm. Even more serious is the fallout from misinformation and hate speech, like the genocide in Myanmar that an Amnesty International report claims was amplified on Facebook.

As companies like Google, Amazon, and Facebook grew into near-monopolies, they inevitably lost sight of their original mission in favor of a never-ending quest for more money. The process, dubbed enshittification by Cory Doctorow, shifts the focus first from users to advertisers and then to shareholders.

Chasing these profits has fueled the rise of generative AI, which threatens to turn the entire Internet into a sea of soulless gray soup. Google now places AI summaries at the top of web searches, which reduces traffic to websites and often provides dangerous misinformation. But even if you ignore the AI summaries, the sites you find underneath may also be suspect. Once-trusted websites have laid off staff and replaced them with AI, generating an endless series of new articles written by nobody. A web where AIs comment on AI-generated Facebook posts that link to AI-generated articles, which are then AI-summarized by Google, seems inhuman and pointless.

A search for cute baby peacocks on Bing. Some of them are real, and some aren’t. Credit: Jeremy Reimer

Where from here?

The history of the Internet can be roughly divided into three phases. The first, from 1969 to 1990, was all about the inventors: people like Vint Cerf, Steve Crocker, and Robert Taylor. These folks were part of a small group of computer scientists who figured out how to get different types of computers to talk to each other and to other networks.

The next phase, from 1991 to 1999, was a whirlwind that was fueled by entrepreneurs, people like Jerry Yang and Jeff Bezos. They latched on to Tim Berners-Lee’s invention of the World Wide Web and created companies that lived entirely in this new digital landscape. This set off a manic phase of exponential growth and hype, which peaked in early 2001 and crashed a few months later.

The final phase, from 2000 through today, has primarily been about the users. New companies like Google and Facebook may have reaped the greatest financial rewards during this time, but none of their successes would have been possible without the contributions of ordinary people like you and me. Every time we typed something into a text box and hit the “Submit” button, we created a tiny piece of a giant web of content. Even the generative AIs that pretend to make new things today are merely regurgitating words, phrases, and pictures that were created and shared by people.

There is a growing sense of nostalgia today for the old Internet, when it felt like a place, and the joy of discovery was around every corner. “Using the old Internet felt like digging for treasure,” said YouTube commenter MySoftCrow. “Using the current Internet feels like getting buried alive.”

Ars community member MichaelHurd added his own thoughts: “I feel the same way. It feels to me like the core problem with the modern Internet is that websites want you to stay on them for as long as possible, but the World Wide Web is at its best when sites connect to each other and encourage people to move between them. That’s what hyperlinks are for!”

Despite all the doom surrounding the modern Internet, it remains largely open. Anyone can pay about $5 per month for a shared Linux server and create a personal website containing anything they can think of, using any software they like, even their own. And for the most part, anyone, on any device, anywhere in the world, can access that website.

Ultimately, the fate of the Internet depends on the actions of every one of us. That’s why I’m leaving the final words in this series of articles to you. What would your dream Internet of the future look and feel like? The comments section is open.

I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.

A history of the Internet, part 3: The rise of the user Read More »

steam-will-wind-down-support-for-32-bit-windows-as-that-version-of-windows-fades

Steam will wind down support for 32-bit Windows as that version of Windows fades

Though the 32-bit versions of Windows were widely used from the mid-90s all the way through to the early 2010s, this change is coming so late that it should only actually affect a statistically insignificant number of Steam users. Valve already pulled Steam support for all versions of Windows 7 and Windows 8 in January 2024, and 2021’s Windows 11 was the first in decades not to ship a 32-bit version. That leaves only the 32-bit version of Windows 10, which is old enough that it will stop getting security updates in either October 2025 or October 2026, depending on how you count it.

According to Steam Hardware Survey data from August, usage of the 32-bit version of Windows 10 (and any other 32-bit version of Windows) is so small that it’s lumped in with “other” on the page that tracks Windows version usage. All “other” versions of Windows combined represent roughly 0.05 percent of all Steam users. The 64-bit version of Windows 10 still runs on just over a third of all Steam-using Windows PCs, while the 64-bit version of Windows 11 accounts for just under two-thirds.

The change to the Steam client shouldn’t have any effect on game availability or compatibility. Any older 32-bit games that you can currently run in 64-bit versions of Windows will continue to work fine because, unlike modern macOS versions, new 64-bit versions of Windows still maintain compatibility with most 32-bit apps.

Steam will wind down support for 32-bit Windows as that version of Windows fades Read More »

after-a-very-slow-start,-europe’s-reusable-rocket-program-shows-signs-of-life

After a very slow start, Europe’s reusable rocket program shows signs of life

No one could accuse the European Space Agency and its various contractors of moving swiftly when it comes to the development of reusable rockets. However, it appears that Europe is finally making some credible progress.

This week, the France-based ArianeGroup aerospace company announced that it had completed the integration of the Themis vehicle, a prototype rocket that will test various landing technologies, on a launch pad in Sweden. Low-altitude hop tests, a precursor to developing a first stage that can land vertically after an orbital launch, could start late this year or early next.

“This milestone marks the beginning of the ‘combined tests,’ during which the interface between Themis and the launch pad’s mechanical, electrical, and fluid systems will be thoroughly trialed, with the aim of completing a test under cryogenic conditions,” the company said.

Finally getting going

The advancement of the Themis program represents a concrete step forward for Europe, which has had a delayed and somewhat confusing response to the rise of reusable rockets a decade ago.

After several years of development and testing, including the Grasshopper program in Texas to demonstrate vertical landing, SpaceX landed its first orbital rocket in December 2015. Weeks earlier, Blue Origin landed the much smaller New Shepard vehicle after a suborbital hop. This put the industry on notice that first stage reuse was on the horizon.

At this point, the European Space Agency had already committed to a new medium-lift rocket, the Ariane 6, and locked in a traditional design that would not incorporate any elements of reuse. Most of its funding focused on developing the Ariane 6.

However, by the middle of 2017, the space agency began to initiate programs that would eventually lead to a reusable launch vehicle. They included:

After a very slow start, Europe’s reusable rocket program shows signs of life Read More »

microsoft-raises-xbox-console-prices-for-the-second-time-this-year

Microsoft raises Xbox console prices for the second time this year

Here we go again

Higher than usual inflation can help explain some of the nominal price increases for the oldest Xbox consoles affected by today’s price hikes. The $300 for an Xbox Series S at launch in November 2020 is worth roughly $375 in August 2025 dollars, for instance. And the $500 for an Xbox Series X in 2020 is now worth about $625.

But the particularly sharp price increases for more recent Xbox configurations can’t really use that inflation excuse. The disc-drive-free Xbox Series X Digital Edition and the 2TB “Galaxy Special Edition” are now a whopping 33 percent more expensive than they were at launch in October 2024. A year’s worth of inflation would account for only a small fraction of that.

Even accounting for inflation, though, the current spate of nominal console price increases goes against a near-universal, decades-long trend of game console prices dropping significantly in the years following their launch. Those days seem well and truly gone now, as console makers’ costs remain high thanks in part to current tariff uncertainty and in part to the wider slowdown of Moore’s Law.

We’ll see just how much the market can bear aging console hardware that increases in price over time rather than decreases. But until and unless consumers start balking, it looks like ever-increasing console prices are here to stay.

Microsoft raises Xbox console prices for the second time this year Read More »

rfk-jr.’s-anti-vaccine-panel-realizes-it-has-no-idea-what-it’s-doing,-skips-vote

RFK Jr.’s anti-vaccine panel realizes it has no idea what it’s doing, skips vote


With a lack of data and confusing language, the panel tabled the vote indefinitely.

Catherine Stein, far right, speaks during a meeting of the CDC’s Advisory Committee on Immunization Practices on September 18, 2025 in Chamblee, Georgia. Credit: Getty | Elijah Nouvelage

The second day of a two-day meeting of the Advisory Committee on Immunization Practices—a panel currently made up of federal vaccine advisors hand-selected by anti-vaccine activist Robert F. Kennedy, Jr.—is off to a dramatic start, with the advisors seemingly realizing they have no idea what they’re doing.

The inexperienced, questionably qualified group that has espoused anti-vaccine rhetoric started its second day of deliberations by reversing a vote taken the previous day on federal coverage for the measles, mumps, rubella, and varicella (MMRV) vaccine. Yesterday, the group voted to restrict access to MMRV, stripping recommendations for its use in children under age 4. While that decision was based on no new data, it passed with majority support of 8–3 (with one abstention). (For an explanation of that, see our coverage of yesterday’s part of the meeting here.)

But puzzlingly, they then voted to uphold access and coverage of MMRV vaccines for children under age 4 if they receive free vaccines through the federal Vaccines for Children program, which covers about half of American children, mostly low-income. The discrepancy projected the idea that the alleged safety concerns that led the panel to rescind the recommendation for MMRV generally somehow did not apply to low-income, vulnerable children. The vote also created significant confusion for VFC coverage, which typically aligns with recommendations made by the panel.

Today, Kennedy’s ACIP retook the vote, deciding 9–0 (with three abstentions) to align VFC coverage with their vote yesterday to strip the recommendation for MMRV in young children.

Hepatitis B vaccine newborn dose

Next, they moved to a vote they failed to take yesterday as scheduled—a vote to strip a recommendation for a dose of hepatitis B vaccine that is currently recommended to be given universally on the first day of a baby’s life. Instead, the proposed recommendation would be to wait at least a month before the first dose—opening a window for a highly infectious disease that leads to chronic liver disease and cancer—unless the baby’s mother tested positive for the virus.

While it initially seemed that the panel was poised to approve the change, cracks in the plan began to appear quickly this morning, as some members of the panel noted that the proposed recommendation made no sense and was based on zero data.

Joseph Hibbeln, a psychiatrist on the panel, raised the obvious concern yesterday, saying: “I’m unclear if we’ve been presented with any safety or data comparing before one month to after one month, and I’m wondering why one month was selected as our time point and if there are data to help to inform us if there’s greater risk of adverse effects before one month or after one month at all, let alone in negative mothers.”

There was no data comparing the risks and benefits of moving the first dose from the day of birth to any other time point, and no data suggesting that such a move would be more or less safe.

Adam Langer, Acting Principal Deputy Director of the CDC’s National Center for HIV, Viral Hepatitis, STD, and Tuberculosis Prevention, stressed in his presentation on the safety data yesterday that the vaccine is safe—there are no safety concerns for giving a dose at birth. Adverse side effects are rare, he said, and when they do occur, they’re mild. “The worst adverse event you could imagine, anaphylaxis, has been very rarely reported at only 1.1 cases per 1 million vaccine doses administered.”

Langer gave a clear explanation for why newborns are vaccinated at day one. Hepatitis B, which primarily affects the liver, spreads via bodily fluids and can live on surfaces for up to seven days. It spreads easily; only a microscopic amount of blood or fluid is enough to infect a child. For some, an infection can be short-lived, but for others it can become chronic, leading to liver disease, cirrhosis, liver transplant, and liver cancer. The younger someone is when infected, the greater the risk that the infection becomes chronic.

Benefits and harms

Newborns who get hepatitis B from their mothers at birth have a 90 percent chance of developing a chronic infection, and 25 percent of those children will die prematurely from the disease. Up to 16 percent of pregnant women in the US are not tested for hepatitis B during pregnancy. Newborns and babies can also get infected from other people in their family or household, given hepatitis B’s infectiousness. Prior to the universal birth dose recommendation, a study of US-born children born to immigrant mothers found that 7 percent to 11 percent of them had hepatitis B while their mothers were negative. This highlights that unvaccinated babies and children can pick up the infection from family or the community.

Part of the reason for this is the elusiveness of the disease. While about 2.4 million people in the US are infected with hepatitis B, about 50 percent of those infected do not know that they’re infected.

In 1991, ACIP began recommending universal hepatitis B vaccination at birth; acute hepatitis B cases then fell from around 18,000 to about 5,500 in 2005 and about 2,200 in 2023. Since 2018, ACIP has recommended universal hepatitis B vaccination for all newborns within 24 hours of birth.

In the discussion, panel members pushed back on the universal birth dose, arguing that if mothers tested negative, there was little to no risk—downplaying the risk of other family or community exposure and assuming that test coverage could increase to 100 percent. There was a lot of discussion of why some women aren’t tested and if doctors can just try to assess whether there’s a risk that a family member might have the infection—even if those family members don’t know themselves that they’re infected.

Data and trust

Langer acknowledged there might be ways to assess risk from at least the mother in the 24-hour window after birth—”or,” he suggested, “you cannot have to worry about all of those different things that could go wrong, and you could simply give the vaccine because there is no data available that says that there is any harm that would come to a newborn compared to a one-month-old infant [getting the vaccine.]”

He summed up the discussion succinctly: “The only thing that we’re discussing here is if there’s some benefit or removal of harm that comes from waiting a month. And I have not seen any data that says that there is any benefit to the infant of waiting a month, but there are a number of potential harms to the infant of waiting a month.”

Panel member Robert Malone, who has falsely claimed that COVID-19 vaccines cause a form of AIDS, explained that the proposed change for the hep B vaccination was not due to any safety concern or evidence-based reason, but about trust among parents who have been exposed to vaccine misinformation.

“The signal that is prompting this is not one of safety, it is one of trust,” Malone said yesterday. “It is one of parents uncomfortable with this medical procedure being performed at birth in a rather unilateral fashion without significant informed consent at a time in particular when there has been a loss of trust in the public health enterprise and vaccines in general.”

Dashed decisions

But the questions and uncertainties of the proposed recommendation and the data behind it dogged the committee again this morning.

When the voting language was put on a slide, it immediately drew criticism. The language was:

If a mother tests [hepatitis B]-negative:

  • The first dose of the Hepatitis B vaccine is not given until the child is at least one month old.
  • Infants may receive a dose of Hepatitis B vaccine before one month according to individual based decision-making. *

*Also referred to as shared clinical decision-making.

Hibbeln, the psychiatrist, again pushed back, this time noting that the language of the change is confusing. “You can’t say don’t give it and then give an opportunity to give it,” he said, arguing that shared clinical decision-making is, essentially, all or nothing.

Discussion quickly spiraled, with another member questioning whether there was any data presented at all on the proposed recommendation. There was a fast motion to table the vote indefinitely, and the motion to table passed in a speedy vote of 11–1, with the ACIP chair, Martin Kulldorff, being the only holdout.

For the rest of the day, the panel is discussing COVID-19 vaccines. Stay tuned.


Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.