
Doom: The Dark Ages review: Shields up!


Prepare to add a more defensive stance to the usual dodge-and-shoot gameplay loop.

There’s a reason that shield is so prominent in this image. Credit: Bethesda Game Studios

For decades now, you could count on there being a certain rhythm to a Doom game. From the ’90s originals to the series’ resurrection in recent years, the Doom games have always been about using constant, zippy motion to dodge through a sea of relatively slow-moving bullets, maintaining your distance while firing back at encroaching hordes of varied monsters. The specific guns and movement options you could call on might change from game to game, but the basic rhythm of that dodge-and-shoot gameplay never has.

Just a few minutes in, Doom: The Dark Ages throws out that traditional Doom rhythm almost completely. The introduction of a crucial shield adds a whole suite of new verbs to the Doom vocabulary; in addition to running, dodging, and shooting, you’ll now be blocking, parrying, and stunning enemies for counterattacks. In previous Doom games, standing still for any length of time often led to instant death. In The Dark Ages, standing your ground to absorb and/or deflect incoming enemy attacks is practically required at many points.

During a preview event earlier this year, the game’s developers likened this change to the difference between flying a fighter jet and piloting a tank. That’s a pretty apt metaphor, and it’s not exactly an unwelcome change for a series that might be in need of a shake-up. But it only works if you go in ready to play like a tank and not like the fighter jet that has been synonymous with Doom for decades.

Stand your ground

Don’t get me wrong, The Dark Ages still features its fair share of the Doom series’ standard position-based Boomer Shooter action. The game includes the usual stockpile of varied weapons—from short-range shotguns to long-range semi-automatics to high-damage explosives with dangerous blowback—and doles them out slowly enough that major new options are still being introduced well into the back half of the game.

But the shooting side has simplified a bit since Doom Eternal. Gone are the secondary weapon modes, grenades, chainsaws, and flamethrowers that made enemy encounters a complicated weapon and ammo juggling act. Gone too are the enemies that practically forced you to use a specific weapon to exploit their One True Weakness; I got by for most of The Dark Ages by leaning on my favored plasma rifle, with occasional switches to a charged steel ball-and-chain launcher for heavily armored enemies.

See green, get ready to parry… Credit: Bethesda Game Studios

In their place is the shield, which gives you ample (but not unlimited) ability to simply deflect enemy attacks damage-free. You can also throw the shield for a ranged attack that’s useful for blowing up frequent phalanxes of shielded enemies or freezing larger unarmored enemies in place for a safe, punishing barrage.

But the shield’s most important role comes when you stand face to face with a particularly punishing demon, waiting for a flash of green to appear on the screen. That color is your signal that the associated projectile or incoming melee attack can be parried by raising your shield just before it lands. A successful parry knocks the attack back entirely, returning projectiles to their source and/or briefly staggering the encroaching enemy itself.

A well-timed, powerful parry is often the only reasonable option for attacks that are otherwise too quick or overwhelming to dodge effectively. The overall effect ends up feeling a bit like Doom by way of Mike Tyson’s Punch-Out!! Instead of dancing around a sea of hazards and looking for an opening, you’ll often find yourself just standing still for a few seconds, waiting to knock back a flash of green so you can have the opportunity to unleash your own counterattack. Various shield sigils introduced late in the game encourage this kind of conservative turtling strategy even more by adding powerful bonus effects to each successful parry.

The window for executing a successful parry is pretty generous, and the dramatic temporal slowdown and sound effects make each one feel like an impactful moment. But they start to feel less impactful as the game goes on, and battles often devolve into vast seas of incoming green flashes. There were countless moments in my Dark Ages playthrough where I found myself more or less pinned down by a deluge of green attacks, frantically clicking the right mouse button four or five times in quick succession to parry off threats from a variety of angles.

In between all the parrying, you do get to shoot stuff. Credit: Bethesda Game Studios

In between these parries, the game seems to go out of its way to encourage a more fast-paced, aggressive style of play. A targeted shield slam move lets you leap quickly across great distances to get up close and personal with enemy demons, at which point you can use one of a variety of melee weapons for some extremely satisfying, crunchy close-quarters beatdowns (though these melee attacks are limited by their own slowly recharging ammo system).

You might absorb some damage in the process of going in for these aggressive close-up attacks, but don’t worry—defeated enemies tend to drop heaps of health, armor, and ammo, depending on the specific way they were killed. I’d often find myself dancing on the edge of critically low health after an especially aggressive move, only to recover just in time by finishing off a major demon. Doubling back for a shield slam on a far-off “fodder” enemy can also be an effective strategy for quickly escaping a sticky situation and grabbing some health in the process.

The back-and-forth tug between these aggressive encroachments and the more conservative parry-based turtling makes for some exciting moment-to-moment gameplay, with enough variety in the enemy mix to never feel too stale. Effectively managing your movement and attack options in any given firefight feels complex enough to be engaging without ever tipping into overwhelming, as well.

Even so, working through Doom: The Dark Ages, there was a part of me that missed the more free-form, three-dimensional acrobatics of Doom Eternal’s double jumps and air dashes. Compared to the almost balletic, improvisational movement of that game, The Dark Ages too often devolved into something akin to a simple rhythm game: wait for each green “note” to reach the bottom of the screen, then hit the button to activate your counterattack.

Stories and secrets

In between chapters, Doom: The Dark Ages breaks things up with some extremely ponderous cutscenes featuring a number of religious and political factions, both demon and human, jockeying for position and control in an interdimensional war. This mostly involves a lot of tedious standing around discussing the Heart of Argent (a McGuffin that’s supposed to grant the bearer the power of a god) and debating how, where, and when to deploy the Slayer (that’s you) as a weapon.

I watched these cutscenes out of a sense of professional obligation, but I tuned out at points and thus had trouble following the internecine intrigue that seemed to develop between factions whose motivations and backgrounds never seemed to be sufficiently explained or delineated. Most players who aren’t reviewing the game should feel comfortable skipping these scenes and getting back to the action as quickly as possible.

I hope you like red and black, because there’s a lot of it here… Credit: Bethesda Game Studios

The levels themselves are all dripping with the usual mix of Hellish symbology and red-and-black gore, with mood lighting so dark that it can be hard to see a wall right in front of your face. Design-wise, the chapters seem to alternate between Doom’s usual system of twisty enemy-filled corridors and more wide-open outdoor levels. The latter are punctuated by a number of large, open areas where huge groups of demons simply teleport in as soon as you set foot in the pre-set engagement zone. These battle arenas might have a few inclines or spires to mix things up, but for the most part, they all feel depressingly similar and bland after a while. If you’ve stood your ground in one canyon, you’ve stood your ground in them all.

Each level is also absolutely crawling with secret collectibles hidden in various nooks and crannies, which often tease you with a glimpse through a hole in some impassable wall or rock formation. Studying the map screen for a minute more often than not reveals the general double-back path you’ll need to follow to find the hidden entrance behind these walls, though pinning down the precise route can involve solving some simple puzzles or scouring your surroundings for one particularly well-hidden bit that will let you advance.

After all the enemies were cleared in one particularly vast open level, I spent a good half hour picking through every corner of the map until I tracked down the hidden pathways leading to every stray piece of gold and collectible trinket. It was fine as a change of pace—and lucrative in terms of upgrading my weapons and shield for later fights—but it felt kind of lonely and quiet compared to the more action-packed battles.

Don’t unleash the dragon

Speaking of changes of pace, by far the worst parts of Doom: The Dark Ages come when the game insists on interrupting the usual parry-and-shoot gameplay to put you in some sort of vehicle. This includes multiple sections where your quick-moving hero is replaced with a lumbering 30-foot-tall mech, which slouches pitifully down straight corridors toward encounters with equally large demons.

These mech battles play out as the world’s dullest fistfights, where you simply wail on the attack buttons while occasionally tapping the dodge button to step away from some incredibly slow and telegraphed counterattacks. I found myself counting the minutes until these extremely boring interludes were over.

Believe me, this is less exciting than it looks. Credit: Bethesda Game Studios

The sections where your Slayer rides a dragon (for some reason) are ever-so-slightly more interesting, if only because of the intuitive, fast-paced flight controls. Unfortunately, these sections don’t give you any thrilling dogfights or complex obstacle courses to take advantage of those controls, topping out instead at a few simplistic chase sequences where you take literally no incoming fire.

Between those semi-engaging chase sequences is a seemingly endless parade of showdowns with stationary turrets. These require your dragon to hover frustratingly still in mid-air, waiting patiently for an incoming energy attack to dodge, which in turn somehow powers up your gun enough to take out the turret in a counterattack. How anyone thought that this was the most engaging use of a seemingly competent third-person flight-combat system is utterly baffling.

Those too-frequent interludes aside, Doom: The Dark Ages is a more-than-suitable attempt to shake up the Doom formula with a completely new style of gameplay. While the more conservative, parry-based shield system takes some getting used to—and may require adjusting some of your long-standing Doom muscle memory in the process—it’s ultimately a welcome and engaging way to add new types of interaction to the long-running franchise.

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.


SpaceX pushed “sniper” theory with the feds far more than is publicly known


“It came out of nowhere, and it was really violent.”

The Amos 6 satellite is lost atop a Falcon 9 rocket. Credit: USLaunchReport

The rocket was there. And then it decidedly was not.

Shortly after sunrise on a late summer morning nearly nine years ago, engineers at SpaceX’s sole operational launch pad neared the end of a static fire test. These were still early days for the company’s operation of a Falcon 9 rocket that used super-chilled liquid propellants, and the team pressed to see how quickly it could complete fueling: The liquid oxygen and kerosene warmed quickly in Florida’s sultry air, and cold propellants were essential to maximizing the rocket’s performance.

On this morning, September 1, 2016, everything proceeded more or less nominally up until eight minutes before the ignition of the rocket’s nine Merlin engines. It was a stable point in the countdown, so no one expected what happened next.

“I saw the first explosion,” John Muratore, launch director for the mission, told me. “It came out of nowhere, and it was really violent. I swear, that explosion must have taken an hour. It felt like an hour. But it was only a few seconds. The second stage exploded in this huge ball of fire, and then the payload kind of teetered on top of the transporter erector. And then it took a swan dive off the top rails, dove down, and hit the ground. And then it exploded.”

The dramatic loss of the Falcon 9 rocket and its Amos-6 satellite, captured on video by a commercial photographer, came at a pivotal moment for SpaceX and the broader commercial space industry. It was SpaceX’s second rocket failure in a little more than a year, and it occurred as NASA was betting heavily on the company to carry its astronauts to orbit. SpaceX was not the behemoth it is today, a company valued at $350 billion. It remained vulnerable to the vicissitudes of the launch industry. This violent failure shook everyone, from the engineers in Florida to satellite launch customers to the suits at NASA headquarters in Washington, DC.

As part of my book on the Falcon 9 and Dragon years at SpaceX, Reentry, I reported deeply on the loss of the Amos-6 mission. In the weeks afterward, the greatest mystery was what had precipitated the accident. It was understood that a pressurized helium tank inside the upper stage had ruptured. But why? No major parts on the rocket were moving at the time of the failure. It was, for all intents and purposes, akin to an automobile idling in a driveway with half a tank of gasoline. And then it exploded.

This failure gave rise to one of the oddest—but also strangely compelling—stories of the 2010s in spaceflight. And we’re still learning new things today.

The “sniper” theory

The lack of a concrete explanation for the failure led SpaceX engineers to pursue hundreds of theories. One was the possibility that an outside “sniper” had shot the rocket. This theory appealed to SpaceX founder Elon Musk, who was asleep at his home in California when the rocket exploded. Within hours of hearing about the failure, Musk gravitated toward the simple answer of a projectile being shot through the rocket.

This is not as crazy as it sounds, and other engineers at SpaceX aside from Musk entertained the possibility, as some circumstantial evidence to support the notion of an outside actor existed. Most notably, the first rupture in the rocket occurred about 200 feet above the ground, on the side of the vehicle facing the southwest. In this direction, about one mile away, lay a building leased by SpaceX’s main competitor in launch, United Launch Alliance. A separate video indicated a flash on the roof of this building, now known as the Spaceflight Processing Operations Center. The timing of this flash matched the interval it would take a projectile to travel from the building to the rocket.

A sniper on the roof of a competitor’s building—forget The Right Stuff, this was the stuff of a Mission: Impossible or James Bond movie.

At Musk’s direction, SpaceX worked this theory both internally and externally. Within the company, engineers and technicians actually took pressurized tanks that stored helium—one of these had burst, leading to the explosion—and shot at them in Texas to determine whether they would explode and what the result looked like. Externally, they sent the site director for their Florida operations, Ricky Lim, to inquire whether he might visit the roof of the United Launch Alliance building.

SpaceX pursued the sniper theory for more than a month. A few SpaceX employees told me that they did not stop this line of inquiry until the Federal Aviation Administration sent the company a letter definitively saying that there was no gunman involved. It would be interesting to see this letter, so I submitted a Freedom of Information Act request to the FAA in the spring of 2023. Because the federal FOIA process moves slowly, I did not expect to receive a response in time for the book. But it was worth a try anyway.

No reply came in 2023 or early 2024, when the final version of my book was due to my editor. Reentry was published last September, and still nothing. However, last week, to my great surprise and delight, I got a response from the FAA. It was the very letter I requested, sent from the FAA to Tim Hughes, the general counsel of SpaceX, on October 13, 2016. And yes, the letter says there was no gunman involved.

However, there were other things I did not know—namely, that the FBI had also investigated the incident.

The ULA rivalry

One of the most compelling elements of this story is that it involves SpaceX’s heated rival, United Launch Alliance. For a long time, ULA had the upper hand, but in recent years, the rivalry has taken a dramatic turn. Now we know that David would grow up and slay Goliath: Between the final rocket ULA launched last year (the Vulcan test flight on October 4) and the first rocket the company launched this year (Atlas V, April 28), SpaceX launched 90 rockets.

Ninety.

But it was a different story in the summer of 2016, in the months leading up to the Amos-6 failure. Back then, ULA was launching about 15 rockets a year, compared to SpaceX’s five. And ULA was launching all of the important science missions for NASA and the critical spy satellites for the US military. They were the big dog, SpaceX the pup.

In the early days of the Falcon 9 rocket, some ULA employees would drive to where SpaceX was working on the first booster and jeer at SpaceX’s efforts. And the rivalry played out not just on the launch pad but in courtrooms and on Capitol Hill. After ULA won an $11 billion block buy contract from the US Air Force to launch high-value military payloads into the early 2020s, Musk sued in April 2014. He alleged that the contract had been awarded without a fair competition and said the Falcon 9 rocket could launch the missions at a substantially lower price. Taxpayers, he argued, were being taken for a ride.

Eventually, SpaceX and the Air Force resolved their claims. The Air Force agreed to open some of its previously awarded national security missions to competitive bids. Over time, SpaceX has overtaken ULA even in this arena. During the most recent round of awards, SpaceX won 60 percent of the contracts compared to ULA’s 40 percent.

So when SpaceX raised the possibility of a ULA sniper, it came at an incendiary moment in the rivalry, when SpaceX was finally putting forth a very serious challenge to ULA’s dominance and monopoly.

It is no surprise, therefore, that ULA told SpaceX’s Ricky Lim to get lost when he wanted to see the roof of their building in Florida.

“Hair-on-fire stuff”

NASA officials were also deeply concerned by the loss of the Falcon 9 rocket in September 2016.

The space agency spent much of the 2010s working with SpaceX and Boeing to develop, test, and fly spacecraft that could carry humans into space. These were difficult years for the agency, which had to rely on Russia to get its astronauts into orbit. NASA also had a challenging time balancing costs with astronaut safety. Then rockets started blowing up.

Consider this sequence from mid-2015 to mid-2016. In June 2015, the second stage of a Falcon 9 rocket carrying a cargo version of the Dragon spacecraft into orbit exploded. Less than two weeks later, NASA named four astronauts to its “commercial crew” cadre from which the initial pilots of Dragon and Starliner spacecraft would be selected. Finally, a little more than a year after this, a second Falcon 9 rocket upper stage detonated.

Video of CRS-7 launch and failure.

Even as it was losing Falcon 9 rockets, SpaceX revealed that it intended to upend NASA’s long-standing practice of fueling a rocket and then, when the vehicle reached a stable condition, putting crew on board. Rather, SpaceX said it would put the astronauts on board before fueling. This process became known as “load and go.”

NASA’s safety community went nuts.

“When SpaceX came to us and said we want to load the crew first and then the propellant, mushroom clouds went off in our safety community,” Phil McAlister, the head of NASA’s commercial programs, told me for Reentry. “I mean, hair-on-fire stuff. It was just conventional wisdom that you load the propellant first and get it thermally stable. Fueling is a very dynamic operation. The vehicle is popping and hissing. The safety community was adamantly against this.”

Amos-6 compounded these concerns. That’s because the rocket was not shot by a sniper. After months of painful investigation and analysis, engineers determined the rocket was lost due to the propellant-loading process. In their goal of rapidly fueling the Falcon 9 rocket, the SpaceX teams had filled the pressurized helium tanks too quickly, heating the aluminum liner and causing it to buckle. In their haste to load super-chilled propellant onto the Falcon 9, SpaceX had found its speed limit.

At NASA, it was not difficult to visualize astronauts in a Dragon capsule sitting atop an exploding rocket during propellant loading rather than a commercial satellite.

Enter the FBI

We should stop and appreciate the crucible that SpaceX engineers and technicians endured in the fall of 2016. They were simultaneously attempting to tease out the physics of a fiendishly complex failure; prove to NASA their exploding rocket was safe; convince safety officials that even though they had just blown up their rocket by fueling it too quickly, load-and-go was feasible for astronaut missions; increase the cadence of Falcon 9 missions to catch and surpass ULA; and, oh yes, gently explain to the boss that a sniper had not shot their rocket.

So there had to be some relief when, on October 13, Hughes received that letter from Dr. Michael C. Romanowski, director of Commercial Space Integration at the FAA.

According to this letter (see a copy here), three weeks after the launch pad explosion, SpaceX submitted “video and audio” along with its analysis of the failure to the FAA. “SpaceX suggested that in the company’s view, this information and data could be indicative of sabotage or criminal activity associated with the on-pad explosion of SpaceX’s Falcon 9,” the letter states.

This is notable because it suggests that Musk directed SpaceX to elevate the “sniper” theory to the point that the FAA should take it seriously. But there was more. According to the letter, SpaceX reported the same data and analysis to the Federal Bureau of Investigation in Florida.

After this, the Tampa Field Office of the FBI and its Criminal Investigative Division in Washington, DC, looked into the matter. And what did they find? Nothing, apparently.

“The FBI has informed us that based upon a thorough and coordinated review by the appropriate Federal criminal and security investigative authorities, there were no indications to suggest that sabotage or any other criminal activity played a role in the September 1 Falcon 9 explosion,” Romanowski wrote. “As a result, the FAA considers this matter closed.”

The failure of the Amos-6 mission would turn out to be a low point for SpaceX. For a few weeks, there were non-trivial questions about the company’s financial viability. But soon, SpaceX would come roaring back. In 2017, the Falcon 9 rocket launched a record 18 times, surpassing ULA for the first time. The gap would only widen. Last year, SpaceX launched 137 rockets to ULA’s five.

With Amos-6, therefore, SpaceX lost the battle. But it would eventually win the war—without anyone firing a shot.

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.


Redditor accidentally reinvents discarded ’90s tool to escape today’s age gates


The ’90s called. They want their flawed age verification methods back.

Credit: Aurich Lawson | Getty Images

Back in the mid-1990s, when The Net was among the top box office draws and Americans were just starting to flock online in droves, kids had to swipe their parents’ credit cards or find a fraudulent number online to access adult content on the web. But today’s kids—even in states with the strictest age verification laws—know they can just use Google.

Last month, a study analyzing the relative popularity of Google search terms found that age verification laws shift users’ search behavior. It’s impossible to tell if the shift represents young users attempting to circumvent the child-focused law or adult users who aren’t the actual target of the laws. But overall, enforcement causes nearly half of users to stop searching for popular adult sites complying with laws and instead search for a noncompliant rival (48 percent) or virtual private network (VPN) services (34 percent), which are used to mask a location and circumvent age checks on preferred sites, the study found.

“Individuals adapt primarily by moving to content providers that do not require age verification,” the study concluded.

Although the Google Trends data prevented researchers from analyzing trends by particular age groups, the findings help confirm critics’ fears that age verification laws “may be ineffective, potentially compromise user privacy, and could drive users toward less regulated, potentially more dangerous platforms,” the study said.

The authors warn that lawmakers are not relying enough on evidence-backed policy evaluations to truly understand the consequences of circumvention strategies before passing laws. Internet law expert Eric Goldman recently warned in an analysis of age-estimation tech available today that this situation creates a world in which some kids are likely to be harmed by the laws designed to protect them.

Goldman told Ars that all of the age check methods carry the same privacy and security flaws, concluding that technology alone can’t solve this age-old societal problem. And logic-defying laws that push for them could end up “dramatically” reshaping the Internet, he warned.

Zeve Sanderson, a co-author of the Google Trends study, told Ars that “if you’re a policymaker, in addition to being potentially nervous about the more dangerous content, it’s also about just benefiting a noncompliant firm.”

“You don’t want to create a regulatory environment where noncompliance is incentivized or they benefit in some way,” Sanderson said.

Sanderson’s study pointed out that search data is only part of the picture. Some users may be using VPNs and accessing adult sites through direct URLs rather than through search. Others may rely on social media to find adult content, a 2025 conference paper noted, “easily” bypassing age checks on the largest platforms. VPNs remain the most popular circumvention method, a 2024 article in the International Journal of Law, Ethics, and Technology confirmed, “and yet they tend to be ignored or overlooked by statutes despite their popularity.”

While kids are ducking age gates and likely putting their sensitive data at greater risk, adult backlash may be peaking over the red wave of age-gating laws already blocking adults from visiting popular porn sites in several states.

Some states controversially began requiring ID checks to access adult content, prompting Pornhub owner Aylo to swiftly block access to its sites in those states. Pornhub instead advocates for device-based age verification, which it claims is a safer choice.

Aylo’s campaign has seemingly won over some states that either explicitly recommend device-based age checks or allow platforms to adopt whatever age check method they deem “reasonable.” Other methods could include app store-based age checks, algorithmic age estimation (based on a user’s web activity), face scans, or even tools that guess users’ ages based on hand movements.

On Reddit, adults have spent the past year debating the least intrusive age verification methods, as it appears inevitable that adult content will stay locked down, and they dread a future where more and more adult sites might ask for IDs. Additionally, critics have warned that showing an ID magnifies the risk of users publicly exposing their sexual preferences if a data breach or leak occurs.

To avoid that fate, at least one Redditor has attempted to reinvent the earliest age verification method, promoting a resurgence of credit card-based age checks that society discarded as unconstitutional in the early 2000s.

Under those systems, an entire industry of age verification companies emerged, selling passcodes to access adult sites for a supposedly nominal fee. The logic was simple: Only adults could buy credit cards, so only adults could buy passcodes with credit cards.

If “a person buys, for a nominal fee, a randomly generated passcode not connected to them in any way” to access adult sites, one Redditor suggested about three months ago, “there won’t be any way to tie the individual to that passcode.”

“This could satisfy the requirement to keep stuff out of minors’ hands,” the Redditor wrote in a thread asking how any site featuring sexual imagery could hypothetically comply with US laws. “Maybe?”

Several users rushed to educate the Redditor about the history of age checks. Those grasping for purely technology-based solutions today could be propping up the next industry flourishing from flawed laws, they said.

And, of course, since ’90s kids easily ducked those age gates, too, history shows why investing millions to build the latest and greatest age verification systems probably remains a fool’s errand after all these years.

The cringey early history of age checks

The earliest age verification systems were born out of Congress’s “first attempt to outlaw pornography online,” the LA Times reported. That attempt culminated in the Communications Decency Act of 1996.

Although the law was largely overturned a year later, the million-dollar age verification industry was already entrenched, partly due to its intriguing business model. These companies didn’t charge adult sites any fee to add age check systems—which required little technical expertise to implement—and instead shared a big chunk of their revenue with porn sites that opted in. Some sites got 50 percent of revenues, estimated in the millions, simply for adding the functionality.

The age check business was apparently so lucrative that in 2000, one adult site, which was sued for distributing pornographic images of children, pushed fans to buy subscriptions to its preferred service as a way of helping to fund its defense, Wired reported. “Please buy an Adult Check ID, and show your support to fight this injustice!” the site urged users. (The age check service promptly denied any association with the site.)

In a sense, the age check industry incentivized adult sites’ growth, an American Civil Liberties Union attorney told the LA Times in 1999. In turn, that fueled further growth in the age verification industry.

Some services made their link to adult sites obvious, like Porno Press, which charged a one-time fee of $9.95 to access affiliated adult sites, a Congressional filing noted. But many others tried to mask the link, opting for names like PayCom Billing Services, Inc. or CCBill, as Forbes reported, perhaps enticing more customers by drawing less attention on a credit card statement. Other firms had names like Adult Check, Mancheck, and Adult Sights, Wired reported.

Of these firms, the biggest and most successful was Adult Check. At its peak popularity in 2001, the service boasted 4 million customers willing to pay “for the privilege of ogling 400,000 sex sites,” Forbes reported.

At the head of the company was Laith P. Alsarraf, the CEO of the Adult Check service provider Cybernet Ventures.

Alsarraf testified to Congress several times, becoming a go-to expert witness for lawmakers behind the 1998 Child Online Protection Act (COPA). Like the version of the CDA that prompted it, this act was ultimately deemed unconstitutional. And some judges and top law enforcement officers defended Alsarraf’s business model with Adult Check in court—insisting that it didn’t impact adult speech and “at most” posed a “modest burden” that was “outweighed by the government’s compelling interest in shielding minors” from adult content.

But his apparent conflicts of interest also drew criticism. One judge warned in 1999 that “perhaps we do the minors of this country harm if First Amendment protections, which they will with age inherit fully, are chipped away in the name of their protection,” the American Civil Liberties Union (ACLU) noted.

Summing up the seeming conflict, Ann Beeson, an ACLU lawyer, told the LA Times, “the government wants to shut down porn on the Net. And yet their main witness is this guy who makes his money urging more and more people to access porn on the Net.”

’90s kids dodged Adult Check age gates

Adult Check’s subscription costs varied, but the service predictably got more expensive as its popularity spiked. In 1999, customers could snag a “lifetime membership” for $76.95 or else fork over $30 every two years or $20 annually, the LA Times reported. Those were good deals compared to the significantly higher costs documented in the 2001 Forbes report, which noted a three-month package was available for $20, or users could pay $20 monthly to access supposedly premium content.

Among Adult Check’s customers were apparently some savvy kids who snuck through the cracks in the system. In various threads debating today’s laws, several Redditors have claimed that they used Adult Check as minors in the ’90s, either admitting to stealing a parent’s credit card or sharing age-authenticated passcodes with friends.

“Adult Check? I remember signing up for that in the mid-late 90s,” one commenter wrote in a thread asking if anyone would ever show ID to access porn. “Possibly a minor friend of mine paid for half the fee so he could use it too.”

“Those years were a strange time,” the commenter continued. “We’d go see tech-suspense-horror-thrillers like The Net and Disclosure where the protagonist has to fight to reclaim their lives from cyberantagonists, only to come home to send our personal information along with a credit card payment so we could look at porn.”

“LOL. I remember paying for the lifetime package, thinking I’d use it for decades,” another commenter responded. “Doh…”

Adult Check thrived even without age check laws

Sanderson’s study noted that today, minors’ “first exposure [to adult content] typically occurs between ages 11–13,” which is “substantially earlier than pre-Internet estimates.” Kids seeking out adult content may be in a period of heightened risk-taking or lack self-control, while others may be exposed without ever seeking it out. Some studies suggest that kids who are more likely to seek out adult content could struggle with lower self-esteem, emotional problems, body image concerns, or depressive symptoms. These potential negative associations with adolescent exposure to porn have long been the basis for lawmakers’ fight to keep the content away from kids—and even the biggest publishers today, like Pornhub, agree that it’s a worthy goal.

After parents got wise to ’90s kids dodging age gates, pressure predictably mounted on Adult Check to solve the problem, despite Adult Check consistently admitting that its system wasn’t foolproof. Alsarraf claimed that Adult Check developed “proprietary” technology to detect when kids were using credit cards or when multiple kids were attempting to use the same passcode at the same time from different IP addresses. He also claimed that Adult Check could detect stolen credit cards, bogus card numbers, card numbers “posted on the Internet,” and other fraud.

Meanwhile, the LA Times noted, Cybernet Ventures pulled in an estimated $50 million in 1999, ensuring that the CEO could splurge on a $690,000 house in Pasadena and a $100,000 Hummer. Although Adult Check was believed to be his most profitable venture at that time, Alsarraf told the LA Times that he wasn’t really invested in COPA passing.

“I know Adult Check will flourish,” Alsarraf said, “with or without the law.”

And he was apparently right. By 2001, subscriptions banked an estimated $320 million.

After the CDA and COPA were blocked, “many website owners continue to use Adult Check as a responsible approach to content accessibility,” Alsarraf testified.

While adult sites were likely just in it for the paychecks—which reportedly were dependably delivered—he positioned this ongoing growth as fueled by sites voluntarily turning to Adult Check to protect kids and free speech. “Adult Check allows a free flow of ideas and constitutionally protected speech to course through the Internet without censorship and unreasonable intrusion,” Alsarraf said.

“The Adult Check system is the least restrictive, least intrusive method of restricting access to content that requires minimal cost, and no parental technical expertise and intervention: It does not judge content, does not inhibit free speech, and it does not prevent access to any ideas, word, thoughts, or expressions,” Alsarraf testified.

Britney Spears aided Adult Check’s downfall

Adult Check’s downfall ultimately came in part thanks to Britney Spears, Wired reported in 2002. Spears went from Mickey Mouse Club child star to the “Princess of Pop” at 16 years old with her hit “Baby One More Time” in 1999, the same year that Adult Check rose to prominence.

Today, Spears is well-known for her activism, but in the late 1990s and early 2000s, she was one of the earliest victims of fake online porn.

Spears submitted documents in a lawsuit raised by the publisher of a porn magazine called Perfect 10. The publisher accused Adult Check of enabling the infringement of its content featured on the age check provider’s partner sites, and Spears’ documents helped prove that Adult Check was also linking to “non-existent nude photos,” allegedly in violation of unfair competition laws. The case was an early test of online liability, and Adult Check seemingly learned the hard way that the courts weren’t on its side.

That suit prompted an injunction blocking Adult Check from partnering with sites promoting supposedly illicit photos of “models and celebrities,” which the company said was no big deal because such content made up only about 6 percent of its business.

However, after losing the lawsuit in 2004, Adult Check’s reputation took a hit, and it fell out of the pop lexicon. Although Cybernet Ventures continued to exist, Adult Check screening was dropped from sites, as it was no longer considered the gold standard in age verification. Perhaps more importantly, it was no longer required by law.

But although millions used Adult Check for years, not everybody in the ’90s bought into the company’s claims that it was protecting kids from porn. Some critics said it only provided a veneer of online safety without meaningfully impacting kids. Most of the country—more than 250 million US residents—never subscribed.

“I never used Adult Check,” one Redditor said in a thread pondering whether age gate laws might increase the risks of government surveillance. “My recollection was that it was an untrustworthy scam and unneeded barrier for the theater of legitimacy.”

Alsarraf keeps a lower profile these days and did not respond to Ars’ request to comment.

The rise and fall of Adult Check may have prevented more legally viable age verification systems from gaining traction. The ACLU argued that its popularity trampled the momentum of the “least restrictive” method for age checks available in the ’90s, a system called the Platform for Internet Content Selection (PICS).

Based on rating and filtering technology, PICS allowed content providers or third-party interest groups to create private rating systems so that “individual users can then choose the rating system that best reflects their own values, and any material that offends them will be blocked from their homes.”

However, like all age check systems, PICS was also criticized as being imperfect. Legal scholar Lawrence Lessig called it “the devil” because “it allows censorship at any point on the chain of distribution” of online content.

Although the age verification technology has changed, today’s lawmakers are stuck in the same debate decades later, with no perfect solutions in sight.

SCOTUS to rule on constitutionality of age gate laws

This summer, the Supreme Court will decide whether a Texas law blocking minors’ access to porn is constitutional. The decision could either stunt the momentum or strengthen the backbone of nearly 20 laws in red states across the country seeking to age-gate the Internet.

For privacy advocates opposing the laws, the SCOTUS ruling feels like a sink-or-swim moment for age gates, depending on which way the court swings. And it will come just as blue states like Colorado have recently begun pushing for age gates, too. Meanwhile, other laws increasingly seek to safeguard kids’ privacy and prevent social media addiction by also requiring age checks.

Since the 1990s, the US has debated how to best keep kids away from harmful content without trampling adults’ First Amendment rights. And while cruder credit card-based systems like Adult Check are no longer seen as viable, it’s clear that for lawmakers today, technology is still viewed as both the problem and the solution.

While lawmakers claim that the latest technology makes it easier than ever to access porn, advancements like digital IDs, device-based age checks, or app store age checks seem to signal salvation, making it easier to digitally verify user ages. And some artificial intelligence solutions have likely made lawmakers’ dreams of age-gating the Internet appear even more within reach.

Critics have condemned age gates as unconstitutionally limiting adults’ access to legal speech, at the furthest extreme accusing conservatives of seeking to censor all adult content online or expand government surveillance by tracking people’s sexual identity. (Goldman noted that “Russell Vought, an architect of Project 2025 and President Trump’s Director of the Office of Management and Budget, admitted that he favored age authentication mandates as a ‘back door’ way to censor pornography.”)

Ultimately, SCOTUS could end up deciding if any kind of age gate is ever appropriate. The court could perhaps rule that strict scrutiny, which requires a narrowly tailored solution to serve a compelling government interest, must be applied, potentially ruling out all of lawmakers’ suggested strategies. Or the court could decide that strict scrutiny applies but age checks are narrowly tailored. Or it could go the other way and rule that strict scrutiny does not apply, so all state lawmakers need to show is that their basis for requiring age verification is rationally connected to their interest in blocking minors from adult content.

Age verification remains flawed, experts say

If there’s anything the ’90s can teach lawmakers about age gates, it’s that creating an age verification industry dependent on adult sites will only incentivize the creation of more adult sites that benefit from the new rules. Back then, when age verification systems increased sites’ revenues, compliant sites were rewarded, but in today’s climate, it’s the noncompliant sites that stand to profit by not authenticating ages.

Sanderson’s study noted that Louisiana “was the only state that implemented age verification in a manner that plausibly preserved a user’s anonymity while verifying age,” which is why Pornhub didn’t block the state over its age verification law. But other states that Pornhub blocked passed copycat laws that “tended to be stricter, either requiring uploads of an individual’s government identification,” methods requiring providing other sensitive data, “or even presenting biometric data such as face scanning,” the study noted.

The technology continues evolving as the debate rages on. Some of the most popular platforms and biggest tech companies have been testing new age estimation methods this year. Notably, Discord is testing out face scans in the United Kingdom and Australia, and both Meta and Google are testing technology to supposedly detect kids lying about their ages online.

But a solution has not yet been found as parents and their lawyers circle social media companies they believe are harming their kids. In fact, the unreliability of the tech remains an issue for Meta, which is perhaps the most motivated to find a fix, having long faced immense pressure to improve child safety on its platforms. Earlier this year, Meta had to yank its age detection tool after the “measure didn’t work as well as we’d hoped and inadvertently locked out some parents and guardians who shared devices with their teens,” the company said.

On April 21, Meta announced that it started testing the tech in the US, suggesting the flaws were fixed, but Meta did not directly respond to Ars’ request to comment in more detail on updates.

Two years ago, Ash Johnson, a senior policy manager at the nonpartisan nonprofit think tank the Information Technology and Innovation Foundation (ITIF), urged Congress to “support more research and testing of age verification technology,” saying that the government’s last empirical evaluation was in 2014. She noted then that “the technology is not perfect, and some children will break the rules, eventually slipping through the safeguards,” but that lawmakers need to understand the trade-offs of advocating for different tech solutions or else risk infringing user privacy.

More research is needed, Johnson told Ars, while Sanderson’s study suggested that regulators should also conduct circumvention research or be stuck with laws that have a “limited effectiveness as a standalone policy tool.”

For example, while AI solutions are increasingly accurate—and, in one Facebook survey cited in Goldman’s analysis, overwhelmingly more popular with users—the tech still struggles to differentiate between a 17-year-old and an 18-year-old.

Like Aylo, ITIF recommends device-based age authentication as the least restrictive method, Johnson told Ars. Perhaps the biggest issue with that option, though, is that kids may have an easy time accessing adult content on devices shared with parents, Goldman noted.

Not sharing Johnson’s optimism, Goldman wrote that “there is no ‘preferred’ or ‘ideal’ way to do online age authentication.” Even a perfect system that accurately authenticates age every time would be flawed, he suggested.

“Rather, they each fall on a spectrum of ‘dangerous in one way’ to ‘dangerous in a different way,'” he wrote, concluding that “every solution has serious privacy, accuracy, or security problems.”

Kids at “grave risk” from uninformed laws

As a “burgeoning” age verification industry swells, Goldman wants to see more earnest efforts from lawmakers to “develop a wider and more thoughtful toolkit of online child safety measures.” They could start, he suggested, by consistently defining minors in laws so it’s clear who is being regulated and what access is being restricted. They could then provide education to parents and minors to help them navigate online harms.

Without such careful consideration, Goldman predicts a dystopian future prompted by age verification laws. If SCOTUS endorses them, users could become so accustomed to age gates that they start entering sensitive information into various web platforms without a second thought. Even the government knows that would be a disaster, Goldman said.

“Governments around the world want people to think twice before sharing sensitive biometric information due to the information’s immutability if stolen,” Goldman wrote. “Mandatory age authentication teaches them the opposite lesson.”

Goldman recommends that lawmakers start seeking an information-based solution to age verification problems rather than depending on tech to save the day.

“Treating the online age authentication challenges as purely technological encourages the unsupportable belief that its problems can be solved if technologists ‘nerd harder,'” Goldman wrote. “This reductionist thinking is a categorical error. Age authentication is fundamentally an information problem, not a technology problem. Technology can help improve information accuracy and quality, but it cannot unilaterally solve information challenges.”

Lawmakers could potentially minimize risks to kids by only verifying age when someone tries to access restricted content or “by compelling age authenticators to minimize their data collection” and “promptly delete any highly sensitive information” collected. That likely wouldn’t stop some vendors from collecting or retaining data anyway, Goldman suggested. But it could be a better standard to protect users of all ages from inevitable data breaches, since we know that “numerous authenticators have suffered major data security failures that put authenticated individuals at grave risk.”

“If the policy goal is to protect minors online because of their potential vulnerability, then forcing minors to constantly decide whether or not to share highly sensitive information with strangers online is a policy failure,” Goldman wrote. “Child safety online needs a whole-of-society response, not a delegate-and-pray approach.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Monty Python and the Holy Grail turns 50


Ars staffers reflect upon the things they love most about this masterpiece of absurdist comedy.

Credit: EMI Films/Python (Monty) Pictures

Monty Python and the Holy Grail is widely considered to be among the best comedy films of all time, and it’s certainly one of the most quotable. This absurdist masterpiece sending up Arthurian legend turns 50 (!) this year.

It was partly Python member Terry Jones’ passion for the Middle Ages and Arthurian legend that inspired Holy Grail and its approach to comedy. (Jones even went on to direct a 2004 documentary, Medieval Lives.) The troupe members wrote several drafts beginning in 1973, and Jones and Terry Gilliam were co-directors—the first full-length feature for each, so filming was one long learning process. Reviews were mixed when Holy Grail was first released—much like they were for Young Frankenstein (1974), another comedic masterpiece—but audiences begged to differ. It was the top-grossing British film screened in the US in 1975. And its reputation has only grown over the ensuing decades.

The film’s broad cultural influence extends beyond the entertainment industry. Holy Grail has been the subject of multiple scholarly papers examining such topics as its effectiveness at teaching Arthurian literature or geometric thought and logic, the comedic techniques employed, and why the depiction of a killer rabbit is so fitting (killer rabbits frequently appear drawn in the margins of Gothic manuscripts). My personal favorite was a 2018 tongue-in-cheek paper on whether the Black Knight could have survived long enough to make good on his threat to bite King Arthur’s legs off (tl;dr: no).

So it’s not at all surprising that Monty Python and the Holy Grail proved to be equally influential and beloved by Ars staffers, several of whom offer their reminiscences below.

They were nerd-gassing before it was cool

The Monty Python troupe famously made Holy Grail on a shoestring budget—so much so that they couldn’t afford to have the knights ride actual horses. (There are only a couple of scenes featuring a horse, and apparently it’s the same horse.) Rather than throwing up their hands in resignation, that very real constraint fueled the Pythons’ creativity. The actors decided the knights would simply pretend to ride horses while their porters followed behind, banging halves of coconut shells together to mimic the sound of horses’ hooves—a time-honored Foley effect dating back to the early days of radio.

Being masters of absurdist humor, naturally, they had to call attention to it. Arthur and his trusty servant, Patsy (Gilliam), approach the castle of their first potential recruit. When Arthur informs the guards that they have “ridden the length and breadth of the land,” one of the guards isn’t having it. “What, ridden on a horse? You’re using coconuts! You’ve got two empty halves of coconut, and you’re bangin’ ’em together!”

That raises the obvious question: Where did they get the coconuts? What follows is one of the greatest examples of nerd-gassing yet to appear on film. Arthur claims he and Patsy found them, but the guard is incredulous since the coconut is tropical and England is a temperate zone. Arthur counters by invoking the example of migrating swallows. Coconuts do not migrate, but Arthur suggests they could be carried by swallows gripping a coconut by the husk.

The guard still isn’t having it. It’s a question of getting the weight ratios right, you see, to maintain air-speed velocity. Another guard gets involved, suggesting it might be possible with an African swallow, but that species is non-migratory. And so on. The two are still debating the issue as an exasperated Arthur rides off to find another recruit.

The best part? There’s a callback to that scene late in the film when the knights must answer three questions to cross the Bridge of Death or else be chucked into the Gorge of Eternal Peril. When it’s Arthur’s turn, the third question is “What is the air-speed velocity of an unladen swallow?” Arthur asks whether this is an African or a European swallow. This stumps the Bridgekeeper, who gets flung into the gorge. Sir Bedevere asks how Arthur came to know so much about swallows. Arthur replies, “Well, you have to know these things when you’re a king, you know.”

The plucky Black Knight (“It’s just a flesh wound!”) will always hold a special place in my heart, but that debate over air-speed velocities of laden versus unladen swallows encapsulates what makes Holy Grail a timeless masterpiece.

Jennifer Ouellette

A bunny out for blood

“Oh, it’s just a harmless little bunny, isn’t it?”

Despite their appearances, rabbits aren’t always the most innocent-looking animals. Recent reports of rabbit strikes on airplanes are the latest examples of the mayhem these creatures of chaos can inflict on unsuspecting targets.

I learned that lesson a long time ago, though, thanks partly to my way-too-early viewings of the animated Watership Down and Monty Python and the Holy Grail. There I was, about 8 years old and absent of paternal accompaniment, watching previously cuddly creatures bloodying each other and severing the heads of King Arthur’s retinue. While Watership Down’s animal-on-animal violence might have been a bit scarring at that age, I enjoyed the slapstick humor of the Rabbit of Caerbannog scene (many of the jokes my colleagues highlight went over my head upon my initial viewing).

Despite being warned of the creature’s viciousness by Tim the Enchanter, the Knights of the Round Table dismiss the Merlin stand-in’s fear and charge the bloodthirsty creature. But the knights quickly realize they’re no match for the “bad-tempered rodent,” which zips around in the air, goes straight for the throat, and causes the surviving knights to run away in fear. If Arthur and his knights possessed any self-awareness, they might have learned a lesson about making assumptions about appearances.

But hopefully that’s a takeaway for viewers of 1970s British pop culture involving rabbits. Even cute bunnies, as sweet as they may seem initially, can be engines of destruction: “Death awaits you all with nasty, big, pointy teeth.”

Jacob May

Can’t stop the music

The most memorable songs from Monty Python and the Holy Grail were penned by Neil Innes, who frequently collaborated with the troupe and appears in the film. His “Brave Sir Robin” amusingly parodied minstrel tales of valor by imagining all the torturous ways that one knight might die. Then there’s his “Knights of the Round Table,” the first musical number performed by the cast—if you don’t count the monk chants punctuated with slaps on the head with wooden planks. That song hilariously rouses not just wild dancing from knights but also claps from prisoners who otherwise dangle from cuffed wrists.

But while these songs have stuck in my head for decades, Monty Python’s Terry Jones once gave me a reason to focus on the canned music instead, and it weirdly changed the way I’ve watched the movie ever since.

Back in 2001, Jones told Billboard that an early screening for investors almost tanked the film. He claimed that after the first five minutes, the movie got no laughs whatsoever. For Jones, whose directorial debut could have died in that moment, the silence was unthinkable. “It can’t be that unfunny,” he told Billboard. “There must be something wrong.”

Jones soon decided that the soundtrack was the problem, immediately cutting the “wonderfully rich, atmospheric” songs penned by Innes that seemed to be “overpowering the funny bits” in favor of canned music.

Reading this prompted an immediate rewatch because I needed to know what the first bit was that failed to get a laugh from that fateful audience. It turned out to be the scene where King Arthur encounters peasants in a field who deny knowing that there even was a king. As usual, I was incapable of holding back a burst of laughter when one peasant woman grieves, “Well, I didn’t vote for you” while packing random clumps of mud into the field. It made me wonder if any song might have robbed me of that laugh, and that made me pay closer attention to how Jones flipped the script and somehow meticulously used the canned music to extract more laughs.

The canned music was licensed from a British sound library that helped the 1920s movie business evolve past silent films. The tracks are some of the earliest songs written to summon emotion from viewers whose eyes were glued to a screen. In Monty Python and the Holy Grail, which features a naive King Arthur enduring his perilous journey on a pretend horse, the canned music provides the most predictable soundtrack you could imagine scoring a child’s game of make-believe. It also plays the straight man, earnestly pulsing to convey deep trouble as the knights approach the Bridge of Death or trumpeting heavenly fanfare at the anticipated appearance of the Holy Grail.

It’s easy to watch the movie without noticing the canned music, as the colorful performances are Jones’ intended focus. Because the group didn’t rely on punchlines, it couldn’t afford to let any nuance get lost. But there is at least one moment where Jones obviously lets the music overwhelm the acting to compel a belly laugh. Just before “the most foul, cruel, bad-tempered rodent” appears, a quick surge of dramatic music that cuts out just as suddenly makes it all the more absurd when the threat emerges and appears to be an “ordinary rabbit.”

It’s during this scene, too, that King Arthur delivers a line that sums up how predictably odd but deceptively artful the movie’s use of canned music really is. When he meets Tim the Enchanter—who tries to warn the knights about the rabbit’s “pointy teeth” by evoking loud thunder rolls and waggling his fingers in front of his mouth—Arthur turns to the knights and says, “What an eccentric performance.”

Ashley Belanger

Thank the “keg rock conclave”

I tried to make music a big part of my teenage identity because I didn’t have much else. I was a suburban kid with a B-minus/C-plus average, no real hobbies, sports, or extracurriculars, plus a deeply held belief that Nine Inch Nails, the Beastie Boys, and Aphex Twin would never get their due as geniuses. Classic Rock, the stuff jocks listened to at parties and practice? That my dad sang along to after having a few? No thanks.

There were cultural heroes, there were musty, overwrought villains, and I knew the score. Or so I thought.

I don’t remember exactly where I found the little fact that scarred my oppositional ego forever. It might have been Spin magazine, a weekend MTV/VH1 feature, or that Rolling Stone book about the ’70s (I bought it for the punks, I swear). But at some point, I learned that a who’s-who of my era’s played-out bands—Led Zeppelin, Pink Floyd, even Jethro (freaking) Tull—personally funded one of my favorite subversive movies. Jimmy Page and Robert Plant, key members of the keg-rock conclave, attended the premiere.

It was such a small thing, but it raised such big, naive, adolescent questions. Somebody had to pay for Holy Grail—it didn’t just arrive as something passed between nerds? People who make things I might not enjoy could financially support things I do enjoy? There was a time when today’s overcelebrated dinosaurs were cool and hip in the subculture? I had common ground with David Gilmour?

Ever since, when a reference to Holy Grail is made, especially to how cheap it looks, I think about how I once learned that my beloved nerds (or theater kids) wouldn’t even have those coconut horses were it not for some decent-hearted jocks.

Kevin Purdy

A masterpiece of absurdism

“I blow my nose at you, English pig-dog!” Credit: EMI Films/Python (Monty) Pictures

I was young enough that I’d never previously stayed awake until midnight on New Year’s Eve. My parents were off to a party, my younger brother was in bed, and my older sister had a neglectful attitude toward babysitting me. So I was parked in front of the TV when the local PBS station aired a double feature of Yellow Submarine and The Holy Grail.

At the time, I probably would have said my mind was blown. In retrospect, I’d prefer to think that my mind was expanded.

For years, those films mostly existed as a source of one-line evocations of sketch comedy nirvana that I’d swap with my friends. (I’m not sure I’ve ever lacked a group of peers where a properly paced “With… a herring!” had meaning.) But over time, I’ve come to appreciate other ways that the films have stuck with me. I can’t say whether they set me on an aesthetic trajectory that has continued for decades or if they were just the first things to tickle some underlying tendencies that were lurking in my not-yet-fully-wired brain.

In either case, my brain has developed into a huge fan of absurdism, whether in sketch comedy, longer narratives like Arrested Development, or the lyrics of Courtney Barnett. Or, let’s face it, any stream-of-consciousness lyrics I’ve been able to hunt down. But Monty Python remains a master of the form, and The Holy Grail’s concluding police bust of the knights remains one of its purest expressions.

A bit less obviously, both films were probably my first exposure to anti-plotting, where linearity and a sense of time were really beside the point. With some rare exceptions—the eating of Sir Robin’s minstrels, Ringo putting a hole in his pocket—the order of the scenes was completely irrelevant. Few of the incidents had much consequence for future scenes. Since I was unused to staying up past midnight at that age, I’d imagine the order of events was already fuzzy by the next day. By the time I was swapping one-line excerpts with friends, it was long gone. And it just didn’t matter.

In retrospect, I think that helped ready my brain for things like Catch-22 and its convoluted, looping, non-Euclidean plotting. The novel felt like a revelation when I first read it, but I’ve since realized it fits a bit more comfortably within a spectrum of works that play tricks with time and find clever connections among seemingly random events.

I’m not sure what possessed someone to place these two films together as appropriate New Year’s Eve programming. But I’d like to think it was more intentional than I had any reason to suspect at the time. And I feel like I owe them a debt.

John Timmer

A delightful send-up of autocracy

King Arthur attempting to throttle a peasant in the field

“See the violence inherent in the system!” Credit: Python (Monty) Pictures

What an impossible task to pick just a single thing I love about this film! But if I had to choose one scene, it would be when a lost King Arthur comes across an old woman—but oops, it’s actually a man named Dennis—and ends up in a discussion about medieval politics. Arthur explains that he is king because the Lady of the Lake conferred the sword Excalibur on him, signifying that he should rule as king of the Britons by divine right.

To this, Dennis replies, “Strange women lying in ponds distributing swords is no basis for a system of government. Supreme executive power derives from a mandate from the masses, not from some farcical aquatic ceremony.”

Even though it was filmed half a century ago, the scene offers a delightful send-up of autocracy. And not to be too much of a downer here, but all of us living in the United States probably need to be reminded that living in an autocracy would suck for a lot of reasons. So let’s not do that.

Eric Berger

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Monty Python and the Holy Grail turns 50 Read More »

ios-and-android-juice-jacking-defenses-have-been-trivial-to-bypass-for-years

iOS and Android juice jacking defenses have been trivial to bypass for years


SON OF JUICE JACKING ARISES

New ChoiceJacking attack allows malicious chargers to steal data from phones.

Credit: Aurich Lawson | Getty Images


About a decade ago, Apple and Google started updating iOS and Android, respectively, to make them less susceptible to “juice jacking,” a form of attack that could surreptitiously steal data or execute malicious code when users plug their phones into special-purpose charging hardware. Now, researchers are revealing that, for years, the mitigations have suffered from a fundamental defect that has made them trivial to bypass.

“Juice jacking” was coined in a 2011 article on KrebsOnSecurity detailing an attack demonstrated at a Defcon security conference at the time. Juice jacking works by equipping a charger with hidden hardware that can access files and other internal resources of phones, in much the same way that a computer can when a user connects it to the phone.

An attacker would then make the chargers available in airports, shopping malls, or other public venues for use by people looking to recharge depleted batteries. While the charger was ostensibly only providing electricity to the phone, it was also secretly downloading files or running malicious code on the device behind the scenes. Starting in 2012, both Apple and Google tried to mitigate the threat by requiring users to click a confirmation button on their phones before a computer—or a computer masquerading as a charger—could access files or execute code on the phone.

The logic behind the mitigation was rooted in a key portion of the USB protocol that, in the parlance of the specification, dictates that a USB port can act as either a “host” device or a “peripheral” device at any given time, but not both. In the context of phones, this meant they could either:

  • Host the device on the other end of the USB cord—for instance, if a user connects a thumb drive or keyboard. In this scenario, the phone is the host that has access to the internals of the drive, keyboard or other peripheral device.
  • Act as a peripheral device that’s hosted by a computer or malicious charger, which under the USB paradigm is a host that has system access to the phone.

An alarming state of USB security

Researchers at the Graz University of Technology in Austria recently made a discovery that completely undermines the premise behind the countermeasure: It rests on the assumption that a USB host can’t inject input that autonomously approves the confirmation prompt. Given the restriction against a USB device simultaneously acting as a host and peripheral, the premise seemed sound. The trust models built into both iOS and Android, however, present loopholes that can be exploited to defeat the protections. The researchers went on to devise ChoiceJacking, the first known attack to defeat juice-jacking mitigations.

“We observe that these mitigations assume that an attacker cannot inject input events while establishing a data connection,” the researchers wrote in a paper scheduled to be presented in August at the Usenix Security Symposium in Seattle. “However, we show that this assumption does not hold in practice.”

The researchers continued:

We present a platform-agnostic attack principle and three concrete attack techniques for Android and iOS that allow a malicious charger to autonomously spoof user input to enable its own data connection. Our evaluation using a custom cheap malicious charger design reveals an alarming state of USB security on mobile platforms. Despite vendor customizations in USB stacks, ChoiceJacking attacks gain access to sensitive user files (pictures, documents, app data) on all tested devices from 8 vendors including the top 6 by market share.

In response to the findings, Apple updated the confirmation dialogs in last month’s release of iOS/iPadOS 18.4 to require user authentication in the form of a PIN or password. While the researchers were investigating their ChoiceJacking attacks last year, Google independently updated its confirmation dialog with the release of Android 15 in November. The researchers say the new mitigations work as expected on fully updated Apple and Android devices. Given the fragmentation of the Android ecosystem, however, many Android devices remain vulnerable.

All three of the ChoiceJacking techniques defeat the original Android juice-jacking mitigations. One of them also works against those defenses in Apple devices. In all three, the charger acts as a USB host to trigger the confirmation prompt on the targeted phone.

The attacks then exploit various weaknesses in the OS that allow the charger to autonomously inject “input events” that can enter text or click buttons presented in screen prompts as if the user had done so directly on the phone. In all three, the charger eventually gains two conceptual channels to the phone: (1) an input channel that lets it spoof user consent and (2) a file-access connection that can steal files.

An illustration of ChoiceJacking attacks. (1) The victim device is attached to the malicious charger. (2) The charger establishes an extra input channel. (3) The charger initiates a data connection. User consent is needed to confirm it. (4) The charger uses the input channel to spoof user consent. Credit: Draschbacher et al.
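Those two channels are easier to see in a toy model. The sketch below is purely conceptual—it is not the researchers’ code, and the class and function names are invented—but it captures why the mitigation fails once the charger controls both an input channel and a data channel.

```python
# Toy model of the ChoiceJacking idea (conceptual only; names are invented).
# The original mitigation assumes only the person holding the phone can
# answer the consent prompt. If the "charger" also controls an input
# channel, it can answer the prompt itself.

class Phone:
    def __init__(self):
        self.data_access_granted = False

    def show_consent_prompt(self, answer_from_input_channel: str) -> None:
        # The phone can't tell whether the tap came from a human finger
        # or from input injected by the attached hardware.
        if answer_from_input_channel == "ALLOW":
            self.data_access_granted = True


class MaliciousCharger:
    def attack(self, phone: Phone) -> bool:
        # Channel 1: act as an input device and prepare an "Allow" tap.
        spoofed_answer = "ALLOW"
        # Channel 2: act as a USB host and request a data connection,
        # which triggers the consent prompt on the phone.
        phone.show_consent_prompt(spoofed_answer)
        return phone.data_access_granted


if __name__ == "__main__":
    print(MaliciousCharger().attack(Phone()))  # prints True: consent was spoofed
```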

It’s a keyboard, it’s a host, it’s both

In the ChoiceJacking variant that defeats both Apple- and Google-devised juice-jacking mitigations, the charger starts out as a USB keyboard or a similar peripheral device. It sends keyboard input over USB, ranging from simple key presses, such as arrow up or down, to more complex key combinations that open settings screens or the status bar.

That input establishes a Bluetooth connection to a second, miniaturized keyboard hidden inside the malicious charger. The charger then uses USB Power Delivery (USB PD), a standard available in USB-C connectors that lets the two connected devices negotiate their roles through exchanged messages, to perform what’s known as a USB PD Data Role Swap, which flips which end acts as the USB host and which acts as the peripheral.

A simulated ChoiceJacking charger. Bidirectional USB lines allow for data role swaps. Credit: Draschbacher et al.

With the charger now acting as the USB host, it triggers the file access consent dialog. At the same time, the charger uses the Bluetooth keyboard it paired earlier to approve that dialog on the phone’s behalf.

The full steps for the attack, provided in the Usenix paper, are listed below (a compact conceptual sketch of the same sequence follows the list):

1. The victim device is connected to the malicious charger. The device has its screen unlocked.

2. At a suitable moment, the charger performs a USB PD Data Role (DR) Swap. The mobile device now acts as a USB host, the charger acts as a USB input device.

3. The charger generates input to ensure that BT is enabled.

4. The charger navigates to the BT pairing screen in the system settings to make the mobile device discoverable.

5. The charger starts advertising as a BT input device.

6. By constantly scanning for newly discoverable Bluetooth devices, the charger identifies the BT device address of the mobile device and initiates pairing.

7. Through the USB input device, the charger accepts the Yes/No pairing dialog appearing on the mobile device. The Bluetooth input device is now connected.

8. The charger sends another USB PD DR Swap. It is now the USB host, and the mobile device is the USB device.

9. As the USB host, the charger initiates a data connection.

10. Through the Bluetooth input device, the charger confirms its own data connection on the mobile device.
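Restated as a short, purely conceptual sketch (my paraphrase of the list above, not the researchers’ code), the sequence boils down to the charger flipping its USB data role twice while keeping its Bluetooth input channel available to answer prompts:

```python
# Conceptual paraphrase of the published attack steps; not the researchers' code.
# The charger's USB data role flips twice, and the Bluetooth keyboard it pairs
# along the way lets it answer every prompt the phone shows.

SEQUENCE = [
    ("charger = USB host",   "Victim phone is plugged in with its screen unlocked"),
    ("charger = USB device", "PD Data Role Swap: phone becomes the USB host"),
    ("charger = USB device", "Inject keystrokes to enable Bluetooth and open pairing settings"),
    ("charger = USB device", "Pair the phone with the keyboard hidden inside the charger"),
    ("charger = USB host",   "Second PD Data Role Swap: charger becomes the USB host again"),
    ("charger = USB host",   "Request a data connection, raising the consent prompt"),
    ("charger = USB host",   "Approve the prompt via the Bluetooth keyboard"),
]

for role, action in SEQUENCE:
    print(f"[{role}] {action}")
```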

This technique works against all but one of the 11 phone models tested, with the holdout being an Android device running the Vivo Funtouch OS, which doesn’t fully support the USB PD protocol. The attacks against the 10 remaining models take about 25 to 30 seconds to establish the Bluetooth pairing, depending on the phone model being hacked. The attacker then has read and write access to files stored on the device for as long as it remains connected to the charger.

Two more ways to hack Android

The two other members of the ChoiceJacking family work only against the juice-jacking mitigations that Google put into Android. In the first, the malicious charger invokes the Android Open Accessory Protocol (AOAP), which allows a USB host to act as an input device when the host sends a special message that puts it into accessory mode.

The protocol specifically dictates that while in accessory mode, a USB host can no longer respond to other USB interfaces, such as the Picture Transfer Protocol for transferring photos and videos and the Media Transfer Protocol for transferring files in other formats. Despite the restriction, all of the Android devices tested violated the specification by accepting AOAP messages even when the USB host hadn’t been put into accessory mode. The charger can exploit this implementation flaw to autonomously complete the required user confirmations.

The remaining ChoiceJacking technique exploits a race condition in the Android input dispatcher by flooding it with a specially crafted sequence of input events. The dispatcher places each event into a queue and processes them one by one, waiting for all previous input events to be fully processed before acting on a new one.

“This means that a single process that performs overly complex logic in its key event handler will delay event dispatching for all other processes or global event handlers,” the researchers explained.

They went on to note, “A malicious charger can exploit this by starting as a USB peripheral and flooding the event queue with a specially crafted sequence of key events. It then switches its USB interface to act as a USB host while the victim device is still busy dispatching the attacker’s events. These events therefore accept user prompts for confirming the data connection to the malicious charger.”
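Here’s a small, self-contained illustration of that pattern (my own toy model, not code from the paper): consent-approving key events are queued while the attacker is still a peripheral, and because a slow handler delays the queue, they’re only processed after the attacker has switched to the host role and put the consent prompt on screen.

```python
# Toy model of the input-queue race described above (not code from the paper).
from collections import deque

event_queue = deque()

# Phase 1: attacker acts as a USB peripheral and floods the queue with
# key events that would accept a yes/no prompt.
for _ in range(5):
    event_queue.append("KEY_ACCEPT")

# Phase 2: attacker switches its USB role to host and requests a data
# connection, which puts the consent prompt on screen.
prompt_on_screen = True

# Phase 3: the dispatcher finally drains the backlog, one event at a time.
# By now the prompt is showing, so the stale events land on it.
while event_queue:
    event = event_queue.popleft()
    if prompt_on_screen and event == "KEY_ACCEPT":
        print("Consent prompt accepted by a pre-queued event")
        prompt_on_screen = False
```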

The Usenix paper provides the following matrix showing which devices tested in the research are vulnerable to which attacks.

The susceptibility of tested devices to all three ChoiceJacking attack techniques. Credit: Draschbacher et al.

User convenience over security

In an email, the researchers said that the fixes provided by Apple and Google successfully blunt ChoiceJacking attacks on iPhones, iPads, and Pixel devices. Many Android devices made by other manufacturers, however, remain vulnerable because they have yet to be updated to Android 15. Other Android devices—most notably those from Samsung running the One UI 7 software interface—don’t implement the new authentication requirement, even when running Android 15. The omission leaves these models vulnerable to ChoiceJacking. In an email, principal paper author Florian Draschbacher wrote:

The attack can therefore still be exploited on many devices, even though we informed the manufacturers about a year ago and they acknowledged the problem. The reason for this slow reaction is probably that ChoiceJacking does not simply exploit a programming error. Rather, the problem is more deeply rooted in the USB trust model of mobile operating systems. Changes here have a negative impact on the user experience, which is why manufacturers are hesitant. [It] means for enabling USB-based file access, the user doesn’t need to simply tap YES on a dialog but additionally needs to present their unlock PIN/fingerprint/face. This inevitably slows down the process.

The biggest threat posed by ChoiceJacking is to Android devices that have been configured to enable USB debugging. Developers often turn on this option so they can troubleshoot problems with their apps, but many non-developers enable it so they can install apps from their computer, root their devices so they can install a different OS, transfer data between devices, and recover bricked phones. Turning it on requires a user to flip a switch in Settings > System > Developer options.

If a phone has USB debugging turned on, ChoiceJacking can gain shell access through the Android Debug Bridge. From there, an attacker can install apps, access the file system, and execute malicious binary files. That level of access is much higher than what the Picture Transfer Protocol and Media Transfer Protocol provide, since those only allow read and write access to the user’s files.
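As a rough illustration of what that shell access means in practice, the commands below are ordinary Android Debug Bridge invocations wrapped in Python; the file paths and APK name are placeholders, and this is simply a sketch of the kind of access ADB grants once a host is authorized.

```python
# Sketch of the access an authorized ADB host has (paths/APK are placeholders).
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its output."""
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

print(adb("devices"))                           # list authorized devices
print(adb("shell", "ls", "/sdcard/DCIM"))       # browse user photos
adb("pull", "/sdcard/DCIM", "./copied_photos")  # copy files off the device
adb("install", "payload.apk")                   # side-load an arbitrary app
adb("shell", "id")                              # run arbitrary shell commands
```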

The vulnerabilities are tracked as:

    • CVE-2025-24193 (Apple)
    • CVE-2024-43085 (Google)
    • CVE-2024-20900 (Samsung)
    • CVE-2024-54096 (Huawei)

A Google spokesperson confirmed that the weaknesses were patched in Android 15 but didn’t speak to the installed base of Android devices from other manufacturers, which either don’t support the new OS or don’t implement the new authentication requirement it makes possible. Apple declined to comment for this post.

Word that juice-jacking-style attacks are once again possible on some Android devices and out-of-date iPhones is likely to breathe new life into the constant warnings from federal authorities, tech pundits, news outlets, and local and state government agencies that phone users should steer clear of public charging stations. Special-purpose cords that disconnect data access remain a viable mitigation, but the researchers noted that “data blockers also interfere with modern power negotiation schemes, thereby degrading charge speed.”

As I reported in 2023, these warnings are mostly scaremongering, and the advent of ChoiceJacking does little to change that, given that there are no documented cases of such attacks in the wild. That said, people using Android devices that don’t support Google’s new authentication requirement may want to refrain from public charging.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

iOS and Android juice jacking defenses have been trivial to bypass for years Read More »

in-the-age-of-ai,-we-must-protect-human-creativity-as-a-natural-resource

In the age of AI, we must protect human creativity as a natural resource


Op-ed: As AI outputs flood the Internet, diverse human perspectives are our most valuable resource.

Ironically, our present AI age has shone a bright spotlight on the immense value of human creativity as breakthroughs in technology threaten to undermine it. As tech giants rush to build newer AI models, their web crawlers vacuum up creative content, and those same models spew floods of synthetic media, risking drowning out the human creative spark in an ocean of pablum.

Given this trajectory, AI-generated content may soon exceed the entire corpus of historical human creative works, making the preservation of the human creative ecosystem not just an ethical concern but an urgent imperative. The alternative is nothing less than a gradual homogenization of our cultural landscape, where machine learning flattens the richness of human expression into a mediocre statistical average.

A limited resource

By ingesting billions of creations, chatbots learn to talk, and image synthesizers learn to draw. Along the way, the AI companies behind them treat our shared culture like an inexhaustible resource to be strip-mined, with little thought for the consequences.

But human creativity isn’t the product of an industrial process; it’s inherently throttled precisely because we are finite biological beings who draw inspiration from real lived experiences while balancing creativity with the necessities of life—sleep, emotional recovery, and limited lifespans. Creativity comes from making connections, and it takes energy, time, and insight for those connections to be meaningful. Until recently, a human brain was a prerequisite for making those kinds of connections, and there’s a reason why that is valuable.

Every human brain isn’t just a store of data—it’s a knowledge engine that thinks in a unique way, creating novel combinations of ideas. Instead of having one “connection machine” (an AI model) duplicated a million times, we have roughly 8 billion neural networks, each with a unique perspective. Relying on the cognitive diversity of human thought helps us escape the monolithic thinking that may emerge if everyone were to draw from the same AI-generated sources.

Today, the AI industry’s business models unintentionally echo the ways in which early industrialists approached forests and fisheries—as free inputs to exploit without considering ecological limits.

Just as pollution from early factories unexpectedly damaged the environment, AI systems risk polluting the digital environment by flooding the Internet with synthetic content. Like a forest that needs careful management to thrive or a fishery vulnerable to collapse from overexploitation, the creative ecosystem can be degraded even if the potential for imagination remains.

Depleting our creative diversity may become one of the hidden costs of AI, but that diversity is worth preserving. If we let AI systems deplete or pollute the human outputs they depend on, what happens to AI models—and ultimately to human society—over the long term?

AI’s creative debt

Every AI chatbot or image generator exists only because of human works, and many traditional artists argue strongly against current AI training approaches, labeling them plagiarism. Tech companies tend to disagree, although their positions vary. For example, in 2023, imaging giant Adobe took an unusual step by training its Firefly AI models solely on licensed stock photos and public domain works, demonstrating that alternative approaches are possible.

Adobe’s licensing model offers a contrast to companies like OpenAI, which rely heavily on scraping vast amounts of Internet content without always distinguishing between licensed and unlicensed works.

Photo of a mining dumptruck and water tank in an open pit copper mine.

OpenAI has argued that this type of scraping constitutes “fair use” and effectively claims that competitive AI models at current performance levels cannot be developed without relying on unlicensed training data, despite Adobe’s alternative approach.

The “fair use” argument often hinges on the legal concept of “transformative use,” the idea that using works for a fundamentally different purpose from creative expression—such as identifying patterns for AI—does not violate copyright. Generative AI proponents often argue that their approach mirrors how human artists learn from the world around them.

Meanwhile, artists are expressing growing concern about losing their livelihoods as corporations turn to cheap, instantaneously generated AI content. They also call for clear boundaries and consent-driven models rather than allowing developers to extract value from their creations without acknowledgment or remuneration.

Copyright as crop rotation

This tension between artists and AI reveals a deeper ecological perspective on creativity itself. Copyright’s time-limited nature was designed as a form of resource management, like crop rotation or regulated fishing seasons that allow for regeneration. Copyright expiration isn’t a bug; its designers hoped it would ensure a steady replenishment of the public domain, feeding the ecosystem from which future creativity springs.

On the other hand, purely AI-generated outputs cannot be copyrighted in the US, potentially brewing an unprecedented explosion in public domain content, although it’s content that contains smoothed-over imitations of human perspectives.

Treating human-generated content solely as raw material for AI training disrupts this ecological balance between “artist as consumer of creative ideas” and “artist as producer.” Repeated legislative extensions of copyright terms have already significantly delayed the replenishment cycle, keeping works out of the public domain for much longer than originally envisioned. Now, AI’s wholesale extraction approach further threatens this delicate balance.

The resource under strain

Our creative ecosystem is already showing measurable strain from AI’s impact, from tangible present-day infrastructure burdens to concerning future possibilities.

Aggressive AI crawlers already effectively function as denial-of-service attacks on certain sites, with Cloudflare documenting GPTBot’s immediate impact on traffic patterns. Wikimedia’s experience provides clear evidence of current costs: AI crawlers caused a documented 50 percent bandwidth surge, forcing the nonprofit to divert limited resources to defensive measures rather than to its core mission of knowledge sharing. As Wikimedia says, “Our content is free, our infrastructure is not.” Many of these crawlers demonstrably ignore established technical boundaries like robots.txt files.
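For context, OpenAI documents a GPTBot user agent that site owners can block in robots.txt; the snippet below uses Python’s standard-library robots.txt parser to show what such a directive expresses. Whether a given crawler actually honors it is exactly the problem the Wikimedia numbers illustrate (the example URL is, of course, a placeholder).

```python
# What a robots.txt rule aimed at OpenAI's GPTBot expresses, checked with
# Python's standard-library parser. Honoring the rule is voluntary.
from urllib.robotparser import RobotFileParser

rules = """User-agent: GPTBot
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("GPTBot", "https://example.com/essay"))    # False: disallowed
print(parser.can_fetch("OtherBot", "https://example.com/essay"))  # True: no rule applies
```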

Beyond infrastructure strain, our information environment also shows signs of degradation. Google has publicly acknowledged rising volumes of “spammy, low-quality,” often auto-generated content appearing in search results. A Wired investigation found concrete examples of AI-generated plagiarism sometimes outranking original reporting in search results. This kind of digital pollution led Ross Anderson of Cambridge University to compare it to filling oceans with plastic—it’s a contamination of our shared information spaces.

Looking to the future, more risks may emerge. Ted Chiang’s comparison of LLMs to lossy JPEGs offers a framework for understanding potential problems, as each AI generation summarizes web information into an increasingly “blurry” facsimile of human knowledge. The logical extension of this process—what some researchers term “model collapse“—presents a risk of degradation in our collective knowledge ecosystem if models are trained indiscriminately on their own outputs. (However, this differs from carefully designed synthetic data that can actually improve model efficiency.)

This downward spiral of AI pollution may soon resemble a classic “tragedy of the commons,” in which organizations act from self-interest at the expense of shared resources. If AI developers continue extracting data without limits or meaningful contributions, the shared resource of human creativity could eventually degrade for everyone.

Protecting the human spark

While AI models that simulate creativity in writing, coding, images, audio, or video can achieve remarkable imitations of human works, this sophisticated mimicry currently lacks the full depth of the human experience.

For example, AI models lack a body that endures the pain and travails of human life. They don’t grow over the course of a human lifespan in real time. When an AI-generated output happens to connect with us emotionally, it often does so by imitating patterns learned from a human artist who has actually lived that pain or joy.

A photo of a young woman painter in her art studio.

Even if future AI systems develop more sophisticated simulations of emotional states or embodied experiences, they would still fundamentally differ from human creativity, which emerges organically from lived biological experience, cultural context, and social interaction.

That’s because the world constantly changes. New types of human experience emerge. If an ethically trained AI model is to remain useful, researchers must train it on recent human experiences, such as viral trends, evolving slang, and cultural shifts.

Current AI solutions, like retrieval-augmented generation (RAG), address this challenge somewhat by retrieving up-to-date, external information to supplement their static training data. Yet even RAG methods depend heavily on validated, high-quality human-generated content—the very kind of data at risk if our digital environment becomes overwhelmed with low-quality AI-produced output.
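As a rough sketch of what that means in practice (a toy example, not any particular vendor’s pipeline): a RAG system retrieves relevant human-written documents at query time and folds them into the prompt, so the quality of the answer leans directly on the quality of that retrieved text.

```python
# Toy retrieval-augmented generation: score documents by word overlap with
# the query, then build a prompt around the best match. A real system would
# use embeddings and a language model; the dependence on good source text
# is the same.

def retrieve(query: str, documents: list[str]) -> str:
    query_words = set(query.lower().split())
    return max(documents, key=lambda doc: len(query_words & set(doc.lower().split())))

def build_prompt(query: str, documents: list[str]) -> str:
    context = retrieve(query, documents)
    return f"Using this source:\n{context}\n\nAnswer the question: {query}"

docs = [
    "A human-written explainer on this spring's viral dance trend.",
    "Auto-generated filler text about dance, trends, and dancing trends.",
]
print(build_prompt("What is the new viral dance trend?", docs))
```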

This need for high-quality, human-generated data is a major reason why companies like OpenAI have pursued media deals (including a deal signed with Ars Technica parent Condé Nast last August). Yet paradoxically, the same models fed on valuable human data often produce the low-quality spam and slop that floods public areas of the Internet, degrading the very ecosystem they rely on.

AI as creative support

When used carelessly or excessively, generative AI is a threat to the creative ecosystem, but we can’t wholly discount the tech as a tool in a human creative’s arsenal. The history of art is full of technological changes (new pigments, brushes, typewriters, word processors) that transform the nature of artistic production while augmenting human creativity.

Bear with me because there’s a great deal of nuance here that is easy to miss among today’s more impassioned reactions to people using AI as a blunt instrument of creating mediocrity.

While many artists rightfully worry about AI’s extractive tendencies, research published in Harvard Business Review indicates that AI tools can potentially amplify rather than merely extract creative capacity, suggesting that a symbiotic relationship is possible under the right conditions.

Inherent in this argument is that the responsible use of AI is reflected in the skill of the user. You can use a paintbrush to paint a wall or paint the Mona Lisa. Similarly, generative AI can mindlessly fill a canvas with slop, or a human can utilize it to express their own ideas.

Machine learning tools (such as those in Adobe Photoshop) already help human creatives prototype concepts faster, iterate on variations they wouldn’t have considered, or handle some repetitive production tasks like object removal or audio transcription, freeing humans to focus on conceptual direction and emotional resonance.

These potential positives, however, don’t negate the need for responsible stewardship and respecting human creativity as a precious resource.

Cultivating the future

So what might a sustainable ecosystem for human creativity actually involve?

Legal and economic approaches will likely be key. Governments could legislate that AI training must be opt-in, or at the very least, provide a collective opt-out registry (as the EU’s “AI Act” does).

Other potential mechanisms include robust licensing or royalty systems, such as creating a royalty clearinghouse (like the music industry’s BMI or ASCAP) for efficient licensing and fair compensation. Those fees could help compensate human creatives and encourage them to keep creating well into the future.

Deeper shifts may involve cultural values and governance. Inspired by models like Japan’s “Living National Treasures,” where the government funds artisans to preserve vital skills and support their work, could we establish programs that similarly support human creators while also designating certain works or practices as “creative reserves,” funding the further creation of certain creative works even if the economic market for them dries up?

Or a more radical shift might involve an “AI commons”—legally declaring that any AI model trained on publicly scraped data should be owned collectively as a shared public domain, ensuring that its benefits flow back to society and don’t just enrich corporations.

Photo of a family harvesting organic crops on a farm

Meanwhile, Internet platforms have already been experimenting with technical defenses against industrial-scale AI demands. Examples include proof-of-work challenges, slowdown “tarpits” (e.g., Nepenthes), shared crawler blocklists (“ai.robots.txt“), commercial tools (Cloudflare’s AI Labyrinth), and Wikimedia’s “WE5: Responsible Use of Infrastructure” initiative.

These solutions aren’t perfect, and implementing any of them would require overcoming significant practical hurdles. Strict regulations might slow beneficial AI development; opt-out systems burden creators, while opt-in models can be complex to track. Meanwhile, tech defenses often invite arms races. Finding a sustainable, equitable balance remains the core challenge. The issue won’t be solved in a day.

Invest in people

While navigating these complex systemic challenges will take time and collective effort, there is a surprisingly direct strategy that organizations can adopt now: investing in people. Don’t sacrifice human connection and insight to save money with mediocre AI outputs.

Organizations that cultivate unique human perspectives and integrate them with thoughtful AI augmentation will likely outperform those that pursue cost-cutting through wholesale creative automation. Investing in people acknowledges that while AI can generate content at scale, the distinctiveness of human insight, experience, and connection remains priceless.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

In the age of AI, we must protect human creativity as a natural resource Read More »

review:-ryzen-ai-cpu-makes-this-the-fastest-the-framework-laptop-13-has-ever-been

Review: Ryzen AI CPU makes this the fastest the Framework Laptop 13 has ever been


With great power comes great responsibility and subpar battery life.

The latest Framework Laptop 13, which asks you to take the good with the bad. Credit: Andrew Cunningham


At this point, the Framework Laptop 13 is a familiar face, an old friend. We have reviewed this laptop five other times, and in that time, the idea of a repairable and upgradeable laptop has gone from a “sounds great if they can pull it off” idea to one that’s become pretty reliable and predictable. And nearly four years out from the original version—which shipped with an 11th-generation Intel Core processor—we’re at the point where an upgrade will get you significant boosts to CPU and GPU performance, plus some other things.

We’re looking at the Ryzen AI 300 version of the Framework Laptop today, currently available for preorder and shipping in Q2 for people who buy one now. The laptop starts at $1,099 for a pre-built version and $899 for a RAM-less, SSD-less, Windows-less DIY version, and we’ve tested the Ryzen AI 9 HX 370 version that starts at $1,659 before you add RAM, an SSD, or an OS.

This board is a direct upgrade to Framework’s Ryzen 7040-series board from mid-2023, with most of the same performance benefits we saw last year when we first took a look at the Ryzen AI 300 series. It’s also, if this matters to you, the first Framework Laptop to meet Microsoft’s requirements for its Copilot+ PC initiative, giving users access to some extra locally processed AI features (including but not limited to Recall) with the promise of more to come.

For this upgrade, Ryzen AI giveth, and Ryzen AI taketh away. This is the fastest the Framework Laptop 13 has ever been (at least, if you spring for the Ryzen AI 9 HX 370 chip that our review unit shipped with). If you’re looking to do some light gaming (or non-Nvidia GPU-accelerated computing), the Radeon 890M GPU is about as good as it gets. But you’ll pay for it in battery life—never a particularly strong point for Framework, and less so here than in most of the Intel versions.

What’s new, Framework?

This Framework update brings the return of colorful translucent accessories, parts you can also add to an older Framework Laptop if you want. Credit: Andrew Cunningham

We’re going to focus on what makes this particular Framework Laptop 13 different from the past iterations. We talk more about the build process and the internals in our review of the 12th-generation Intel Core version, and we ran lots of battery tests with the new screen in our review of the Intel Core Ultra version. We also have coverage of the original Ryzen version of the laptop, with the Ryzen 7 7840U and Radeon 780M GPU installed.

Per usual, every internal refresh of the Framework Laptop 13 comes with another slate of external parts. Functionally, there’s not a ton of exciting stuff this time around—certainly nothing as interesting as the higher-resolution 120 Hz screen option we got with last year’s Intel Meteor Lake update—but there’s a handful of things worth paying attention to.

Functionally, Framework has slightly improved the keyboard, with “a new key structure” on the spacebar and shift keys that “reduce buzzing when your speakers are cranked up.” I can’t really discern a difference in the feel of the keyboard, so this isn’t a part I’d run out to add to my own Framework Laptop, but it’s a fringe benefit if you’re buying an all-new laptop or replacing your keyboard for some other reason.

Keyboard legends have also been tweaked; pre-built Windows versions get Microsoft’s dedicated (and, within limits, customizable) Copilot key, while DIY editions come with a Framework logo on the Windows/Super key (instead of the word “super”) and no Copilot key.

Cosmetically, Framework is keeping the dream of the late ’90s alive with translucent plastic parts, namely the bezel around the display and the USB-C Expansion Modules. I’ll never say no to additional customization options, though I still think that “silver body/lid with colorful bezel/ports” gives the laptop a rougher, unfinished-looking vibe.

Like the other Ryzen Framework Laptops (both 13 and 16), not all of the Ryzen AI board’s four USB-C ports support all the same capabilities, so you’ll want to arrange your ports carefully.

Framework’s recommendations for how to configure the Ryzen AI laptop’s expansion modules. Credit: Framework

Framework publishes a graphic to show you which ports do what; if you’re looking at the laptop from the front, ports 1 and 3 are on the back, and ports 2 and 4 are toward the front. Generally, ports 1 and 3 are the “better” ones, supporting full USB4 speeds instead of USB 3.2 and DisplayPort 2.0 instead of 1.4. But USB-A modules should go in ports 2 or 4 because they’ll consume extra power in bays 1 and 3. All four do support display output, though, which isn’t the case for the Ryzen 7040 Framework board, and all four continue to support USB-C charging.

The situation has improved from the 7040 version of the Framework board, where not all of the ports could do any kind of display output. But it still somewhat complicates the laptop’s customizability story relative to the Intel versions, where any expansion card can go into any port.

I will also say that this iteration of the Framework laptop hasn’t been perfectly stable for me. The problems are intermittent but persistent, despite using the latest BIOS version (3.03 as of this writing) and driver package available from Framework. I had a couple of total-system freezes/crashes, occasional problems waking from sleep, and sporadic rendering glitches in Microsoft Edge. These weren’t problems I’ve had with the other Ryzen AI laptops I’ve used so far or with the Ryzen 7040 version of the Framework 13. They also persisted across two separate clean installs of Windows.

It’s possible/probable that some combination of firmware and driver updates can iron out these problems, and they generally didn’t prevent me from using the laptop the way I wanted to use it, but I thought it was worth mentioning since my experience with new Framework boards has usually been a bit better than this.

Internals and performance

“Ryzen AI” is AMD’s most recent branding update for its high-end laptop chips, but you don’t actually need to care about AI to appreciate the solid CPU and GPU speed upgrades compared to the last-generation Ryzen Framework or older Intel versions of the laptop.

Our Framework Laptop board uses the fastest processor offering: a Ryzen AI 9 HX 370 with four of AMD’s Zen 5 CPU cores, eight of the smaller, more power-efficient Zen 5c cores, and a Radeon 890M integrated GPU with 16 of AMD’s RDNA 3.5 graphics cores.

There are places where the Intel Arc graphics in the Core Ultra 7/Meteor Lake version of the Framework Laptop are still faster than what AMD can offer, though your experience may vary depending on the games or apps you’re trying to use. Generally, our benchmarks show the Arc GPU ahead by a small amount, but it’s not faster across the board.

Relative to other Ryzen AI systems, the Framework Laptop’s graphics performance also suffers somewhat because socketed DDR5 DIMMs don’t run as fast as RAM that’s been soldered to the motherboard. This is one of the trade-offs you’re probably OK with making if you’re looking at a Framework Laptop in the first place, but it’s worth mentioning.

A few actual game benchmarks. Ones with ray-tracing features enabled tend to favor Intel’s Arc GPU, while the Radeon 890M pulls ahead in some other games.

But the new Ryzen chip’s CPU is dramatically faster than Meteor Lake at just about everything, as well as the Ryzen 7 7840U in the older Framework board. This is the fastest the Framework Laptop has ever been, and it’s not particularly close (but if you’re waffling between the Ryzen AI version, the older AMD version that Framework sells for a bit less money, or the Core Ultra 7 version, wait to see the battery life results before you spend any money). Power efficiency has also improved for heavy workloads, as demonstrated by our Handbrake video encoding tests—the Ryzen AI chip used a bit less power under heavy load and took less time to transcode our test video, so it uses quite a bit less energy overall to do the same work.

Power efficiency tests under heavy load using the Handbrake transcoding tool. Test uses CPU for encoding and not hardware-accelerated GPU-assisted encoding.
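For readers who want to reproduce a rough version of that kind of test at home, the sketch below times a CPU-only HandBrakeCLI software encode; our exact presets and power-measurement setup aren’t spelled out here, so treat the file names and encoder settings as placeholders.

```python
# Rough timing harness for a CPU-only HandBrakeCLI encode (file names and
# encoder settings are placeholders, not our exact test parameters).
import subprocess
import time

start = time.monotonic()
subprocess.run(
    ["HandBrakeCLI", "-i", "test-video.mkv", "-o", "out.mp4", "-e", "x264"],
    check=True,
)
elapsed = time.monotonic() - start
print(f"Transcode finished in {elapsed:.1f} seconds")
# Pair the elapsed time with wall-power readings from an external meter to
# estimate the total energy used for the job.
```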

We didn’t run specific performance tests on the Ryzen AI NPU, but it’s worth noting that this is also Framework’s first laptop with a neural processing unit (NPU) fast enough to support the full range of Microsoft’s Copilot+ PC features—this was one of the systems I used to test Microsoft’s near-final version of Windows Recall, for example. Intel’s other Core Ultra 100 chips, all 200-series Core Ultra chips other than the 200V series (codenamed Lunar Lake), and AMD’s Ryzen 7000- and 8000-series processors often include NPUs, but they don’t meet Microsoft’s performance requirements.

The Ryzen AI chips are also the only Copilot+ compatible processors on the market that Framework could have used while maintaining the Laptop’s current level of upgradeability. Qualcomm’s Snapdragon X Elite and Plus chips don’t support external RAM—at least, Qualcomm only lists support for soldered-down LPDDR5X in its product sheets—and Intel’s Core Ultra 200V processors use RAM integrated into the processor package itself. So if any of those features appeal to you, this is the only Framework Laptop you can buy to take advantage of them.

Battery and power

Battery tests. The Ryzen AI 300 doesn’t do great, though it’s similar to the last-gen Ryzen Framework.

When paired with the higher-resolution screen option and Framework’s 61 WHr battery, the Ryzen AI version of the laptop lasted around 8.5 hours in a PCMark Modern Office battery life test with the screen brightness set to a static 200 nits. This is a fair bit lower than the Intel Core Ultra version of the board, and it’s even worse when compared to what a MacBook Air or a more typical PC laptop will give you. But it’s holding roughly even with the older Ryzen version of the Framework board despite being much faster.

You can improve this situation somewhat by opting for the cheaper, lower-resolution screen; we didn’t test it with the Ryzen AI board, and Framework won’t sell you the lower-resolution screen with the higher-end chip. But for upgraders using the older panel, the higher-res screen reduced battery life by between 5 and 15 percent in past testing of older Framework Laptops. The slower Ryzen AI 5 and Ryzen AI 7 versions will also likely last a little longer, though Framework usually only sends us the highest-end versions of its boards to test.

A routine update

This combo screwdriver-and-spudger is still the only tool you need to take a Framework Laptop apart. Credit: Andrew Cunningham

It’s weird that my two favorite laptops right now are probably Apple’s MacBook Air and the Framework Laptop 13, but that’s where I am. They represent opposite visions of computing, each of which appeals to a different part of my brain: The MacBook Air is the personal computer at its most appliance-like, the thing you buy (or recommend) if you just don’t want to think about your computer that much. Framework embraces a more traditionally PC-like approach, favoring open standards and interoperable parts; the result is more complicated and chaotic but also more flexible. It’s the thing you buy when you like thinking about your computer.

Framework Laptop buyers continue to pay a price for getting a more repairable and modular laptop. Battery life remains OK at best, and Framework doesn’t seem to have substantially sped up its firmware or driver releases since we talked with them about it last summer. You’ll need to be comfortable taking things apart, and you’ll need to make sure you put the right expansion modules in the right bays. And you may end up paying more than you would to get the same specs from a different laptop manufacturer.

But what you get in return still feels kind of magical, and all the more so because Framework has now been shipping product for four years. The Ryzen AI version of the laptop is probably the one I’d recommend if you were buying a new one, and it’s also a huge leap forward for anyone who bought into the first-generation Framework Laptop a few years ago and is ready for an upgrade. It’s by far the fastest CPU (and, depending on the app, the fastest or second-fastest GPU) Framework has shipped in the Laptop 13. And it’s nice to at least have the option of using Copilot+ features, even if you’re not actually interested in the ones Microsoft is currently offering.

If none of the other Framework Laptops have interested you yet, this one probably won’t, either. But it’s yet another improvement in what has become a steady, consistent sequence of improvements. Mediocre battery life is hard to excuse in a laptop, but if that’s not what’s most important to you, Framework is still offering something laudable and unique.

The good

  • Framework still gets all of the basics right—a matte 3:2 LCD that’s pleasant to look at, a nice-feeling keyboard and trackpad, and a design
  • Fastest CPU ever in the Framework Laptop 13, and the fastest or second-fastest integrated GPU
  • First Framework Laptop to support Copilot+ features in Windows, if those appeal to you at all
  • Fun translucent customization options
  • Modular, upgradeable, and repairable—more so than with most laptops, you’re buying a laptop that can change along with your needs and which will be easy to refurbish or hand down to someone else when you’re ready to replace it
  • Official support for both Windows and Linux

The bad

  • Occasional glitchiness that may or may not be fixed with future firmware or driver updates
  • Some expansion modules are slower or have higher power draw if you put them in the wrong place
  • Costs more than similarly specced laptops from other OEMs
  • Still lacks certain display features some users might require or prefer—in particular, there are no OLED, touchscreen, or wide-color-gamut options

The ugly

  • Battery life remains an enduring weak point.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Review: Ryzen AI CPU makes this the fastest the Framework Laptop 13 has ever been Read More »

bicycle-bling:-all-the-accessories-you’ll-need-for-your-new-e-bike

Bicycle bling: All the accessories you’ll need for your new e-bike


To accompany our cargo bike shopper’s guide, here’s the other stuff you’ll want.

Credit: LueratSatichob/Getty Images

If you’ve read our cargo e-bike shopper’s guide, you may be well on your way to owning a new ride. Now comes the fun part.

Part of the joy of diving into a new hobby is researching and acquiring the necessary (and less-than-necessary) stuff. And cycling (or, for the casual or transportation-first rider, “riding bikes”) is no different—there are hundreds of ways to stock up on talismanic, Internet-cool parts and accessories that you may or may not need.

That’s not necessarily a bad thing! And you can even get creative—PC case LEDs serve the same function as a very specific Japanese reflective triangle that hangs from your saddle. But let’s start with the strictly necessary.

This article is aimed at the fully beginner cyclist, but I invite the experienced cyclists among us to fill the comments with anything I’ve missed. If this is your first run at owning a bike that gets ridden frequently, the below is a good starting point to keep you (and your cargo) safe—and your bike running.

First things first: Safety stuff

Helmets

I once was asked by another cargo bike dad, “Are people wearing helmets on these? Is that uncool?”

“You’re already riding the uncoolest bike on earth—buy a helmet,” I told him.

For the most part, any helmet you pick up at a big box store or your local bike shop will do a perfectly fine job keeping your brains inside your skull. Even so, the goodly nerds over at Virginia Tech have partnered with the Insurance Institute for Highway Safety (IIHS) to rate 238 bike helmets using the STAR evaluation system. Sort by your use case and find something within your budget, but I’ve found that something in the $70–$100 range is more than adequate—any less and you’re sacrificing comfort, and any more and you won’t notice the difference. Save your cash.

Giro, Bell, Smith, POC, and Kask are all reputable brands with a wide range of shapes to fit bulbous and diminutive noggins alike.

Additionally, helmets are not “buy it for life” items—manufacturers recommend replacing them every four to five years because the foam and glues degrade with sun exposure. So there’s a built-in upgrade cycle on that one.

Lights

Many cargo e-bikes come with front and rear lights prewired into the electric system. If you opted for an acoustic bike, you’ll want to get some high-lumen visibility from dedicated bike lights (extra bike nerd bonus points for a dynamo system). Front and rear lights can be as cheap as you need or as expensive as you want. Depending on the brands your local bike shop carries, you will find attractive options from Bontrager, Lezyne, and Knog. Just make sure whatever you’re buying is USB-rechargeable and has the appropriate mounts to fit your bike.

Additionally, you can go full Fast and the Furious and get nuts with cheap, adhesive-backed LEDs for fun and safety. I’ve seen light masts on the back of longtails, and I have my Long John blinged out with LEDs that pulse to music. This is 82 percent for the enjoyment of other bike parents.

A minimalist’s mobile toolkit

You will inevitably blow a tire on the side of the road, or something will rattle loose while your kid is screaming at you. With this in mind, I always have an everything-I-need kit in a zip-top bag in my work backpack. Some version of this assemblage lives on every bike I own in its own seat bag, but on my cargo bike, it’s split between the pockets of the atrociously expensive but very well thought-out Fahrer Panel Bags. This kit includes:

A pocket pump

Lezyne is a ubiquitous name in bike accessories, and for good reason. I’ve had the previous version of their Pocket Drive mini pump for the better part of a decade, and it shows no sign of stopping. What sets this pump apart is the retractable, reversible hose that connects to your air valve, providing some necessary flexibility as you angrily pump up a tire on the side of the road. I don’t mess with CO2 canisters because I’ve had too many inflators explode due to user error, and they’re not recommended for tubeless systems, which are starting to be far more common.

If you spend any amount of time on bike Instagram and YouTube, you’ve seen pocketable USB-rechargeable air compressors made to replace manual pumps. We haven’t tested any of the most common models yet, but these could be a solid solution if your budget outweighs your desire to be stuck on the side of the road.

The Pocket Drive HV Pump from Lezyne.

A multi-tool

Depending on the style and vintage of your ride, you’ll have at least two to three different-sized bolts or connectors throughout the frame. If you have thru-axle wheels, you may need a 6 mm hex key to remove them in the event of a flat. Crank Brothers makes what I consider to be the most handsome, no-nonsense multi-tools on the market. They have tools in multiple configurations, allowing you to select the sizes that best apply to your gear—no more, no less.

The M20 minitool from Crank Brothers. Credit: Crankbrothers

Tube + patch kit

As long as you’re not counting grams, the brand of bike tube you use does not matter. Make sure it’s the right size for your wheel and tire combo and that it has the correct inflator valve (there are two styles: Presta and Schrader, with the former being more popular for bikes you’d buy at your local shop). Just go into your local bike shop and buy a bunch and keep them for when you need ’em.

The Park Tool patch kit has vulcanization glue included (I’d recommend avoiding sticker-style patches)—they’re great and cheap, and there’s no excuse for excluding them from your kit. Park Tool makes some really nice bike-specific tools, and they produce This Old House-quality bike repair tutorials hosted by the GOAT Calvin Jones. In the event of a single failure, many riders find it sensible to simply swap the tube and save the patching for when they’re back at their workbench.

With that said, because of their weight and potentially complicated drivetrains, it can be a bit of a pain to get wheels out of a cargo bike to change a tire, so it’s best to practice at home.

A big lock

If you’re regularly locking up outside an office or running errands, you’re going to need to buy (and learn to appropriately use) a lock to protect your investment. I’ve been a happy owner of a few Kryptonite U-Locks over the years, but even these beefy bois are easily defeated by a cordless angle grinder and a few minutes of effort. These days, there are u-locks from Abus, Hiplok, and LiteLok with grinder-resistant coatings that are eye-wateringly expensive, but if your bike costs as much as half of a used Honda Civic, they’re absolutely worth it.

Thing retention

Though you may not always carry stuff, it’s a good idea to be prepared for the day when your grocery run gets out of hand. A small bag with a net, small cam straps, and various sizes of bungee cords has saved my bacon more than once. Looking for a fun gift for the bike parent in your life? Overengineered, beautifully finished cam buckles from Austere Manufacturing are the answer.

Tot totage

Depending on whether we’re on an all-day adventure or just running down to school, I have a rotating inventory of stuff that gets thrown into the front of my bike with my daughter, including:

  • An old UE Wonderboom on a carabiner bumping Frozen club remixes
  • A small bag with snacks and water that goes into a netted area ahead of her feet

And even if it’s not particularly cool, I like to pack a camping blanket like a Rumpl. By the time we’re on our way home, she is invariably tired and wants a place to lay her little helmeted head.

Floor pump

When I first started riding, it didn’t occur to me that one should check their tire pressure before every ride. You don’t have to do this if your tires consistently maintain pressure day-to-day, but I’m a big boy, and it behooves me to call this out. That little pump I recommended above? You don’t want to be using that every day. No, you want what’s called a floor pump.

Silca makes several swervy versions ranging from $150 all the way up to $495. With that said, I’ve had the Lezyne Sport Floor Drive for over 10 years, and I can’t imagine not having it for another 20. Mine has a wood handle, which has taken on some patina and lends a more luxurious feel, and most importantly, it’s totally user-serviceable. This spring, I regreased the seals and changed out the o-rings without any special tools—just a quick trip to the plumbing store. I was also able to upgrade the filler chuck to Lezyne’s new right-angle ABS 1.0 chuck.

The Lezyne Sport Floor Drive 3.5.

No matter what floor pump you go for, at the very least, you’ll want to get one with a pressure gauge. Important tip: Do not just fill your tires to the max pressure on the side of the tire. This will make for an uncomfortable ride, and depending on how fancy of a wheelset you have, it could blow the tire right off the rim. Start with around 80 PSI with 700×28 tires on normal city roads and adjust from there. The days of busting your back at 100 PSI are over, gang.
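
If you like turning rules of thumb into something you can tinker with, here’s a minimal sketch of a starting-pressure lookup. Only the 80 PSI for a 700×28 tire comes from the advice above; the other width-to-pressure pairings (and the helper itself) are illustrative assumptions, not manufacturer guidance, so trust your tire’s printed range and your own adjustments first.

```python
# A rough starting-point lookup for tire pressure, not a substitute for your
# tire's printed range. The 28 mm / 80 PSI pairing comes from the advice
# above; the other values are ballpark assumptions for an average-weight
# rider on city roads.
STARTING_PSI = {
    25: 90,   # narrow road tires
    28: 80,   # the example from the text above
    32: 65,
    38: 50,
    47: 35,   # typical fat commuter/cargo tires
}

def starting_pressure(tire_width_mm: int) -> int:
    """Return a rough starting PSI for the closest tire width in the table."""
    closest = min(STARTING_PSI, key=lambda w: abs(w - tire_width_mm))
    return STARTING_PSI[closest]

if __name__ == "__main__":
    for width in (28, 35, 50):
        print(f"{width} mm tire: start around {starting_pressure(width)} PSI and adjust")
```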

Hex wrenches

Even if you don’t plan on wrenching on your own bike, it’s handy to have the right tools for making minor fit adjustments and removing your wheels to fix flats. The most commonly used bolts on bikes are metric hex bolts, with Torx bolts used on high-end gear and some small components. A set of Bondhus ball-end Allen wrenches will handle 99 percent of what you need, though fancy German tool manufacturer Wera makes some legitimately drool-worthy wrenches.

If you have blessed your bike with carbon bits (or just want the peace of mind that you’ve cranked down those bolts to the appropriate spec), you may want to pick up a torque wrench. They come in a few flavors geared toward the low-torque specs of bikes, in ascending order of price and user-friendliness: beam-type, adjustable torque drivers, and ratcheting click wrenches. All should be calibrated at some point, but each comes with its own pros and cons.

Keep in mind that overtightening is just as bad as undertightening because you can crack the component or shear the bolt head off. It happens to the best of us! (Usually after having said, “I don’t feel like grabbing the torque wrench” and just making the clicking sound with your mouth).

Lube

Keeping your chain (fairly) clean and (appropriately) lubricated will extend its life and prolong the life of the rest of your drivetrain. You’ll need to replace the chain once it becomes too worn out, and then every second chain, you’ll want to replace your cassette (the gears). Depending on how well you’ve cared for it, how wet your surroundings are, and how often you’re riding, an 11-speed chain can last anywhere from 1,000 to 1,500 miles, but your mileage may vary.

You can get the max mileage out of your drivetrain by periodically wiping down your chain with an old T-shirt or microfiber towel and reapplying chain lube every 200–300 miles, or counterintuitively, more frequently if you ride less frequently. Your local shop can recommend the lube that best suits your climate and riding environment, but I’m a big fan of Rock’n’Roll Extreme chain lube for my more-or-less dry Northern California rides. The best advice I’ve gotten is that it doesn’t matter what chain lube you use as long as it’s on the chain.
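
To turn those rough figures into a schedule you can actually remember, here’s a back-of-the-envelope sketch. The 250-mile lube interval and 1,250-mile chain life are just midpoints of the ranges above, and the 60-miles-per-week example is an arbitrary assumption; real numbers depend on weather, terrain, and how clean you keep the drivetrain.

```python
# Back-of-the-envelope drivetrain maintenance schedule using the rough
# figures above: lube every ~250 miles, replace the chain every ~1,250 miles,
# and replace the cassette with every second chain.
LUBE_INTERVAL_MILES = 250
CHAIN_LIFE_MILES = 1250
CHAINS_PER_CASSETTE = 2

def maintenance_outlook(miles_per_week: float) -> dict:
    return {
        "weeks_between_lubes": LUBE_INTERVAL_MILES / miles_per_week,
        "weeks_per_chain": CHAIN_LIFE_MILES / miles_per_week,
        "weeks_per_cassette": CHAIN_LIFE_MILES * CHAINS_PER_CASSETTE / miles_per_week,
    }

if __name__ == "__main__":
    # Example: a 60-mile riding week (an assumption, not a recommendation)
    for label, value in maintenance_outlook(miles_per_week=60).items():
        print(f"{label}: about {value:.0f}")
```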

Also, do not use WD-40. That is not a lubricant.

That’s it! There may be a few more items you’ll want to add over time, but this list should give you a great start. Get out there and get riding—and enjoy the hours of further research this article has inevitably prompted.

Bicycle bling: All the accessories you’ll need for your new e-bike Read More »

resist,-eggheads!-universities-are-not-as-weak-as-they-have-chosen-to-be.

Resist, eggheads! Universities are not as weak as they have chosen to be.

The wholesale American cannibalism of one of its own crucial appendages—the world-famous university system—has begun in earnest. The campaign is predictably Trumpian, built on a flagrantly pretextual basis and executed with the sort of vicious but chaotic idiocy that has always been a hallmark of the authoritarian mind.

At a moment when the administration is systematically waging war on diversity initiatives of every kind, it has simultaneously discovered that it is really concerned about both “viewpoint diversity” and “antisemitism” on college campuses—and it is using the two issues as a club to beat on the US university system until it either dies or conforms to MAGA ideology.

Reaching this conclusion does not require reading any tea leaves or consulting any oracles; one need only listen to people like Vice President JD Vance, who in 2021 gave a speech called “The Universities are the Enemy” to signal that, like every authoritarian revolutionary, he intended to go after the educated.

“If any of us want to do the things that we want to do for our country,” Vance said, “and for the people who live in it, we have to honestly and aggressively attack the universities in this country.” Or, as conservative activist Christopher Rufo put it in a New York Times piece exploring the attack campaign, “We want to set them back a generation or two.”

The goal is capitulation or destruction. And “destruction” is not a hyperbolic term; some Trump aides have, according to the same piece, “spoken privately of toppling a high-profile university to signal their seriousness.”

Consider, in just a few months, how many battles have been launched:

  • The Trump administration is now snatching non-citizen university students, even those in the country legally, off the streets using plainclothes units and attempting to deport them based on their speech or beliefs.
  • It has opened investigations of more than 50 universities.
  • It has threatened grants and contracts at, among others, Brown ($510 million), Columbia ($400 million), Cornell ($1 billion), Harvard ($9 billion), Penn ($175 million), and Princeton ($210 million).
  • It has reached a widely criticized deal with Columbia that would force Columbia to change protest and security policies but would also single out one academic department (Middle Eastern, South Asian, and African Studies) for enhanced scrutiny. This deal didn’t even get Columbia its $400 million back; it only paved the way for future “negotiations” about the money. And the Trump administration is potentially considering a consent decree with Columbia, giving it leverage over the school for years to come.
  • It has demanded that Harvard audit every department for “viewpoint diversity,” hiring faculty who meet the administration’s undefined standards.
  • Trump himself has explicitly threatened to revoke Harvard’s tax-exempt nonprofit status after it refused to bow to his demands. And the IRS looks ready to do it.
  • The government has warned that it could choke off all international students—an important diplomatic asset but also a key source of revenue—at any school it likes.
  • Ed Martin—the extremely Trumpy interim US Attorney for Washington, DC—has already notified Georgetown that his office will not hire any of that school’s graduates if the school “continues to teach and utilize DEI.”

What’s next? Project 2025 lays it out for us, envisioning the federal government getting heavily involved in accreditation—thus giving the government another way to bully schools—and privatizing many student loans. Right-wing wonks have already begun to push for “a never-ending compliance review” of elite schools’ admissions practices, one that would see the Harvard admissions office filled with federal monitors scrutinizing every single admissions decision. Trump has also called for “patriotic education” in K–12 schools; expect similar demands of universities, though probably under the rubrics of “viewpoint discrimination” and “diversity.”

Universities may tell themselves that they would never comply with such demands, but a school without accreditation and without access to federal funds, international students, and student loan dollars could have trouble surviving for long.

Some of the top leaders in academia are ringing the alarm bells. Princeton’s president, Christopher Eisgruber, wrote a piece in The Atlantic warning that the Trump administration has already become “the greatest threat to American universities since the Red Scare of the 1950s. Every American should be concerned.”

Lee Bollinger, who served as president of both the University of Michigan and Columbia University, gave a fiery interview to the Chronicle of Higher Education in which he said, “We’re in the midst of an authoritarian takeover of the US government… We cannot get ourselves to see how this is going to unfold in its most frightening versions. You neutralize the branches of government; you neutralize the media; you neutralize universities, and you’re on your way. We’re beginning to see the effects on universities. It’s very, very frightening.”

But for the most part, even though faculty members have complained and even sued, administrators have stayed quiet. They are generally willing to fight for their cash in court—but not so much in the court of public opinion. The thinking is apparently that there is little to be gained by antagonizing a ruthless but also chaotic administration that just might flip the money spigot back on as quickly as it was shut off. (See also: tariff policy.)

This academic silence also comes after many universities course-corrected following years of administrators weighing in on global and political events outside a school’s basic mission. When that practice finally caused problems for institutions, as it did following the Gaza/Israel fighting, numerous schools adopted a posture of “institutional neutrality” and stopped offering statements except on core university concerns. This may be wise policy, but unfortunately, schools are clinging to it even though the current moment could not be more central to their mission.

To critics, the public silence looks a lot like “appeasement”—a word used by our sister publication The New Yorker to describe how “universities have cut previously unthinkable ‘deals’ with the Administration which threaten academic freedom.” As one critic put it recently, “still there is no sign of organized resistance on the part of universities. There is not even a joint statement in defense of academic freedom or an assertion of universities’ value to society.”

Even Michael Roth, the president of Wesleyan University, has said that universities’ current “infatuation with institutional neutrality is just making cowardice into a policy.”

Appeasing narcissistic strongmen bent on “dominance” is a fool’s errand, as is entering a purely defensive crouch. Weakness in such moments is only an invitation to the strongman to dominate you further. You aren’t going to outlast your opponent when the intended goal appears to be not momentary “wins” but the weakening of all cultural forces that might resist the strongman. (See also: Trump’s brazen attacks on major law firms and the courts.)

As an Atlantic article put it recently, “Since taking office, the Trump administration has been working to dismantle the global order and the nation’s core institutions, including its cultural ones, to strip them of their power. The future of the nation’s universities is very much at stake. This is not a challenge that can be met with purely defensive tactics.”

The temperamental caution of university administrators means that some can be poor public advocates for their universities in an age of anger and distrust, and they may have trouble finding a clear voice to speak with when they come under thundering public attacks from a government they are more used to thinking of as a funding source.

But the moment demands nothing less. This is not a breeze; this is the whirlwind. And it will leave a state-dependent, nationalist university system in its wake unless academia arises, feels its own power, and non-violently resists.

Fighting back

Finally, on April 14, something happened: Harvard decided to resist in far more public fashion. The Trump administration had demanded, as a condition of receiving $9 billion in grants over multiple years, that Harvard reduce the power of student and faculty leaders, vet every academic department for undefined “viewpoint diversity,” run plagiarism checks on all faculty, share hiring information with the administration, shut down any program related to diversity or inclusion, and audit particular departments for antisemitism, including the Divinity School. (Numerous Jewish groups want nothing to do with the campaign, writing in an open letter that “our safety as Jews has always been tied to the rule of law, to the safety of others, to the strength of civil society, and to the protection of rights and liberties for all.”)

If you think this sounds a lot like government control, giving the Trump administration the power to dictate hiring and teaching practices, you’re not alone; Harvard president Alan Garber rejected the demands in a letter, saying, “The university will not surrender its independence or relinquish its constitutional rights. Neither Harvard nor any other private university can allow itself to be taken over by the federal government.”

The Trump administration immediately responded by cutting billions in Harvard funding, threatening the university’s tax-exempt status, and claiming it might block international students from attending Harvard.

Perhaps Harvard’s example will provide cover for other universities to make hard choices. And these are hard choices. But Columbia and Harvard have already shown that the only way you have a chance at getting the money back is to sell whatever soul your institution has left.

Given that, why not fight? If you have to suffer, suffer for your deepest values.

Fare forward

“Resistance” does not mean a refusal to change, a digging in, a doubling down. No matter what part of the political spectrum you inhabit, universities—like most human institutions—are “target-rich environments” for complaints. To see this, one has only to read about recent battles over affirmative action, the Western canon, “legacy” admissions, the rise and fall of “theory” in the humanities, Gaza/Palestine protests, the “Varsity Blues” scandal, critiques of “meritocracy,” mandatory faculty “diversity statements,” the staggering rise in tuition costs over the last few decades, student deplatforming of invited speakers, or the fact that so many students from elite institutions cannot imagine a higher calling than management consulting. Even top university officials acknowledge there are problems.

Famed Swiss theologian Karl Barth lost his professorship and was forced to leave Germany in 1935 because he would not bend the knee to Adolf Hitler. He knew something about standing up for one’s academic and spiritual values—and about the importance of not letting any approach to the world ossify into a reactionary, bureaucratic conservatism that punishes all attempts at change or dissent. The struggle for knowledge, truth, and justice requires forward movement even as the world changes, as ideas and policies are tested, and as cultures develop. Barth’s phrase for this was “Ecclesia semper reformanda est”—the church must always be reformed—and it applies just as well to the universities where he spent much of his career.

As universities today face their own watershed moment of resistance, they must still find ways to remain intellectually curious and open to the world. They must continue to change, always imperfectly but without fear. It is important that their resistance not be partisan. Universities can only benefit from broad-based social support, and the idea that they are fighting “against conservatives” or “for Democrats” will be deeply unhelpful. (Just as it would be if universities capitulated to government oversight of their faculty hires or gave in to “patriotic education.”)

This is difficult when one is under attack, as the natural reaction is to defend what currently exists. But the assault on the universities is about deeper issues than admissions policies or the role of elite institutions in American life. It is about the rule of law, freedom of speech, scientific research, and the very independence of the university—things that should be able to attract broad social and judicial support if schools do not retreat into ideology.

Why it matters

Ars Technica was founded by grad students and began with a “faculty model” drawn from universities: find subject matter experts and turn them loose to find interesting stories in their domains of expertise, with minimal oversight and no constant meetings.

From Minnesota Bible colleges to the halls of Harvard, from philosophy majors to chemistry PhDs, from undergrads to post-docs, Ars has employed people from a wide range of schools and disciplines. We’ve been shaped by the university system, and we cover it regularly as a source of scientific research and computer science breakthroughs. While we differ in many ways, we recognize the value of a strong, independent, mission-focused university system that, despite current flaws, remains one of America’s storied achievements. And we hope that universities can collectively find the strength to defend themselves, just as we in the media must learn to do.

The assault on universities and on the knowledge they produce has been disorienting in its swiftness, animus, and savagery. But universities are not starfish, flopping about helplessly on a beach while a cruel child slices off their arms one by one. They can do far more than hope to survive another day, regrowing missing limbs in some remote future. They have real power, here and now. But they need to move quickly, they need to move in solidarity, and they need to use the resources that they have, collectively, assembled.

Because, if they aren’t going to use those resources when their very mission comes under assault, what was the point of gathering them in the first place?

Here are a few of those resources.

Money

Cash is not always the most important force in human affairs, but it doesn’t hurt to have a pile of it when facing off against a feral US government. When the government threatened Harvard with multiyear cuts of $9 billion, for instance, it was certainly easier for the university to resist while sitting on a staggering $53 billion endowment. In 2024, the National Association of College and University Business Officers reported that higher ed institutions in the US collectively have over $800 billion in endowment money.

It’s true that many endowment funds are donor-restricted and often invested in non-liquid assets, making them unavailable for immediate use or to bail out university programs whose funding has been cut. But it’s also true that $800 billion is a lot of money—it’s more than the individual GDP of all but two dozen countries.

No trustee of this sort of legacy wants to squander an institution’s future by spending money recklessly, but what point is there in having a massive endowment if it requires your school to become some sort of state-approved adjunct?

Besides, one might choose not to spend that money now only to find that it is soon requisitioned regardless. People in Trump’s orbit have talked for years about placing big new taxes on endowment revenue as a way of bringing universities to heel. Trump himself recently wrote on social media that Harvard “perhaps” should “lose its Tax Exempt Status and be Taxed as a Political Entity if it keeps pushing political, ideological, and terrorist inspired/supporting ‘Sickness?’ Remember, Tax Exempt Status is totally contingent on acting in the PUBLIC INTEREST!”

So spend wisely, but do spend. This is the kind of moment such resources were accumulated to weather.

Students

Fifteen million students are currently enrolled in higher education across the country. The total US population is 341 million people. That means students comprise over 4 percent of the total population; when you add in faculty and staff, higher education’s total share of the population is even greater.

So what? Political science research over the last three decades looked at nonviolent protest movements and found that they need only 3.5 percent of the population to actively participate. Most movements that hit that threshold succeed, even in authoritarian states. Higher ed alone has those kinds of numbers.

Students are not a monolith, of course, and many would not participate—nor should universities look at their students merely as potential protesters who might serve university interests. But students have been well-known for a willingness to protest, and one of the odd features of the current moment has been that so many students protested the Gaza/Israel conflict even though so few have protested the current government assault on the very schools where they have chosen to spend their time and money. It is hard to say whether both schools and their students are burned out from recent, bruising protests, or whether the will to resist remains.

But if it does, the government assault on higher education could provoke an interesting realignment of forces: students, faculty, and administrators working together for once in resistance and protest, upending the normal dynamics of campus movements. And the numbers exist to make a real national difference if higher ed can rally its own full range of resources.

Institutions

Depending on how you count, the US has around 4,000 colleges and universities. The sheer number and diversity of these institutions is a strength—but only if they can do a better job working together on communications, lobbying, and legal defenses.

Schools are being attacked individually, through targeted threats rather than broad laws targeting all higher education. And because schools are in many ways competitors rather than collaborators, it can be difficult to think in terms of sharing resources or speaking with one voice. But joint action will be essential, given that many smaller schools are already under economic pressure and will have a hard time resisting government demands, surviving the loss of their nonprofit status, or coping with students blocked from the country or cut off from loan money.

Plenty of trade associations and professional societies exist within the world of higher education, of course, but they are often dedicated to specific tasks and lack the public standing and authority to make powerful public statements.

Faculty/alumni

The old stereotype of the out-of-touch, tweed-wearing egghead, spending their life lecturing on the lesser plays of Ben Jonson, is itself out of touch. The modern university is stuffed with lawyers, data scientists, computer scientists, cryptographers, marketing researchers, writers, media professionals, and tech policy mavens. They are a serious asset, though universities sometimes leave faculty members to operate so autonomously that group action is difficult or, at least, institutionally unusual. At a time of crisis, that may need to change.

Faculty are an incredible resource because of what they know, of course. Historians and political scientists can offer context and theory for understanding populist movements and authoritarian regimes. Those specializing in dialogue across difference, or in truth and reconciliation movements, or in peace and conflict studies, can offer larger visions for how even deep social conflicts might be transcended. Communications professors can help universities think more carefully about articulating what they do in the public marketplace of ideas. And when you are on the receiving end of vindictive and pretextual legal activity, it doesn’t hurt to have a law school stuffed with top legal minds.

But faculty power extends beyond facts. Relationships with students, across many years, are a hallmark of the best faculty members. When generations of those students have spread out into government, law, and business, they make a formidable network.

Universities that realize the need to fight back already know this. Ed Martin, the interim US Attorney for the District of Columbia, attacked Georgetown in February and asked if it had “eliminated all DEI from your school and its curriculum?” He ended his “clarification” letter by claiming that “no applicant for our fellows program, our summer internship, or employment in our office who is a student or affiliated with a law school or university that continues to teach and utilize DEI will be considered.”

When Georgetown Dean Bill Treanor replied to Martin, he did not back down, noting Martin’s threat to “deny our students and graduates government employment opportunities until you, as Interim United States Attorney for the District of Columbia, approve of our curriculum.” (Martin himself had managed to omit the “interim” part of his title.) Such a threat would violate “the First Amendment’s protection of a university’s freedom to determine its own curriculum and how to deliver it.”

There was no “negotiating” here, no attempt to placate a bully. Treanor barely addressed Martin’s questions. Instead, he politely but firmly noted that the inquiry itself was illegitimate, even under recent Supreme Court jurisprudence and Trump Department of Education policy. And he tied everything in his response to the university’s mission as a Jesuit school committed to “intellectual, ethical, and spiritual understanding.”

The letter’s final paragraph, in which Treanor told Martin that he expected him to back down from his threats, opened with a discussion of Georgetown’s faculty.

Georgetown Law has one of the preeminent faculties in the country, fostering groundbreaking scholarship, educating students in a wide variety of perspectives, and thriving on the robust exchange of ideas. Georgetown Law faculty have educated world leaders, members of Congress, and Justice Department officials, from diverse backgrounds and perspectives.

Implicit in these remarks are two reminders:

  1. Georgetown is home to many top legal minds who aren’t about to be steamrolled by a January 6 defender whose actions in DC have already been so comically outrageous that Sen. Adam Schiff has placed a hold on his nomination to get the job permanently.
  2. Georgetown faculty have good relationships with many powerful people across the globe who are unlikely to sympathize with some legal hack trying to bully their alma mater.

The letter serves as a good reminder: Resist with firmness and rely on your faculty. Incentivize their work, providing the time and resources to write more popular-level distillations of their research or to educate alumni groups about the threats campuses are facing. Get them into the media and onto lecture hall stages. Tap their expertise for internal working groups. Don’t give in to the caricatures but present a better vision of how faculty contribute to students, to research, and to society.

Real estate

Universities collectively possess a real estate portfolio of land and buildings—including lecture halls, stages, dining facilities, stadiums, and dormitories—that would make even a developer like Donald Trump salivate. It’s an incredible resource that is already well-used but might be put toward purposes that meet the moment even more clearly.

Host more talks, not just on narrow specialty topics, but on the kinds of broad-based political debates that a healthy society needs. Make the universities essential places for debate, discussion, and civic organizing. Encourage more campus conferences in the summer, with vastly reduced rates for groups that effectively aid civic engagement, depolarization, and dialogue across political differences. Provide the physical infrastructure for fruitful cross-party political encounters and anti-authoritarian organizing. Use campuses to house regional and national hubs that develop best practices in messaging, legal tactics, local outreach, and community service from students, faculty, and administrators.

Universities do these things, of course; many are filled with “dialogue centers” and civic engagement offices. But many of these resources exist primarily for students, while rebuilding broader social confidence will require reaching well beyond campus. Such centers can also be siloed off from the other doings of the university: if “dialogue” is taken care of at the “dialogue center,” then other departments and administrative units may not feel the need to worry about it. But with something as broad and important as “resistance,” the work cannot be confined to particular units.

With so many different resources, from university presses to libraries to lecture halls, academia can do a better job at making its campuses useful both to students and to the surrounding community—so long as the universities know their own missions and make sure their actions align with them.

Athletics

During times of external stress, universities need to operate more than ever out of their core, mission-driven values. While educating the whole person, mentally and physically, is a worthy goal, it is not one that requires universities to submit to a Two Minutes Hate while simultaneously providing mass entertainment and betting material for the gambling-industrial complex.

When up against a state that seeks “leverage” of every kind over the university sector, realize that academia itself controls some of the most popular sports competitions in America. That, too, is leverage, if one knows how to use it.

Such leverage could, of course, be Trumpian in its own bluntness—no March Madness tournament, for instance, so long as thousands of researchers are losing their jobs and health care networks are decimated and the government is insisting on ideological control over hiring and department makeup. (That would certainly be interesting—though quite possibly counterproductive.)

But universities might use their control of NCAA sporting events to better market themselves and their impact—and to highlight what’s really happening to them. Instead, we continue to get the worst kinds of anodyne spots during football and basketball games: frisbee on the quad, inspiring shots of domes and flags, a professor lecturing in front of a chalkboard.

Be creative! But do something. Saying and doing nothing, letting the games go on without comment as the boot heel comes down on the whole sector, is a complete abdication of mission and responsibility.

DOD and cyber research

The Trump administration seems to believe that it has the only thing people want: grant funding. It seems not even to care if broader science funding in the US simply evaporates, if labs close down, or if the US loses its world-beating research edge.

But even if “science” is currently expendable, the US government itself relies heavily on university researchers to produce innovations required by the Department of Defense and the intelligence community. Cryptography, cybersecurity tools, the AI that could power battlefield drone swarms—much of it is produced by universities under contract with the feds. And there’s no simple, short-term way for the government to replace this system.

Even other countries believe that US universities do valuable cyber work for the federal government; China just accused the University of California and Virginia Tech of aiding in an alleged cyberattack by the NSA, for instance.

That gives the larger universities—the ones that often have these contracts—additional leverage. They should find a way to use it.

Medical facilities

Many of the larger universities run sprawling and sophisticated health networks that serve whole communities and regions; indeed, much of the $9 billion in federal money at issue in the Harvard case was going to Harvard’s medical system of labs and hospitals.

If it seems unthinkable to you that the US government would treat the health of its own people as collateral damage in a war to become the Thought Police, remember that this is the same administration that has already tried to stop funds to the state of Maine—funds used to “feed children and disabled adults in schools and care settings across the state”—just because Maine allowed a couple of transgender kids to play on sports teams. What does the one have to do with the other? Nothing—except that the money provides leverage.

But health systems are not simply weapons for the Trump administration to use by refusing or delaying contracts, grants, and reimbursements. Health systems can improve people’s lives in the most tangible of ways. And that means they ought to be shining examples of community support and backing, providing a perfect opportunity to highlight the many good things that universities do for society.

Now, to the extent that these health care systems in the US have suffered from the general flaws of all US health care—lack of universal coverage leading to medical debt and the overuse of emergency rooms by the indigent, huge salaries commanded by doctors, etc.—the Trump war on these systems and on the universities behind them might provide a useful wake-up call from “business as usual.” Universities might use this time to double down on mission-driven values, using these incredible facilities even more to extend care, to lower barriers, and to promote truly public and community health. What better chance to show one’s city, region, and state the value of a university than massively boosting free and easy access to mental and physical health resources? Science research can be esoteric; saving someone’s body or mind is not.

Conclusion

This moment calls out for moral clarity and resolve. It asks universities to take their mission in society seriously and to resist being co-opted by government forces.

But it asks something of all of us, too. University leaders will make their choices, but to stand strong, they need the assistance of students, faculty, and alumni. In an age of polarization, parts of society have grown skeptical about the value of higher education. Some of these people are your friends, family, and neighbors. Universities must continue to make changes as they seek to build knowledge and justice and community, but those of us no longer within their halls and quads also have a part to play in sharing a more nuanced story about the value of the university system, both to our own lives and to the country.

If we don’t, our own degrees may be from institutions that have become almost unrecognizable.

Resist, eggheads! Universities are not as weak as they have chosen to be. Read More »

diablo-vs.-darkest-dungeon:-rpg-devs-on-balancing-punishment-and-power

Diablo vs. Darkest Dungeon: RPG devs on balancing punishment and power

For Sigman and the Darkest Dungeon team, it was important to establish an overarching design philosophy from the outset. That said, the details within that framework may change or evolve significantly during development.

“In this age of early access and easily updatable games, balance is a living thing,” Sigman said. “It’s highly iterative throughout the game’s public life. We will update balance based upon community feedback, analytics, evolving metas, and also reflections on our own design philosophies and approaches.”

In Darkest Dungeon II, a group of adventurers sits at a table, exhausted

A screen for managing inventory and more in Darkest Dungeon II. Credit: Red Hook Studios

The problem, of course, is that every change to an existing game is a double-edged sword. With each update, you risk breaking the very elements you’re trying to fix.

Speaking to that ongoing balancing act, Sigman admits, “It’s not without its challenges. We’ve found that many players eagerly await such updates, but a subset gets really angry when developers change balance elements.”

Getting one of your favorite heroes or abilities nerfed can absolutely sink a game or destroy a strategy you’ve relied on for success. The team relies on a number of strictly mathematical tools to help isolate and solve balance problems, but on some level, it’s an artistic and philosophical question.

“A good example is how to address ‘exploits’ in a game,” Sigman said. “Some games try to hurriedly stamp out all possible exploits. With a single-player game, I think you have more leeway to let some exploits stand. It’s nice to let players get away with some stuff. If you kick sand over every exploit that appears, you remove some of the fun.”

As with so many aspects of game design, perfecting the balance between adversity and empowerment comes down to a simple question.

“One amazing piece of wisdom from Sid Meier, my personal favorite designer, is to remember to ask yourself, ‘Who is having the fun here? The designer or the player?’ It should be the player,” Sigman told us.

It’s the kind of approach that players love to hear. Even if a decision is made to make a game more difficult, particularly in an existing game, it should be done to make the play experience more enjoyable. If devs seem to be making balance changes just to scale down players’ power, it can start to feel like you’re being punished for having fun.

The fine balance between power and challenge is a hard one to strike, but what players ultimately want is to have a good time. Sometimes that means feeling like a world-destroying demigod, and sometimes it means squeaking through a bloody boss encounter with a single hit point. Most often, though, you’re looking for a happy medium: a worthy challenge overcome through power and skill.

Diablo vs. Darkest Dungeon: RPG devs on balancing punishment and power Read More »

looking-at-the-universe’s-dark-ages-from-the-far-side-of-the-moon

Looking at the Universe’s dark ages from the far side of the Moon


meet you in the dark side of the moon

Building an observatory on the Moon would be a huge challenge—but it would be worth it.

A composition of the moon with the cosmos radiating behind it

Credit: Aurich Lawson | Getty Images

There is a signal, born in the earliest days of the cosmos. It’s weak. It’s faint. It can barely register on even the most sensitive of instruments. But it contains a wealth of information about the formation of the first stars, the first galaxies, and the mysteries of the origins of the largest structures in the Universe.

Despite decades of searching for this signal, astronomers have yet to find it. The problem is that our Earth is too noisy, making it nearly impossible to capture this whisper. The solution is to go to the far side of the Moon, using its bulk to shield our sensitive instruments from the cacophony of our planet.

Building telescopes on the far side of the Moon would be the greatest astronomical challenge ever considered by humanity. And it would be worth it.

The science

We have been scanning and mapping the wider cosmos for a century now, ever since Edwin Hubble discovered that the Andromeda “nebula” is actually a galaxy sitting 2.5 million light-years away. Our powerful Earth-based observatories have successfully mapped the detailed locations of millions of galaxies, and upcoming observatories like the Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope will map millions more.

And for all that effort, all that technological might and scientific progress, we have surveyed less than 1 percent of the volume of the observable cosmos.

The vast bulk of the Universe will remain forever unobservable to traditional telescopes. The reason is twofold. First, most galaxies will simply be too dim and too far away. Even the James Webb Space Telescope, which is explicitly designed to observe the first generation of galaxies, has such a limited field of view that it can only capture a handful of targets at a time.

Second, there was a time, within the first few hundred million years after the Big Bang, before stars and galaxies had even formed. Dubbed the “cosmic dark ages,” this time naturally makes for a challenging astronomical target because there weren’t exactly a lot of bright sources to generate light for us to look at.

But there was neutral hydrogen. Most of the Universe is made of hydrogen, making it the most common element in the cosmos. Today, almost all of that hydrogen is ionized, existing in a super-heated plasma state. But before the first stars and galaxies appeared, the cosmic reserves of hydrogen were cool and neutral.

Neutral hydrogen is made of a single proton and a single electron. Each of these particles has a quantum property known as spin (which kind of resembles the familiar, macroscopic property of spin, but it’s not quite the same—though that’s a different article). In its lowest-energy state, the proton and electron will have spins oriented in opposite directions. But sometimes, through pure random quantum chance, the electron will spontaneously flip around. Eventually, the atom relaxes and the electron flips back to where it belongs. This process releases a small amount of energy in the form of a photon with a wavelength of 21 centimeters.

This quantum transition is exceedingly rare, but with enough neutral hydrogen, you can build a substantial signal. Indeed, observations of 21-cm radiation have been used extensively in astronomy, especially to build maps of cold gas reservoirs within the Milky Way.

So the cosmic dark ages aren’t entirely dark; those clouds of primordial neutral hydrogen are emitting tremendous amounts of 21-cm radiation. But that radiation was emitted in the distant past, well over 13 billion years ago. As it has traveled through the cosmic distances, all those billions of light-years on its way to our eager telescopes, it has experienced the redshift effects of our expanding Universe.

By the time that dark age 21-cm radiation reaches us, it has stretched by a factor of 10, turning the neutral hydrogen signal into radio waves with wavelengths of around 2 meters.
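
For readers who like to see the arithmetic, the stretching described above is the standard cosmological redshift relation: the observed wavelength is (1 + z) times the emitted one, and the frequency shrinks by the same factor. The minimal sketch below runs the numbers for a few redshifts, with z = 9 corresponding to the factor-of-10 stretch mentioned above; the other z values are chosen purely for illustration, not as a claim about exactly where the dark-age signal peaks.

```python
# Minimal sketch of how cosmological redshift stretches the 21 cm line:
# lambda_observed = (1 + z) * lambda_emitted, and frequency scales inversely.
C_M_PER_S = 299_792_458
LAMBDA_REST_M = 0.211                            # 21.1 cm hyperfine line
NU_REST_MHZ = C_M_PER_S / LAMBDA_REST_M / 1e6    # ~1420 MHz

for z in (9, 17, 30):                            # illustrative redshifts only
    lam_obs = LAMBDA_REST_M * (1 + z)
    nu_obs = NU_REST_MHZ / (1 + z)
    print(f"z = {z:2d}: wavelength {lam_obs:5.2f} m, frequency {nu_obs:6.1f} MHz")
```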

The astronomy

Humans have become rather fond of radio transmissions in the past century. Unfortunately, the peak of this primordial signal from the dark ages sits right below the FM dial of your radio, which pretty much makes it impossible to detect from Earth. Our emissions are simply too loud, too noisy, and too difficult to remove. Teams of astronomers have devised clever ways to reduce or eliminate interference, featuring arrays scattered around the most desolate deserts in the world, but they have not been able to confirm the detection of a signal.

So those astronomers have turned in desperation to the quietest desert they can think of: the far side of the Moon.

It wasn’t until 1959 that the Soviet Luna 3 probe gave us our first glimpse of the Moon’s far side, and it wasn’t until 2019 that the Chang’e 4 mission made the first soft landing. Compared to the near side, and especially low-Earth orbit, there is very little human activity there. We’ve had more active missions on the surface of Mars than on the lunar far side.

Chang’e-4 landing zone on the far side of the moon. Credit: Xiao Xiao and others (CC BY 4.0)

And that makes the far side of the Moon the ideal location for a dark-age-hunting radio telescope, free from human interference and noise.

Ideas abound to make this a possibility. The first serious attempt was DARE, the Dark Ages Radio Explorer. Rather than attempting the audacious goal of building an actual telescope on the surface, DARE was a NASA-funded concept to develop an observatory (and when it comes to radio astronomy, an “observatory” can be as simple as a single antenna) to orbit the Moon and take data when it’s on the opposite side of the Moon from the Earth.

For various bureaucratic reasons, NASA didn’t develop the DARE concept further. But creative astronomers have put forward even bolder proposals.

The FarView concept, for example, is a proposed radio telescope array that would dwarf anything on the Earth. It would be sensitive to frequency ranges between 5 and 40 MHz, allowing it to target the dark ages and the birth of the first stars. The proposed design contains 100,000 individual elements, with each element consisting of a single, simple dipole antenna, dispersed over a staggering 200 square kilometers. It would be infeasible to deliver that many antennae directly to the surface of the Moon. Instead, we’d have to build them, mining lunar regolith and turning it into the necessary components.

The design of this array is what’s called an interferometer. Instead of a single big dish, the individual antennae collect data on their own and then correlate all their signals together later. The effective resolution of an interferometer is the same as that of a single dish as wide as the largest separation between its elements. The downside of an interferometer is that most of the incoming radiation just hits dirt (or in this case, lunar regolith), so the interferometer has to collect a lot of data to build up a decent signal.
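
To put rough numbers on that rule of thumb: an interferometer’s angular resolution is approximately the observing wavelength divided by its longest baseline. The sketch below applies that relation to FarView’s quoted 5–40 MHz range using an assumed 15 km maximum baseline, a figure loosely inferred from the 200-square-kilometer footprint rather than anything in the proposal itself.

```python
# Rough angular resolution of an interferometer: theta ~ lambda / B_max
# (in radians), where B_max is the longest baseline. The 15 km baseline is
# an assumption, not a published FarView specification.
import math

C_M_PER_S = 299_792_458
B_MAX_M = 15_000  # assumed longest baseline

for freq_mhz in (5, 40):
    wavelength_m = C_M_PER_S / (freq_mhz * 1e6)
    theta_rad = wavelength_m / B_MAX_M
    theta_arcmin = math.degrees(theta_rad) * 60
    print(f"{freq_mhz} MHz (lambda = {wavelength_m:.1f} m): ~{theta_arcmin:.0f} arcminutes")
```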

Attempting these kinds of observations on the Earth requires constant maintenance and cleaning to remove radio interference, which has essentially sunk all attempts to measure the dark ages so far. But a lunar-based interferometer will have all the quiet time it needs, providing a much cleaner and easier-to-analyze stream of data.

If you’re not in the mood for building 100,000 antennae on the Moon’s surface, then another proposal seeks to use the Moon’s natural features—namely, its craters. If you squint hard enough, they kind of look like radio dishes already. The idea behind the project, named the Lunar Crater Radio Telescope, is to find a suitable crater and use it as the support structure for a gigantic, kilometer-wide telescope.

This idea isn’t without precedent. Both the beloved Arecibo and the newcomer FAST observatories used depressions in the natural landscape of Puerto Rico and China, respectively, to take most of the structural load off the engineering of their giant dishes. The Lunar Crater Radio Telescope would be larger than both of those combined, and it would be tuned to hunt for dark ages radio signals that we can’t observe using Earth-based observatories because they simply bounce off the Earth’s ionosphere (even before we have to worry about any additional human interference). Essentially, the only way that humanity can access those wavelengths is by going beyond our ionosphere, and the far side of the Moon is the best place to park an observatory.

The engineering

The engineering challenges we need to overcome to achieve these scientific dreams are not small. So far, humanity has only placed a single soft-landed mission on the distant side of the Moon, and both of these proposals require an immense upgrade to our capabilities. That’s exactly why both far-side concepts were funded by NIAC, NASA’s Innovative Advanced Concepts program, which gives grants to researchers who need time to flesh out high-risk, high-reward ideas.

With NIAC funds, the designers of the Lunar Crater Radio Telescope, led by Saptarshi Bandyopadhyay at the Jet Propulsion Laboratory, have already thought through the challenges they will need to overcome to make the mission a success. Their mission leans heavily on another JPL concept, the DuAxel, which consists of a rover that can split into two single-axle rovers connected by a tether.

To build the telescope, several DuAxels are sent to the crater. One of each pair “sits” to anchor itself on the crater wall, while the other crawls down the slope. At the center, they meet a telescope lander that has deployed guide wires and the wire mesh frame of the telescope (again, it helps for assembly purposes that radio dishes are just strings of metal in various arrangements). The pairs on the crater rim then hoist their companions back up, unfolding the mesh and lofting the receiver above the dish.

The FarView observatory is a much more capable instrument—if deployed, it would be the largest radio interferometer ever built—but it’s also much more challenging. Led by Ronald Polidan of Lunar Resources, Inc., it relies on in-situ manufacturing processes. Autonomous vehicles would dig up regolith, process and refine it, and spit out all the components that make an interferometer work: the 100,000 individual antennae, the kilometers of cabling to run among them, the solar arrays to power everything during lunar daylight, and batteries to store energy for round-the-lunar-clock observing.

If that sounds intense, it’s because it is, and it doesn’t stop there. An astronomical telescope is more than a data collection device. It also needs to crunch some numbers and get that precious information back to a human to actually study it. That means that any kind of far side observing platform, especially the kinds that will ingest truly massive amounts of data such as these proposals, would need to make one of two choices.

Choice one is to perform most of the data correlation and processing on the lunar surface, sending back only highly refined products to Earth for further analysis. Achieving that would require landing, installing, and running what is essentially a supercomputer on the Moon, which comes with its own weight, robustness, and power requirements.

The other choice is to keep the installation as lightweight as possible and send the raw data back to Earthbound machines to handle the bulk of the processing and analysis tasks. This kind of data throughput is outright impossible with current technology but could be achieved with experimental laser-based communication strategies.
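
A back-of-the-envelope estimate shows why raw downlink is so daunting. The sketch below assumes each of the 100,000 elements digitizes a roughly 35 MHz band at the Nyquist rate with 4 bits per sample; those particular numbers are my assumptions rather than FarView specifications, but any similar choice lands in the tens of terabits per second.

```python
# Back-of-the-envelope raw data rate for a 100,000-element array. The
# bandwidth, bit depth, and Nyquist sampling below are illustrative
# assumptions, not numbers from the FarView proposal.
N_ANTENNAS = 100_000
BANDWIDTH_HZ = 35e6                   # covering roughly 5-40 MHz
SAMPLES_PER_SEC = 2 * BANDWIDTH_HZ    # Nyquist rate
BITS_PER_SAMPLE = 4

bits_per_second = N_ANTENNAS * SAMPLES_PER_SEC * BITS_PER_SAMPLE
print(f"Raw output: ~{bits_per_second / 1e12:.0f} terabits per second")
```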

The future

Astronomical observatories on the far side of the Moon face a bit of a catch-22. To deploy and run a world-class facility, either embedded in a crater or strung out over the landscape, we need some serious lunar manufacturing capabilities. But those same capabilities come with all the annoying radio fuzz that already bedevils Earth-based radio astronomy.

Perhaps the best solution is to open up the Moon to commercial exploitation but maintain the far side as a sort of out-world nature preserve, owned by no company or nation, left to scientists to study and use as a platform for pristine observations of all kinds.

It will take humanity several generations, if not more, to develop the capabilities needed to finally build far-side observatories. But it will be worth it, as those facilities will open up the unseen Universe for our hungry eyes, allowing us to pierce the ancient fog of our Universe’s past, revealing the machinations of hydrogen in the dark ages, the birth of the first stars, and the emergence of the first galaxies. It will be a fountain of cosmological and astrophysical data, the richest possible source of information about the history of the Universe.

From Galileo grinding and polishing his first lenses to the innovations that led to the explosion of digital cameras, astronomy has a storied tradition of turning the technological triumphs needed to achieve its science goals into the foundations of everyday devices that make life on Earth much better. If we’re looking for reasons to industrialize and inhabit the Moon, the noble goal of pursuing a better understanding of the Universe makes for a fine motivation. And we’ll all be better off for it.


an-ars-technica-history-of-the-internet,-part-1

An Ars Technica history of the Internet, part 1


Intergalactic Computer Network

In our new 3-part series, we remember the people and ideas that made the Internet.

A collage of vintage computer elements. Credit: Collage by Aurich Lawson

In a very real sense, the Internet, this marvelous worldwide digital communications network that you’re using right now, was created because one man was annoyed at having too many computer terminals in his office.

The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency’s Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik. So Taylor was in the Pentagon, a great place for acronyms like ARPA and IPTO. He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.

Author’s re-creation of Bob Taylor’s office with three teletypes. Credit: Rama & Musée Bolo (Wikipedia/Creative Commons), steve lodefink (Wikipedia/Creative Commons), The Computer Museum @ System Source

In those days, computers took up entire rooms, and users accessed them through teletype terminals—electric typewriters hooked up to either a serial cable or a modem and a phone line. ARPA was funding multiple research projects across the United States, but users of these different systems had no way to share their resources with each other. Wouldn’t it be great if there was a network that connected all these computers?

The dream is given form

Taylor’s predecessor, Joseph “J.C.R.” Licklider, had released a memo in 1963 that whimsically described an “Intergalactic Computer Network” that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn’t able to turn it into a real project. But Taylor knew that he could.

In a 1998 interview, Taylor explained: “In most government funding, there are committees that decide who gets what and who does what. In ARPA, that was not the way it worked. The person who was responsible for the office that was concerned with that particular technology—in my case, computer technology—was the person who made the decision about what to fund and what to do and what not to do. The decision to start the ARPANET was mine, with very little or no red tape.”

Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept.

“Is it going to be hard to do?” Herzfeld asked.

“Oh no. We already know how to do it,” Taylor replied.

“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”

Taylor wasn’t lying—at least, not completely. At the time, there were multiple people around the world thinking about computer networking. Paul Baran, working for RAND, published a paper in 1964 describing how a distributed military networking system could be made resilient even if some nodes were destroyed in a nuclear attack. Over in the UK, Donald Davies independently came up with a similar concept (minus the nukes) and invented a term for the way these types of networks would communicate. He called it “packet switching.”

On a regular phone network, after some circuit switching, the caller and the answerer would be connected via a dedicated wire. They had exclusive use of that wire until the call was completed. Computers, though, communicated in short bursts separated by long stretches of silence, so tying up a whole line for just two machines would waste most of its capacity. But how could many computers talk at the same time without their messages getting mixed up?

Packet switching was the answer. Messages were divided into multiple snippets. The order and destination were included with each message packet. The network could then route the packets in any way that made sense. At the destination, all the appropriate packets were put into the correct order and reassembled. It was like moving a house across the country: It was more efficient to send all the parts in separate trucks, each taking its own route to avoid congestion.

A simplified diagram of how packet switching works. Credit: Jeremy Reimer
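To make the concept concrete, here is a minimal, hypothetical Python sketch of the idea described above (not the ARPANET's actual packet format): a message is chopped into numbered snippets that each carry their destination, the "network" delivers them in scrambled order, and the receiver puts them back together.

```python
import random
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int        # position of this snippet in the original message
    total: int      # how many packets make up the whole message
    dest: str       # destination address carried by every packet
    payload: str    # the snippet of the message itself

def packetize(message: str, dest: str, size: int = 8) -> list[Packet]:
    """Split a message into fixed-size snippets tagged with order and destination."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(seq=i, total=len(chunks), dest=dest, payload=c)
            for i, c in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> str:
    """At the destination, sort whatever arrived back into the original order."""
    return "".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = packetize("LOGIN attempt from UCLA to SRI", dest="SRI")
random.shuffle(packets)        # packets may take different routes and arrive out of order
print(reassemble(packets))     # the destination still reconstructs the original message
```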

By the end of 1966, Taylor had hired a program director, Larry Roberts. Roberts sketched a diagram of a possible network on a napkin and met with his team to propose a design. One problem was that each computer on the network would need to use a big chunk of its resources to manage the packets. In a meeting, Wes Clark passed a note to Roberts saying, "You have the network inside-out." Clark's alternative plan was to place a smaller, dedicated computer at each site and connect it to that site's host. These dedicated machines would do all the hard work of creating, moving, and reassembling packets.

With the design complete, Roberts sent out a request for proposals for constructing the ARPANET. All they had to do now was pick the winning bid, and the project could begin.

BB&N and the IMPs

IBM, Control Data Corporation, and AT&T were among the first to respond to the request. They all turned it down. Their conclusion was the same: None of these giant companies believed the network could be built. IBM and CDC thought the dedicated computers would be too expensive, while AT&T flat-out said that packet switching wouldn't work on its phone network.

In late 1968, ARPA announced the winner of the bid: Bolt Beranek and Newman (BB&N). It seemed like an odd choice. BB&N had started as a consulting firm that calculated acoustics for theaters. But the need for calculations led to the creation of a computing division, and its first manager had been none other than J.C.R. Licklider. In fact, some BB&N employees had been working on a plan to build a network even before the ARPA bid was sent out. Robert Kahn led the team that drafted BB&N's proposal.

Their plan was to create a network of “Interface Message Processors,” or IMPs, out of Honeywell 516 computers. They were ruggedized versions of the DDP-516 16-bit minicomputer. Each had 24 kilobytes of core memory and no mass storage other than a paper tape reader, and each cost $80,000 (about $700,000 today). In comparison, an IBM 360 mainframe cost between $7 million and $12 million at the time.

An original IMP, the world’s first router. It was the size of a large refrigerator. Credit: Steve Jurvetson (CC BY 2.0)

The 516's rugged appearance appealed to BB&N, which didn't want a bunch of university students tampering with its IMPs. The computer came with no operating system, and it didn't really have enough memory for one anyway. The software to control the IMPs was written on bare metal using the 516's assembly language. One of the developers was Will Crowther, who went on to create the first computer adventure game.

One other hurdle remained before the IMPs could be put to use: The Honeywell design was missing certain components needed to handle input and output. BB&N employees were dismayed that the first 516, which they named IMP-0, didn’t have working versions of the hardware additions they had requested.

It fell to Ben Barker, a brilliant undergrad interning at BB&N, to manually fix the machine. Barker was the best choice for the job, even though he had a slight palsy in his hands. After several stressful 16-hour days of wrapping and unwrapping wires, all the changes were complete and working. IMP-0 was ready.

In the meantime, Steve Crocker at the University of California, Los Angeles, was working on a set of software specifications for the host computers. It wouldn’t matter if the IMPs were perfect at sending and receiving messages if the computers themselves didn’t know what to do with them. Because the host computers were part of important academic research, Crocker didn’t want to seem like he was a dictator telling people what to do with their machines. So he titled his draft a “Request for Comments,” or RFC.

This one act of politeness forever changed the nature of computing. Ever since, changes to the network's protocols have been proposed and documented as RFCs, and the culture of asking for comments pervades the tech industry even today.

RFC No. 1 proposed two types of host software. The first was the simplest possible interface, in which a computer pretended to be a dumb terminal. This was dubbed a “terminal emulator,” and if you’ve ever done any administration on a server, you’ve probably used one. The second was a more complex protocol that could be used to transfer large files. This became FTP, which is still used today.

A single IMP connected to one computer wasn’t much of a network. So it was very exciting in September 1969 when IMP-1 was delivered to BB&N and then shipped via air freight to UCLA. The first test of the ARPANET was done with simultaneous phone support. The plan was to type “LOGIN” to start a login sequence. This was the exchange:

“Did you get the L?”

“I got the L!”

“Did you get the O?”

“I got the O!”

“Did you get the G?”

“Oh no, the computer crashed!”

It was an inauspicious beginning. The computer on the other end was helpfully filling in the “GIN” part of “LOGIN,” but the terminal emulator wasn’t expecting three characters at once and locked up. It was the first time that autocomplete had ruined someone’s day. The bug was fixed, and the test completed successfully.

IMP-2, IMP-3, and IMP-4 were delivered to the Stanford Research Institute (where Doug Engelbart was keen to expand his vision of connecting people), UC Santa Barbara, and the University of Utah.

Now that the four-node test network was complete, the team at BB&N could work with the researchers at each node to put the ARPANET through its paces. They deliberately created the first ever denial of service attack in January 1970, flooding the network with packets until it screeched to a halt.

The original ARPANET, predecessor of the Internet. Circles are IMPs, and rectangles are computers. Credit: DARPA

Surprisingly, many of the administrators of the early ARPANET nodes weren't keen to join the network. They didn't like the idea of anyone else being able to use resources on "their" computers. Taylor reminded them that their hardware and software projects were mostly ARPA-funded, so they couldn't opt out.

The next month, Stephen Carr, Stephen Crocker, and Vint Cerf released RFC No. 33. It described a Network Control Protocol (NCP) that standardized how the hosts would communicate with each other. After this was adopted, the network was off and running.

J.C.R. Licklider, Bob Taylor, Larry Roberts, Steve Crocker, and Vint Cerf. Credit: US National Library of Medicine, WIRED, Computer Timeline, Steve Crocker, Vint Cerf

The ARPANET grew significantly over the next few years. Important events included the first ever email between two different computers, sent by Ray Tomlinson in July 1972. Another groundbreaking demonstration involved a PDP-10 at Harvard simulating, in real time, an aircraft landing on a carrier. The data was sent over the ARPANET to an MIT-based graphics terminal, and the wireframe graphical view was shipped back to a PDP-1 at Harvard and displayed on a screen. Although it was primitive and slow, it was technically the first gaming stream.

A big moment came in October 1972 at the International Conference on Computer Communication. This was the first time the network had been demonstrated to the public. Interest in the ARPANET was growing, and people were excited. A group of AT&T executives noticed a brief crash and laughed, confident that they were correct in thinking that packet switching would never work. Overall, however, the demonstration was a resounding success.

But the ARPANET was no longer the only network out there.

The two keystrokes on a Model 33 Teletype that changed history. Credit: Marcin Wichary (CC BY 2.0)

A network of networks

The rest of the world had not been standing still. In Hawaii, Norman Abramson and Franklin Kuo created ALOHAnet, which connected computers on the islands using radio. It was the first public demonstration of a wireless packet switching network. In the UK, Donald Davies’ team developed the National Physical Laboratory (NPL) network. It seemed like a good idea to start connecting these networks together, but they all used different protocols, packet formats, and transmission rates. In 1972, the heads of several national networking projects created an International Networking Working Group. Cerf was chosen to lead it.

The first attempt to bridge this gap was SATNET, also known as the Atlantic Packet Satellite Network. Using satellite links, it connected the US-based ARPANET with networks in the UK. Unfortunately, SATNET itself used its own set of protocols. In true tech fashion, an attempt to make a universal standard had created one more standard instead.

Robert Kahn asked Vint Cerf to try and fix these problems once and for all. They came up with a new plan called the Transmission Control Protocol, or TCP. The idea was to connect different networks through specialized computers, called “gateways,” that translated and forwarded packets. TCP was like an envelope for packets, making sure they got to the right destination on the correct network. Because some networks were not guaranteed to be reliable, when one computer successfully received a complete and undamaged message, it would send an acknowledgement (ACK) back to the sender. If the ACK wasn’t received in a certain amount of time, the message was retransmitted.
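The acknowledge-and-retransmit loop at the heart of that reliability scheme is simple enough to sketch. The toy Python below is an illustration only, with a random coin flip standing in for an unreliable link; real TCP adds sequence numbers, sliding windows, and adaptive timeouts.

```python
import random

def unreliable_link(segment: str) -> bool:
    """Stand-in for a lossy network: roughly 30 percent of transmissions vanish."""
    return random.random() > 0.3

def send_with_ack(segment: str, max_attempts: int = 5) -> bool:
    """Keep retransmitting until an acknowledgement (ACK) comes back or we give up."""
    for attempt in range(1, max_attempts + 1):
        if unreliable_link(segment):   # the receiver got it intact and sends an ACK
            print(f"attempt {attempt}: ACK received for {segment!r}")
            return True
        print(f"attempt {attempt}: no ACK before the timeout, retransmitting {segment!r}")
    return False

send_with_ack("segment 1 of 3")
```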

In December 1974, Cerf, Yogen Dalal, and Carl Sunshine wrote a complete specification for TCP. Two years later, Cerf and Kahn, along with a dozen others, demonstrated the first three-network system. The demo connected packet radio, the ARPANET, and SATNET, all using TCP. Afterward, Cerf, Jon Postel, and Danny Cohen suggested a small but important change: They should take out all the routing information and put it into a new protocol, called the Internet Protocol (IP). Everything else, like breaking up and reassembling messages, detecting errors, and retransmitting, would stay in TCP. Thus, in 1978, the protocol officially became known as TCP/IP, the name it has carried ever since.
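One way to picture that split is as two nested headers with different jobs. The simplified sketch below is purely illustrative, using a hypothetical subset of the real IPv4 and TCP header fields: IP carries only what a gateway needs to move the packet between networks, while TCP keeps everything needed for end-to-end reliability.

```python
from dataclasses import dataclass

@dataclass
class IPHeader:
    # Routing only: the fields a gateway needs to forward the packet.
    src_addr: str
    dst_addr: str
    ttl: int
    protocol: str          # which transport protocol the payload uses, e.g. "TCP"

@dataclass
class TCPSegment:
    # End-to-end concerns: ordering, error detection, retransmission state.
    src_port: int
    dst_port: int
    seq: int
    ack: int
    checksum: int
    data: bytes

@dataclass
class IPPacket:
    header: IPHeader       # gateways read this...
    payload: TCPSegment    # ...and never need to look inside this

packet = IPPacket(
    header=IPHeader(src_addr="10.0.0.1", dst_addr="10.3.0.2", ttl=64, protocol="TCP"),
    payload=TCPSegment(src_port=1024, dst_port=23, seq=0, ack=0, checksum=0, data=b"LOGIN"),
)
```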

A map of the Internet in 1977. White dots are IMPs, and rectangles are host computers. Jagged lines connect to other networks. Credit: The Computer History Museum

If the story of creating the Internet was a movie, the release of TCP/IP would have been the triumphant conclusion. But things weren’t so simple. The world was changing, and the path ahead was murky at best.

At the time, joining the ARPANET required leasing high-speed phone lines for $100,000 per year. This limited it to large universities, research companies, and defense contractors. The situation led the National Science Foundation (NSF) to propose a new network that would be cheaper to operate. Other educational networks arose at around the same time. While it made sense to connect these networks to the growing Internet, there was no guarantee that this would continue. And there were other, larger forces at work.

By the end of the 1970s, computers had improved significantly. The invention of the microprocessor set the stage for smaller, cheaper computers that were just beginning to enter people’s homes. Bulky teletypes were being replaced with sleek, TV-like terminals. The first commercial online service, CompuServe, was released to the public in 1979. For just $5 per hour, you could connect to a private network, get weather and financial reports, and trade gossip with other users. At first, these systems were completely separate from the Internet. But they grew quickly. By 1987, CompuServe had 380,000 subscribers.

A magazine ad for CompuServe from 1980. Credit: marbleriver

Meanwhile, the adoption of TCP/IP was not guaranteed. At the beginning of the 1980s, the Open Systems Interconnection (OSI) group at the International Organization for Standardization (ISO) decided that what the world needed was more acronyms—and also a new, global, standardized networking model.

The OSI model was first drafted in 1980, but it wasn’t published until 1984. Nevertheless, many European governments, and even the US Department of Defense, planned to transition from TCP/IP to OSI. It seemed like this new standard was inevitable.

The seven-layer OSI model. If you ever thought there were too many layers, you’re not alone. Credit: BlueCat Networks

While the world waited for OSI, the Internet continued to grow and evolve. In 1981, the fourth version of the IP protocol, IPv4, was released. On January 1, 1983, the ARPANET itself fully transitioned to using TCP/IP. This date is sometimes referred to as the “birth of the Internet,” although from a user’s perspective, the network still functioned the same way it had for years.

A map of the Internet from 1982. Ovals are networks, and rectangles are gateways. Hosts are not shown, but number in the hundreds. Note the appearance of modern-looking IPv4 addresses. Credit: Jon Postel

In 1986, the NSFNET came online, running under TCP/IP and connected to the rest of the Internet. It also used a new standard, the Domain Name System (DNS). This system, still in use today, maps easy-to-remember names to a machine's numerical IP address. Computer names were assigned "top-level" domains based on their purpose, so you could connect to "frodo.edu" at an educational institution, or "frodo.gov" at a governmental one.
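The user-facing idea has barely changed since then. Here's a minimal sketch using Python's standard library (assuming network access; the address returned will vary, and the "frodo" names above are just illustrations):

```python
import socket

# Ask DNS to turn a human-readable name into an IP address.
# "example.com" is a real, reserved domain used for documentation.
print(socket.gethostbyname("example.com"))   # e.g. "93.184.216.34"

# The "top-level" domain is simply the last label of the name.
print("frodo.edu".rsplit(".", 1)[-1])        # prints "edu"
```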

The NSFNET grew rapidly, dwarfing the ARPANET in size. In 1989, the original ARPANET was decommissioned. The IMPs, long since obsolete, were retired. However, all the ARPANET hosts were successfully migrated to other Internet networks. Like a Ship of Theseus, the ARPANET lived on even after every component of it was replaced.

The exponential growth of the ARPANET/Internet during its first two decades. Credit: Jeremy Reimer

Still, the experts and pundits predicted that all of these systems would eventually have to transfer over to the OSI model. The people who had built the Internet were not impressed. In 1987, writing in RFC No. 1000, Crocker said, "If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required."

The Internet pioneers felt they had spent many years refining and improving a working system. But now, OSI had arrived with a bunch of complicated standards and expected everyone to adopt their new design. Vint Cerf had a more pragmatic outlook. In 1982, he left ARPA for a new job at MCI, where he helped build the first commercial email system (MCI Mail) that was connected to the Internet. While at MCI, he contacted researchers at IBM, Digital, and Hewlett-Packard and convinced them to experiment with TCP/IP. Leadership at these companies still officially supported OSI, however.

The debate raged on through the latter half of the 1980s and into the early 1990s. Tired of the endless arguments, Cerf contacted the head of the National Institute of Standards and Technology (NIST) and asked him to write a blue ribbon report comparing OSI and TCP/IP. Meanwhile, as it planned a successor to IPv4, the Internet Advisory Board (IAB) was looking at the OSI Connectionless Network Protocol and its 128-bit addressing for inspiration. In an interview with Ars, Vint Cerf explained what happened next.

“It was deliberately misunderstood by firebrands in the IETF [Internet Engineering Task Force] that we are traitors by adopting OSI,” he said. “They raised a gigantic hoo-hah. The IAB was deposed, and the authority in the system flipped. IAB used to be the decision makers, but the fight flips it, and IETF becomes the standard maker.”

To calm everybody down, Cerf performed a striptease at a meeting of the IETF in 1992. He revealed a T-shirt that said “IP ON EVERYTHING.” At the same meeting, David Clark summarized the feelings of the IETF by saying, “We reject kings, presidents, and voting. We believe in rough consensus and running code.”

Vint Cerf strips down to the bare essentials. Credit: Boardwatch and Light Reading

The fate of the Internet

The split design of TCP/IP, which was a small technical choice at the time, had long-lasting political implications. In 2001, David Clark and Marjory Blumenthal wrote a paper that looked back on the Protocol War. They noted that the Internet’s complex functions were performed at the endpoints, while the network itself ran only the IP part and was concerned simply with moving data from place to place. These “end-to-end principles” formed the basis of “… the ‘Internet Philosophy’: freedom of action, user empowerment, end-user responsibility for actions undertaken, and lack of controls ‘in’ the Net that limit or regulate what users can do,” they said.

In other words, the battle between TCP/IP and OSI wasn’t just about two competing sets of acronyms. On the one hand, you had a small group of computer scientists who had spent many years building a relatively open network and wanted to see it continue under their own benevolent guidance. On the other hand, you had a huge collective of powerful organizations that believed they should be in charge of the future of the Internet—and maybe the behavior of everyone on it.

But this impossible argument and the ultimate fate of the Internet were about to be decided, and not by governments, committees, or even the IETF. The world was changed forever by the actions of one man. He was a mild-mannered computer scientist, born in England and working for a physics research institute in Switzerland.

That’s the story covered in the next article in our series.


I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.
