A history of the Internet, part 2: The high-tech gold rush begins


The Web Era arrives, the browser wars flare, and a bubble bursts.

Welcome to the second article in our three-part series on the history of the Internet. If you haven’t already, read part one here.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. Later, it evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol.

By the late 1980s, investments from the National Science Foundation (NSF) had established an “Internet backbone” supporting hundreds of thousands of users worldwide. These users were mostly professors, researchers, and graduate students.

In the meantime, commercial online services like CompuServe were growing rapidly. These systems connected personal computer users, using dial-up modems, to a mainframe running proprietary software. Once online, people could read news articles and message other users. In 1989, CompuServe added the ability to send email to anyone on the Internet.

In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Taylor Coleridge.

A decade later, in his book “Computer Lib/Dream Machines,” he described Xanadu this way: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.

As the Internet grew, it became more and more difficult to find things on it. There were lots of cool documents like the Hitchhiker’s Guide To The Internet, but to read them, you first had to know where they were.

The community of helpful programmers on the Internet leapt to the challenge. Alan Emtage at McGill University in Montreal wrote a tool called Archie. It searched a list of public file transfer protocol (FTP) servers. You still had to know the name of the file you were looking for, but Archie would tell you which server hosted it so you could download it.

A more user-friendly tool was Gopher, written by a team headed by Mark McCahill at the University of Minnesota. It used a text-based menu system so that users didn’t have to remember file names or locations. Gopher servers could display a customized collection of links inside nested menus, and they integrated with other services like Archie and Veronica to help users search for more resources.

Gopher is a text-based Internet search and retrieval system. It’s still running in 2025! Credit: Jeremy Reimer

A Gopher server could provide many of the things we take for granted today: search engines, personal pages that could contain links, and downloadable files. But this wasn’t enough for a British computer scientist who was working at CERN, an intergovernmental institute that operated the world’s largest particle physics lab.

The World Wide Web

Hypertext had come a long way since Ted Nelson had coined the word in 1965. Bill Atkinson, a member of the original Macintosh development team, released HyperCard in 1987. It used the Mac’s graphical interface to let anyone develop “stacks,” collections of text, graphics, and sounds that could be connected together with clickable links. There was no networking, but stacks could be shared with other users by sending the files on a floppy disk.

The home screen of HyperCard 1.0 for Macintosh. Credit: Jeremy Reimer

Hypertext was so big that conferences were held just to discuss it in 1987 and 1988. Even Ted Nelson had finally found a sponsor for his personal dream: Autodesk founder John Walker had agreed to spin up a subsidiary to create a commercial version of Xanadu.

It was in this environment that CERN fellow Tim Berners-Lee drew up his own proposal in March 1989 for a new hypertext environment. His goal was to make it easier for researchers at CERN to collaborate and share information about new projects.

The proposal (which he called “Mesh”) had several objectives. It would provide a system for connecting information about people, projects, documents, and hardware being developed at CERN. It would be decentralized and distributed over many computers. Not all the computers at CERN were the same—there were Digital Equipment minis running VMS, some Macintoshes, and an increasing number of Unix workstations. Each of them should be able to view the information in the same way.

As Berners-Lee described it, “There are few products which take Ted Nelson’s idea of a wide ‘docuverse’ literally by allowing links between nodes in different databases. In order to do this, some standardization would be necessary.”

The original proposal document for the web, written in Microsoft Word for Macintosh 4.0, downloaded from Tim Berners-Lee’s website. Credit: Jeremy Reimer

The document ended by describing the project as “practical” and estimating that it might take two people six to 12 months to complete. Berners-Lee’s manager called it “vague, but exciting.” Robert Cailliau, who had independently proposed a hypertext system for CERN, joined Berners-Lee to start designing the project.

The computer Berners-Lee used was a NeXT cube, from the company Steve Jobs started after he was kicked out of Apple. NeXT workstations were expensive, but they came with a software development environment that was years ahead of its time. If you could afford one, it was like a coding accelerator. John Carmack would later write DOOM on a NeXT.

The NeXT workstation that Tim Berners-Lee used to create the World Wide Web. Please do not power down the World Wide Web. Credit: Coolcaesar (CC BY-SA 3.0)

Berners-Lee called his application “WorldWideWeb.” The software consisted of a server, which delivered pages of text over a new protocol called the “Hypertext Transfer Protocol,” or HTTP, and a browser that rendered the text. The browser translated markup code like “h1” to indicate a larger header font or “a” to indicate a link. There was also a graphical webpage editor, but it didn’t work very well and was abandoned.
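
The underlying idea was remarkably simple: a server sends marked-up text over HTTP, and a browser renders it. Here is a rough modern illustration—a minimal sketch using Python’s standard library, not the original CERN software, which was written in Objective-C on the NeXT—that serves a single page containing an “h1” heading and an “a” link that any browser can display:

    # A minimal modern sketch (Python's standard library, not the original
    # CERN software): serve one page of marked-up hypertext over HTTP.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"""<html><body>
    <h1>World Wide Web</h1>
    <p>A page of hypertext. Follow <a href="http://info.cern.ch/">this link</a>
    to a document on another server.</p>
    </body></html>"""

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Every request gets the same page back; the browser does the rendering.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        # Point any browser at http://localhost:8000/ to see the markup rendered.
        HTTPServer(("", 8000), Handler).serve_forever()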

The very first website was published, running on the development NeXT cube, on December 20, 1990. Anyone who had a NeXT machine and access to the Internet could view the site in all its glory.

The original WorldWideWeb browser running on NeXTstep 3, browsing the world’s first webpage. Credit: Jeremy Reimer

Because NeXT only sold 50,000 computers in total, that intersection did not represent a lot of people. Eight months later, Berners-Lee posted a reply to a question about interesting projects on the alt.hypertext Usenet newsgroup. He described the World Wide Web project and included links to all the software and documentation.

That one post changed the world forever.

Mosaic

On December 9, 1991, President George H.W. Bush signed into law the High Performance Computing Act, also known as the Gore Bill. The bill paid for an upgrade of the NSFNET backbone, as well as a separate funding initiative for the National Center for Supercomputing Applications (NCSA).

NCSA, based out of the University of Illinois, became a dream location for computing research. “NCSA was heaven,” recalled Alex Totic, who was a student there. “They had all the toys, from Thinking Machines to Crays to Macs to beautiful networks. It was awesome.” As is often the case in academia, the professors came up with research ideas but assigned most of the actual work to their grad students.

One of those students was Marc Andreessen, who joined NCSA as a part-time programmer for $6.85 an hour. Andreessen was fascinated by the World Wide Web, especially browsers. A new browser for Unix computers, ViolaWWW, was making the rounds at NCSA. No longer confined to the NeXT workstation, the web had caught the attention of the Unix community. But that community was still too small for Andreessen.

“To use the Net, you had to understand Unix,” he said in an interview with Forbes. “And the current users had no interest in making it easier. In fact, there was a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out.”

Andreessen enlisted the help of his colleague, programmer Eric Bina, and started developing a new web browser in December 1992. In a little over a month, they released version 0.5 of “NCSA X Mosaic”—so called because it was designed to work with Unix’s X Window System. Ports for the Macintosh and Windows followed shortly thereafter.

Being available on the most popular graphical computers changed the trajectory of the web. In just 18 months, millions of copies of Mosaic were downloaded, and the rate was accelerating. The riffraff was here to stay.

Netscape

The instant popularity of Mosaic caused the management at NCSA to take a deeper interest in the project. Jon Mittelhauser, who co-wrote the Windows version, recalled that the small team “suddenly found ourselves in meetings with forty people planning our next features, as opposed to the five of us making plans at 2 am over pizzas and Cokes.”

Andreessen was told to step aside and let more experienced managers take over. Instead, he left NCSA and moved to California, looking for his next opportunity. “I thought I had missed the whole thing,” Andreessen said. “The overwhelming mood in the Valley when I arrived was that the PC was done, and by the way, the Valley was probably done because there was nothing else to do.”

But his reputation had preceded him. Jim Clark, the founder of Silicon Graphics, was also looking to start something new. A friend had shown him a demo of Mosaic, and Clark reached out to meet with Andreessen.

At a meeting, Andreessen pitched the idea of building a “Mosaic killer.” He showed Clark a graph that showed web users doubling every five months. Excited by the possibilities, the two men founded Mosaic Communications Corporation on April 4, 1994. Andreessen quickly recruited programmers from his former team, and they got to work. They codenamed their new browser “Mozilla” since it was going to be a monster that would devour Mosaic. Beta versions were titled “Mosaic Netscape,” but the University of Illinois threatened to sue the new company. To avoid litigation, the name of the company and browser were changed to Netscape, and the programmers audited their code to ensure none of it had been copied from NCSA.

Netscape became the model for all Internet startups to follow. Programmers were given unlimited free sodas and encouraged to basically never leave the office. “Netscape Time” accelerated software development schedules, and because updates could be delivered over the Internet, old principles of quality assurance went out the window. And the business model? It was simply to “get big fast,” and profits could be figured out later.

Work proceeded quickly, and the 1.0 version of Netscape Navigator and the Netsite web server were released on December 15, 1994, for Windows, Macintosh, and Unix systems running the X Window System. The browser was priced at $39 for commercial users, but there was no charge for “academic and non-profit use, as well as for free evaluation purposes.”

Version 0.9 was called “Mosaic Netscape,” and the logo and company were still Mosaic. Credit: Jeremy Reimer

Netscape quickly became the standard. Within six months, it captured over 70 percent of the market share for web browsers. On August 9, 1995, only 16 months after the founding of the company, Netscape filed for an Initial Public Offering. A last-minute decision doubled the offering price to $28 per share, and on the first day of trading, the stock soared to $75 and closed at $58.25. The Web Era had officially arrived.

The web battles proprietary solutions

The excitement over a new way to transmit text and images to the public over phone lines wasn’t confined to the World Wide Web. Commercial online systems like CompuServe were also evolving to meet the graphical age. These companies released attractive new front-ends for their services that ran on DOS, Windows, and Macintosh computers. There were also newer services that had been graphical from the start, like Prodigy, a joint venture between IBM and Sears, and an upstart that had sprung from the ashes of a Commodore 64 service called Quantum Link. This was America Online, or AOL.

Even Microsoft was getting into the act. Bill Gates believed that the “Information Superhighway” was the future of computing, and he wanted to make sure that all roads went through his company’s toll booth. The highly anticipated Windows 95 was scheduled to ship with a bundled dial-up online service called the Microsoft Network, or MSN.

At first, it wasn’t clear which of these online services would emerge as the winner. But people assumed that at least one of them would beat the complicated, nerdy Internet. CompuServe was the oldest, but AOL was nimbler and found success by sending out millions of free “starter” disks (and later, CDs) to potential customers. Microsoft was sure that bundling MSN with the upcoming Windows 95 would ensure victory.

Most of these services decided to hedge their bets by adding a sort of “side access” to the World Wide Web. After all, if they didn’t, their competitors would. At the same time, smaller companies (many of them former bulletin board services) started becoming Internet service providers. These smaller “ISPs” could charge less money than the big services because they didn’t have to create any content themselves. Thousands of new websites were appearing on the Internet every day, much faster than new sections could be added to AOL or CompuServe.

The tipping point happened very quickly. Before Windows 95 had even shipped, Bill Gates wrote his famous “Internet Tidal Wave” memo, where he assigned the Internet the “highest level of importance.” MSN was quickly changed to become more of a standard ISP and moved all of its content to the web. Microsoft rushed to release its own web browser, Internet Explorer, and bundled it with the Windows 95 Plus! pack.

The hype and momentum were entirely with the web now. It was the most exciting, most transformative technology of its time. The decade-long battle to control the Internet by forcing a shift to a new OSI standards model was forgotten. The web was all anyone cared about, and the web ran on TCP/IP.

The browser wars

Netscape had never expected to make a lot of money from its browser, as it was assumed that most people would continue to download new “evaluation” versions for free. Executives were pleasantly surprised when businesses started sending Netscape huge checks. The company went from $17 million in revenue in 1995 to $346 million the following year, and the press started calling Marc Andreessen “the new Bill Gates.”

The old Bill Gates wasn’t having any of that. Following his 1995 memo, Microsoft worked hard to improve Internet Explorer and made it available for free, including to business users. Netscape tried to fight back. It added groundbreaking new features like JavaScript, which was inspired by LISP but with a syntax similar to Java, the hot new programming language from Sun Microsystems. But it was hard to compete with free, and Netscape’s market share started to fall. By 1996, both browsers had reached version 3.0 and were roughly equal in terms of features. The battle continued, but when the free Apache web server began taking over the server market, Netscape’s other source of revenue dried up as well. The writing was on the wall.

There was no better way to declare your allegiance to a web browser in 1996 than adding “Best Viewed In” above one of these icons. Credit: Jeremy Reimer

The dot-com boom

In 1989, the NSF lifted the restrictions on providing commercial access to the Internet, and by 1991, it had removed all barriers to commercial trade on the network. With the sudden ascent of the web, thanks to Mosaic, Netscape, and Internet Explorer, new companies jumped into this high-tech gold rush. But at first, it wasn’t clear what the best business strategy was. Users expected everything on the web to be free, so how could you make money?

Many early web companies started as hobby projects. In 1994, Jerry Yang and David Filo were electrical engineering PhD students at Stanford University. After Mosaic took off, they began collecting and trading links to new websites. Thus, “Jerry’s Guide to the World Wide Web” was born, running on Yang’s Sun workstation. Renamed Yahoo! (Yet Another Hierarchical Officious Oracle), the site exploded in popularity. Netscape put multiple links to Yahoo on its main navigation bar, which further accelerated growth. “We weren’t really sure if you could make a business out of it, though,” Yang told Fortune. Nevertheless, venture capital companies came calling. Sequoia, which had made millions investing in Apple, put in $1 million for 25 percent of Yahoo.

Yahoo.com as it would have appeared in 1995. Credit: Jeremy Reimer

Another hobby site, AuctionWeb, was started in 1995 by Pierre Omidyar. Running on his own home server using the regular $30 per month service from his ISP, the site let people buy and sell items of almost any kind. When traffic started growing, his ISP told him it was increasing his Internet fees to $250 per month, as befitting a commercial enterprise. Omidyar decided he would try to make it a real business, even though he didn’t have a merchant account for credit cards or even a way to enforce his new royalty charges (5 percent of the sale price for items under $25, and 2.5 percent above that). That didn’t matter, as the checks started rolling in. He found a business partner, changed the name to eBay, and the rest was history.

AuctionWeb (later eBay) as it would have appeared in 1995. Credit: Jeremy Reimer

In 1993, Jeff Bezos, a senior vice president at a hedge fund company, was tasked with investigating business opportunities on the Internet. He decided to create a proof of concept for what he described as an “everything store.” He chose books as an ideal commodity to sell online, since a book in one store was identical to one in another, and a website could offer access to obscure titles that might not get stocked in physical bookstores.

He left the hedge fund company, gathered investors and software development talent, and moved to Seattle. There, he started Amazon. At first, the site wasn’t much more than an online version of an existing bookseller catalog called Books In Print. But over time, Bezos added inventory data from the two major book distributors, Ingram and Baker & Taylor. The promise of access to every book in the world was exciting for people, and the company grew quickly.

Amazon.com as it would have appeared in 1995. Credit: Jeremy Reimer

The explosive growth of these startups fueled a self-perpetuating cycle. As publications like Wired experimented with online versions of their magazines, they invented and sold banner ads to fund their websites. The best customers for these ads were other web startups. These companies wanted more traffic, and they knew ads on sites like Yahoo were the best way to get it. Yahoo salespeople could then turn around and point to their exponential ad sales curves, which caused Yahoo stock to rise. This encouraged people to fund more web startups, which would all need to advertise on Yahoo. These new startups also needed to buy servers from companies like Sun Microsystems, causing those stocks to rise as well.

The crash

In the latter half of the 1990s, it looked like everything was going great. The economy was booming, thanks in part to the rise of the World Wide Web and the huge boost it gave to computer hardware and software companies. The NASDAQ index of tech-focused stocks painted a clear picture of the boom.

The NASDAQ composite index in the 1990s. Credit: Jeremy Reimer

Federal Reserve chairman Alan Greenspan called this phenomenon “irrational exuberance” but didn’t seem to be in a hurry to stop it. The fact that most new web startups didn’t have a realistic business model didn’t seem to bother investors. Sure, Webvan might have been paying more to deliver groceries than it earned from customers, but look at that growth curve!

The exuberance couldn’t last forever. The NASDAQ peaked at 5,048.62 on March 10, 2000, and then started to go down. In one month, it lost 34 percent of its value, and by August 2001, it had fallen below 2,000. Web companies laid off employees or went out of business completely. The party was over.

Andreessen said that the tech crash scarred him. “The overwhelming message to our generation in the early nineties was ‘You’re dirty, you’re all about grunge—you guys are fucking losers!’ Then the tech boom hit, and it was ‘We are going to do amazing things!’ And then the roof caved in, and the wisdom was that the Internet was a mirage. I 100 percent believed that because the rejection was so personal—both what everybody thought of me and what I thought of myself.”

But while some companies quietly celebrated the end of the whole Internet thing, others would rise from the ashes of the dot-com collapse. That’s the subject of our third and final article.

I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.

Cambridge mapping project solves a medieval murder


“A tale of shakedowns, sex, and vengeance that expose[s] tensions between the church and England’s elite.”

Location of the murder of John Forde, taken from the Medieval Murder Maps. Credit: Medieval Murder Maps. University of Cambridge: Institute of Criminology

In 2019, we told you about a new interactive digital “murder map” of London compiled by University of Cambridge criminologist Manuel Eisner. Drawing on data catalogued in the city coroners’ rolls, the map showed the approximate location of 142 homicide cases in late medieval London. The Medieval Murder Maps project has since expanded to include maps of York and Oxford homicides, as well as podcast episodes focusing on individual cases.

It’s easy to lose oneself down the rabbit hole of medieval murder for hours, filtering the killings by year, choice of weapon, and location. Think of it as a kind of 14th-century version of Clue: It was the noblewoman’s hired assassins armed with daggers in the streets of Cheapside near St. Paul’s Cathedral. And that’s just the juiciest of the various cases described in a new paper published in the journal Criminal Law Forum.

The noblewoman was Ela Fitzpayne, wife of a knight named Sir Robert Fitzpayne, lord of Stogursey. The victim was a priest and her erstwhile lover, John Forde, who was stabbed to death in the streets of Cheapside on May 3, 1337. “We are looking at a murder commissioned by a leading figure of the English aristocracy,” said University of Cambridge criminologist Manuel Eisner, who heads the Medieval Murder Maps project. “It is planned and cold-blooded, with a family member and close associates carrying it out, all of which suggests a revenge motive.”

Members of the mapping project geocoded all the cases after determining approximate locations for the crime scenes. Written in Latin, the coroners’ rolls are records of sudden or suspicious deaths as investigated by a jury of local men, called together by the coroner to establish facts and reach a verdict. Those records contain such relevant information as where the body was found and by whom; the nature of the wounds; the jury’s verdict on cause of death; the weapon used and how much it was worth; the time, location, and witness accounts; whether the perpetrator was arrested, escaped, or sought sanctuary; and any legal measures taken.

A brazen killing

The murder of Forde was one of several premeditated revenge killings recorded in the area of Westcheap. Forde was walking on the street when another priest, Hasculph Neville, caught up to him, ostensibly for a casual chat, just after Vespers but before sunset. As they approached Foster Lane, Neville’s four co-conspirators attacked: Ela Fitzpayne’s brother, Hugh Lovell; two of her former servants, Hugh of Colne and John Strong; and a man called John of Tindale. One of them cut Forde’s throat with a 12-inch dagger, while two others stabbed him in the stomach with long fighting knives.

At the inquest, the jury identified the assassins, but that didn’t result in justice. “Despite naming the killers and clear knowledge of the instigator, when it comes to pursuing the perpetrators, the jury turn a blind eye,” said Eisner. “A household of the highest nobility, and apparently no one knows where they are to bring them to trial. They claim Ela’s brother has no belongings to confiscate. All implausible. This was typical of the class-based justice of the day.”

Colne, the former servant, was eventually charged and imprisoned for the crime some five years later in 1342, but the other perpetrators essentially got away with it.

Eisner et al. uncovered additional historical records that shed more light on the complicated history and ensuing feud between the Fitzpaynes and Forde. One was an indictment in the Calendar of Patent Rolls of Edward III, detailing how Ela and her husband, Forde, and several other accomplices raided a Benedictine priory in 1321. Among other crimes, the intruders “broke [the prior’s] houses, chests and gates, took away a horse, a colt and a boar… felled his trees, dug in his quarry, and carried away the stone and trees.” The gang also stole 18 oxen, 30 pigs, and about 200 sheep and lambs.

There were also letters that the Archbishop of Canterbury wrote to the Bishop of Winchester. Translations of the letters are published for the first time on the project’s website. The archbishop called out Ela by name for her many sins, including adultery “with knights and others, single and married, and even with clerics and holy orders,” and devised a punishment. This included not wearing any gold, pearls, or precious stones and giving money to the poor and to monasteries, plus a dash of public humiliation. Ela was ordered to perform a “walk of shame”—a tamer version than Cersei’s walk in Game of Thrones—every fall for seven years, carrying a four-pound wax candle to the altar of Salisbury Cathedral.

The London Archives. Inquest number 15 on 1336-7 City of London Coroner’s Rolls. Credit: The London Archives

Ela outright refused to do any of that, instead flaunting “her usual insolence.” Naturally, the archbishop had no choice but to excommunicate her. But Eisner speculates that this may have festered within Ela over the ensuing years, thereby sparking her desire for vengeance on Forde—who may have confessed to his affair with Ela to avoid being prosecuted for the 1321 raid. The archbishop died in 1333, four years before Forde’s murder, so Ela was clearly a formidable person with the patience and discipline to serve her revenge dish cold. Her marriage to Robert (her second husband) endured despite her seemingly constant infidelity, and she inherited his property when he died in 1354.

“Attempts to publicly humiliate Ela Fitzpayne may have been part of a political game, as the church used morality to stamp its authority on the nobility, with John Forde caught between masters,” said Eisner. “Taken together, these records suggest a tale of shakedowns, sex, and vengeance that expose tensions between the church and England’s elites, culminating in a mafia-style assassination of a fallen man of god by a gang of medieval hitmen.”

I, for one, am here for the Netflix true crime documentary on Ela Fitzpayne, “a woman in 14th century England who raided priories, openly defied the Archbishop of Canterbury, and planned the assassination of a priest,” per Eisner.

The role of public spaces

The ultimate objective of the Medieval Murder Maps project is to learn more about how public spaces shaped urban violence historically, the authors said. There were some interesting initial revelations back in 2019. For instance, the murders usually occurred in public streets or squares, and Eisner identified a couple of “hot spots” with higher concentrations than other parts of London. One was that particular stretch of Cheapside running from St Mary-le-Bow church to St. Paul’s Cathedral, where John Forde met his grisly end. The other was a triangular area spanning Gracechurch, Lombard, and Cornhill, radiating out from Leadenhall Market.

The perpetrators were mostly men (in only four cases were women the only suspects). As for weapons, knives and swords of varying types were the ones most frequently used, accounting for 68 percent of all the murders. The greatest risk of violent death in London was on weekends (especially Sundays), between early evening and the first few hours after curfew.

Eisner et al. have now extended their spatial analysis to include homicides committed in York and Oxford in the 14th century, with similar conclusions. Murders most often took place in markets, squares, and thoroughfares—all key nodes of medieval urban life—in the evenings or on weekends. Oxford had significantly higher murder rates than York or London and also more organized group violence, “suggestive of high levels of social disorganization and impunity.” London, meanwhile, showed distinct clusters of homicides, “which reflect differences in economic and social functions,” the authors wrote. “In all three cities, some homicides were committed in spaces of high visibility and symbolic significance.”

Criminal Law Forum, 2025. DOI: 10.1007/s10609-025-09512-7  (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Live demos test effectiveness of Revolutionary War weapons


not just men with muskets

Pitting the Brown Bess against the long rifle, testing the first military submarine, and more.

The colonial victory against the British in the American Revolutionary War was far from a predetermined outcome. In addition to good strategy and the timely appearance of key allies like the French, Continental soldiers relied on several key technological innovations in weaponry. But just how accurate is an 18th-century musket when it comes to hitting a target? Did the rifle really determine the outcome of the war? And just how much damage did cannon inflict? A team of military weapons experts and re-enactors set about testing some of those questions in a new NOVA documentary, Revolutionary War Weapons.

The documentary examines the firing range and accuracy of Brown Bess muskets and long rifles used by both the British and the Continental Army during the Battles of Lexington and Concord; the effectiveness of Native American tomahawks for close combat (no, they were usually not thrown as depicted in so many popular films, but there are modern throwing competitions today); and the effectiveness of cannons against the gabions and other defenses employed to protect the British fortress during the pivotal Siege of Yorktown. There is even a fascinating segment on the first military submarine, dubbed “the Turtle,” created by American inventor David Bushnell.

To capture all the high-speed ballistics action, director Stuart Powell relied upon a range of high-speed cameras called the Phantom Range. “It is like a supercomputer,” Powell told Ars. “It is a camera, but it doesn’t feel like a camera. You need to be really well-coordinated on the day when you’re using it because it bursts for, like, 10 seconds. It doesn’t record constantly because it’s taking so much data. Depending on what the frame rate is, you only get a certain amount of time. So you’re trying to coordinate that with someone trying to fire a 250-year-old piece of technology. If the gun doesn’t go off, if something goes wrong on set, you’ll miss it. Then it takes five minutes to reboot and get ready for the new shot. So a lot of the shoot revolves around the camera; that’s not normally the case.”

Constraints to keep the run time short meant that not every experiment the crew filmed ended up in the final documentary, according to Powell. For instance, there was one experiment in a hypoxia chamber for the segment on the Turtle, meant to see how long a person could function once the sub had descended, limiting the oxygen supply. “We felt there was slightly too much on the Turtle,” said Powell. “It took up a third of the whole film.” Also cut, for similar reasons, were power demonstrations for the musket, using boards instead of ballistic gel. But these cuts were anomalies in the tightly planned shooting schedule; most of the footage found its way onscreen.

The task of setting up all those field experiments fell to experts like military historian and weapons expert Joel Bohy, who is a frequent appraiser for Antiques Roadshow. We caught up with Bohy to learn more.

Redcoat re-enactors play out the Battle of Lexington. Credit: GBH/NOVA

Ars Technica: Obviously you can’t work with the original weapons because they’re priceless. How did you go about making replicas as close as possible to the originals?

Joel Bohy: Prior to our live fire studies, I started to collect the best contemporary reproductions of all of the different arms that were used. Over the years, I’ve had these custom-built, and now I have about 14 of them, so that we can cover pretty much every different type of arm used in the Revolution. I have my pick when we want to go out to the range and shoot at ballistics gelatin. We’ve published some great papers. The latest one was in conjunction with a bullet strike study where we went through and used modern forensic techniques to not only locate where each shooter was, what caliber the gun was, using ballistics rods and lasers, but we also had 18th-century house sections built and shot at the sections to replicate that damage. It was a validation study, and those firearms came in very handy.

Ars Technica: What else can we learn from these kinds of experiments?

Joel Bohy: One of the things that’s great about the archeology end of it is when we’re finding fired ammunition. I mostly volunteer with archaeologists on the Revolutionary War. One of my colleagues has worked on the Little Bighorn battlefield doing firing pin impressions, which leave a fingerprint, so he could track troopers and Native Americans across the battlefields. With [the Revolutionary War], it’s harder to do because we’re using smooth-bore guns that don’t necessarily leave a signature. But what they do leave is a caliber, and they also leave a location. We GIS all this stuff and map it, and it’s told us things about the battles that we never knew before. We just did one last August that hasn’t been released yet that changes where people thought a battle took place.

We like to combine that with our live fire studies. So when we [conduct the latter], we take a shot, then we metal detect each shot, bag it, tag it. We record all the data that we see on our musket balls that we fired so that when we’re on an archeology project, we can correlate that with what we see in the ground. We can see if it hits a tree, if it hits rocks, how close was a soldier when they fired—all based upon the deformation of the musket ball.

Ars Technica: What is the experience of shooting a replica of a musket compared to, say, a modern rifle?

Joel Bohy: It’s a lot different. When you’re firing a modern rifle, you pull the trigger and it’s very quick—a matter of milliseconds and the bullet’s downrange. With the musket, it’s similar, but it’s slower, and you can anticipate the shot. By the time the cock goes down, the flint strikes the hammer, it ignites the powder in the pan, which goes through the vent and sets off the charge—there’s a lot more time involved in that. So you can anticipate and flinch. You may not necessarily get the best shot as you would on a more modern rifle. There’s still a lot of kick, and there’s a lot more smoke because of the black powder that’s being used. With modern smokeless powder, you have very little smoke compared to the muskets.

Ars Technica: It’s often said that throughout the history of warfare, whoever has the superior weapons wins. This series presents a more nuanced picture of how such conflicts play out.

John Hargreaves making David Bushnell’s submarine bomb. Credit: GBH/NOVA

Joel Bohy: In the Revolutionary War, you have both sides basically using the same type of firearm. Yes, some were using rifles, depending on what region you were from, and units in the British Army used rifles. But for the most part, they’re all using flintlock mechanisms and smoothbore guns. What comes into play in the Revolution is, on the [Continental] side, they don’t have the supply of arms that the British do. There was an embargo in place in 1774 so that no British arms could be shipped into Boston and North America. So you have a lot of innovation with gunsmiths and blacksmiths and clockmakers, who were taking older gun parts, barrels, and locks and building a functional firearm.

You saw a lot of the Americans at the beginning of the war trying to scrape through with these guns made from old parts and cobbled together. They’re functional. We didn’t really have that lock-making and barrel-making industry here. A lot of that stuff we had imported. So even if a gun was being made here, the firing mechanism and the barrels were imported. So we had to come up with another way to do it.

We started to receive a trickle of arms from the French in 1777, and to my mind, that’s what helped change the outcome of the war. Not only did we have French troops arriving, but we also had French cloth, shoes, hats, tin, powder, flints, and a ton of arms being shipped in. The French took all of their old guns from their last model that they had issued to the army, and they basically sold them all to us. So we had this huge influx of French arms that helped resupply us and made the war viable for us.

Close-up of a cannon firing. Credit: GBH/NOVA

Ars Technica: There are a lot of popular misconceptions about the history of the American Revolution. What are a couple of things that you wish more Americans understood about that conflict?

Joel Bohy: The onset of the American Revolution, April 1775, when the war began—these weren’t just a bunch of farmers who grabbed their rifle from over the fireplace and went out and beat the British Army. These people had been training and arming themselves for a long time. They had been doing it for generations before in wars with Native forces and the French since the 17th century. So by the time the Revolution broke out, they were as prepared as they could be for it.

“The rifle won the Revolution” is one of the things that I hear. No, it didn’t. Like I said, the French arms coming in helped us win the Revolution. A rifle is a tool, just like a smoothbore musket is. It has its benefits and it has its downfalls. It’s slower to load, you can’t mount a bayonet on it, but it’s more accurate, whereas the musket, you can load and fire faster, and you can mount a bayonet. So the gun that really won the Revolution was the musket, not the rifle.

It’s all well and good to be proud of being an American and our history and everything else, but these people just didn’t jump out of bed and fight. These people were training, they were drilling, they were preparing and arming and supplying not just arms, but food, cloth, tents, things that they would need to continue to have an army once the war broke out. It wasn’t just a big—poof—this happened and we won.

Revolutionary War Weapons is now streaming on YouTube and is also available on PBS.

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

An Ars Technica history of the Internet, part 1


Intergalactic Computer Network

In our new 3-part series, we remember the people and ideas that made the Internet.

A collage of vintage computer elements

Credit: Collage by Aurich Lawson

In a very real sense, the Internet, this marvelous worldwide digital communications network that you’re using right now, was created because one man was annoyed at having too many computer terminals in his office.

The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency’s Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik. So Taylor was in the Pentagon, a great place for acronyms like ARPA and IPTO. He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.

Author’s re-creation of Bob Taylor’s office with three teletypes. Credit: Rama & Musée Bolo (Wikipedia/Creative Commons), steve lodefink (Wikipedia/Creative Commons), The Computer Museum @ System Source

In those days, computers took up entire rooms, and users accessed them through teletype terminals—electric typewriters hooked up to either a serial cable or a modem and a phone line. ARPA was funding multiple research projects across the United States, but users of these different systems had no way to share their resources with each other. Wouldn’t it be great if there was a network that connected all these computers?

The dream is given form

Taylor’s predecessor, Joseph “J.C.R.” Licklider, had released a memo in 1963 that whimsically described an “Intergalactic Computer Network” that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn’t able to turn it into a real project. But Taylor knew that he could.

In a 1998 interview, Taylor explained: “In most government funding, there are committees that decide who gets what and who does what. In ARPA, that was not the way it worked. The person who was responsible for the office that was concerned with that particular technology—in my case, computer technology—was the person who made the decision about what to fund and what to do and what not to do. The decision to start the ARPANET was mine, with very little or no red tape.”

Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept.

“Is it going to be hard to do?” Herzfeld asked.

“Oh no. We already know how to do it,” Taylor replied.

“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”

Taylor wasn’t lying—at least, not completely. At the time, there were multiple people around the world thinking about computer networking. Paul Baran, working for RAND, published a paper in 1964 describing how a distributed military networking system could be made resilient even if some nodes were destroyed in a nuclear attack. Over in the UK, Donald Davies independently came up with a similar concept (minus the nukes) and invented a term for the way these types of networks would communicate. He called it “packet switching.”

On a regular phone network, after some circuit switching, a caller and answerer would be connected via a dedicated wire. They had exclusive use of that wire until the call was completed. Computers, by contrast, communicated in short bursts separated by long silences, so it would be a waste for two of them to tie up a whole line for extended periods. But how could many computers talk at the same time without their messages getting mixed up?

Packet switching was the answer. Messages were divided into multiple snippets. The order and destination were included with each message packet. The network could then route the packets in any way that made sense. At the destination, all the appropriate packets were put into the correct order and reassembled. It was like moving a house across the country: It was more efficient to send all the parts in separate trucks, each taking their own route to avoid congestion.
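
A toy example makes the mechanics concrete. The sketch below (illustrative Python, not the actual ARPANET protocols or any modern routing code, and the "UCLA" destination label is just a placeholder) splits a message into numbered packets, lets the "network" deliver them in any order, and reassembles them at the destination:

    # A toy illustration of packet switching: split a message into packets,
    # let the network deliver them out of order, then reassemble them.
    # (Illustrative only -- real networks add headers, checksums, and routing.)
    import random

    def to_packets(message, size=8, dest="UCLA"):
        chunks = [message[i:i + size] for i in range(0, len(message), size)]
        # Each packet carries its destination and its sequence number.
        return [{"dest": dest, "seq": n, "data": chunk}
                for n, chunk in enumerate(chunks)]

    def network_deliver(packets):
        # Packets may take different routes, so they can arrive in any order.
        shuffled = packets[:]
        random.shuffle(shuffled)
        return shuffled

    def reassemble(packets):
        # The destination sorts by sequence number and rebuilds the message.
        ordered = sorted(packets, key=lambda p: p["seq"])
        return "".join(p["data"] for p in ordered)

    message = "Packets can take different routes and still arrive intact."
    received = network_deliver(to_packets(message))
    assert reassemble(received) == message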

A simplified diagram of how packet switching works. Credit: Jeremy Reimer

By the end of 1966, Taylor had hired a program director, Larry Roberts. Roberts sketched a diagram of a possible network on a napkin and met with his team to propose a design. One problem was that each computer on the network would need to use a big chunk of its resources to manage the packets. In a meeting, Wes Clark passed a note to Roberts saying, “You have the network inside-out.” Clark’s alternative plan was to ship a bunch of smaller computers to connect to each host. These dedicated machines would do all the hard work of creating, moving, and reassembling packets.

With the design complete, Roberts sent out a request for proposals for constructing the ARPANET. All they had to do now was pick the winning bid, and the project could begin.

BB&N and the IMPs

IBM, Control Data Corporation, and AT&T were among the first to respond to the request. They all turned it down. Their reasons were the same: None of these giant companies believed the network could be built. IBM and CDC thought the dedicated computers would be too expensive, but AT&T flat-out said that packet switching wouldn’t work on its phone network.

In late 1968, ARPA announced a winner for the bid: Bolt Beranek and Newman. It seemed like an odd choice. BB&N had started as a consulting firm that calculated acoustics for theaters. But the need for calculations led to the creation of a computing division, and its first manager had been none other than J.C.R. Licklider. In fact, some BB&N employees had been working on a plan to build a network even before the ARPA bid was sent out. Robert Kahn led the team that drafted BB&N’s proposal.

Their plan was to create a network of “Interface Message Processors,” or IMPs, out of Honeywell 516 computers. They were ruggedized versions of the DDP-516 16-bit minicomputer. Each had 24 kilobytes of core memory and no mass storage other than a paper tape reader, and each cost $80,000 (about $700,000 today). In comparison, an IBM 360 mainframe cost between $7 million and $12 million at the time.

An original IMP, the world’s first router. It was the size of a large refrigerator. Credit: Steve Jurvetson (CC BY 2.0)

The 516’s rugged appearance appealed to BB&N, who didn’t want a bunch of university students tampering with its IMPs. The computer came with no operating system, but it didn’t really have enough RAM for one. The software to control the IMPs was written on bare metal using the 516’s assembly language. One of the developers was Will Crowther, who went on to create the first computer adventure game.

One other hurdle remained before the IMPs could be put to use: The Honeywell design was missing certain components needed to handle input and output. BB&N employees were dismayed that the first 516, which they named IMP-0, didn’t have working versions of the hardware additions they had requested.

It fell on Ben Barker, a brilliant undergrad student interning at BB&N, to manually fix the machine. Barker was the best choice, even though he had slight palsy in his hands. After several stressful 16-hour days wrapping and unwrapping wires, all the changes were complete and working. IMP-0 was ready.

In the meantime, Steve Crocker at the University of California, Los Angeles, was working on a set of software specifications for the host computers. It wouldn’t matter if the IMPs were perfect at sending and receiving messages if the computers themselves didn’t know what to do with them. Because the host computers were part of important academic research, Crocker didn’t want to seem like he was a dictator telling people what to do with their machines. So he titled his draft a “Request for Comments,” or RFC.

This one act of politeness forever changed the nature of computing. Every change since has been done as an RFC, and the culture of asking for comments pervades the tech industry even today.

RFC No. 1 proposed two types of host software. The first was the simplest possible interface, in which a computer pretended to be a dumb terminal. This was dubbed a “terminal emulator,” and if you’ve ever done any administration on a server, you’ve probably used one. The second was a more complex protocol that could be used to transfer large files. This became FTP, which is still used today.

A single IMP connected to one computer wasn’t much of a network. So it was very exciting in September 1969 when IMP-1 was delivered to BB&N and then shipped via air freight to UCLA. The first test of the ARPANET was done with simultaneous phone support. The plan was to type “LOGIN” to start a login sequence. This was the exchange:

“Did you get the L?”

“I got the L!”

“Did you get the O?”

“I got the O!”

“Did you get the G?”

“Oh no, the computer crashed!”

It was an inauspicious beginning. The computer on the other end was helpfully filling in the “GIN” part of “LOGIN,” but the terminal emulator wasn’t expecting three characters at once and locked up. It was the first time that autocomplete had ruined someone’s day. The bug was fixed, and the test completed successfully.

IMP-2, IMP-3, and IMP-4 were delivered to the Stanford Research Institute (where Doug Engelbart was keen to expand his vision of connecting people), UC Santa Barbara, and the University of Utah.

Now that the four-node test network was complete, the team at BB&N could work with the researchers at each node to put the ARPANET through its paces. They deliberately created the first ever denial of service attack in January 1970, flooding the network with packets until it screeched to a halt.

The original ARPANET, predecessor of the Internet. Circles are IMPs, and rectangles are computers. Credit: DARPA

Surprisingly, many of the administrators of the early ARPANET nodes weren’t keen to join the network.  They didn’t like the idea of anyone else being able to use resources on “their” computers. Taylor reminded them that their hardware and software projects were mostly ARPA-funded, so they couldn’t opt out.

The next month, Stephen Carr, Stephen Crocker, and Vint Cerf released RFC No. 33. It described a Network Control Protocol (NCP) that standardized how the hosts would communicate with each other. After this was adopted, the network was off and running.

J.C.R. Licklider, Bob Taylor, Larry Roberts, Steve Crocker, and Vint Cerf. Credit: US National Library of Medicine, WIRED, Computer Timeline, Steve Crocker, Vint Cerf

The ARPANET grew significantly over the next few years. Important events included the first ever email between two different computers, sent by Ray Tomlinson in late 1971. Another groundbreaking demonstration involved a PDP-10 at Harvard simulating, in real-time, an aircraft landing on a carrier. The data was sent over the ARPANET to an MIT-based graphics terminal, and the wireframe graphical view was shipped back to a PDP-1 at Harvard and displayed on a screen. Although it was primitive and slow, it was technically the first gaming stream.

A big moment came in October 1972 at the International Conference on Computer Communication. This was the first time the network had been demonstrated to the public. Interest in the ARPANET was growing, and people were excited. A group of AT&T executives noticed a brief crash and laughed, confident that they were correct in thinking that packet switching would never work. Overall, however, the demonstration was a resounding success.

But the ARPANET was no longer the only network out there.

The two keystrokes on a Model 33 Teletype that changed history. Credit: Marcin Wichary (CC BY 2.0)

A network of networks

The rest of the world had not been standing still. In Hawaii, Norman Abramson and Franklin Kuo created ALOHAnet, which connected computers on the islands using radio. It was the first public demonstration of a wireless packet switching network. In the UK, Donald Davies’ team developed the National Physical Laboratory (NPL) network. It seemed like a good idea to start connecting these networks together, but they all used different protocols, packet formats, and transmission rates. In 1972, the heads of several national networking projects created an International Networking Working Group. Cerf was chosen to lead it.

The first attempt to bridge this gap was SATNET, also known as the Atlantic Packet Satellite Network. Using satellite links, it connected the US-based ARPANET with networks in the UK. Unfortunately, SATNET itself used its own set of protocols. In true tech fashion, an attempt to make a universal standard had created one more standard instead.

Robert Kahn asked Vint Cerf to try and fix these problems once and for all. They came up with a new plan called the Transmission Control Protocol, or TCP. The idea was to connect different networks through specialized computers, called “gateways,” that translated and forwarded packets. TCP was like an envelope for packets, making sure they got to the right destination on the correct network. Because some networks were not guaranteed to be reliable, when one computer successfully received a complete and undamaged message, it would send an acknowledgement (ACK) back to the sender. If the ACK wasn’t received in a certain amount of time, the message was retransmitted.
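
The acknowledge-and-retransmit loop at the heart of that design can be sketched in a few lines. The code below is a simplified stop-and-wait model in Python, not the real TCP state machine (which uses sliding windows, checksums, and adaptive timeouts); it just shows a sender resending each packet over a lossy link until an ACK comes back:

    # A toy stop-and-wait model of TCP's acknowledge-and-retransmit idea.
    import random

    def unreliable_send(packet, loss_rate=0.3):
        """Pretend network link: randomly drops packets (and thus their ACKs)."""
        if random.random() < loss_rate:
            return None                      # lost -- no acknowledgement arrives
        return {"ack": packet["seq"]}        # receiver acknowledges this packet

    def send_reliably(packets, max_retries=10):
        for packet in packets:
            for attempt in range(max_retries):
                reply = unreliable_send(packet)
                if reply and reply["ack"] == packet["seq"]:
                    print(f"packet {packet['seq']} acknowledged after {attempt + 1} attempt(s)")
                    break                    # ACK received, move on to the next packet
            else:
                raise RuntimeError(f"gave up on packet {packet['seq']}")

    send_reliably([{"seq": n, "data": f"chunk {n}"} for n in range(5)])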

In December 1974, Cerf, Yogen Dalal, and Carl Sunshine wrote a complete specification for TCP. Three years later, in 1977, Cerf and Kahn, along with a dozen others, demonstrated the first three-network system. The demo connected packet radio, the ARPANET, and SATNET, all using TCP. Afterward, Cerf, Jon Postel, and Danny Cohen suggested a small but important change: They should take out all the routing information and put it into a new protocol, called the Internet Protocol (IP). All the remaining stuff, like breaking and reassembling messages, detecting errors, and retransmission, would stay in TCP. Thus, in 1978, the protocol officially became known as, and was forever thereafter, TCP/IP.

A map of the Internet in 1977. White dots are IMPs, and rectangles are host computers. Jagged lines connect to other networks. Credit: The Computer History Museum

If the story of creating the Internet was a movie, the release of TCP/IP would have been the triumphant conclusion. But things weren’t so simple. The world was changing, and the path ahead was murky at best.

At the time, joining the ARPANET required leasing high-speed phone lines for $100,000 per year. This limited it to large universities, research companies, and defense contractors. The situation led the National Science Foundation (NSF) to propose a new network that would be cheaper to operate. Other educational networks arose at around the same time. While it made sense to connect these networks to the growing Internet, there was no guarantee that this would continue. And there were other, larger forces at work.

By the end of the 1970s, computers had improved significantly. The invention of the microprocessor set the stage for smaller, cheaper computers that were just beginning to enter people’s homes. Bulky teletypes were being replaced with sleek, TV-like terminals. The first commercial online service, CompuServe, was released to the public in 1979. For just $5 per hour, you could connect to a private network, get weather and financial reports, and trade gossip with other users. At first, these systems were completely separate from the Internet. But they grew quickly. By 1987, CompuServe had 380,000 subscribers.

A magazine ad for CompuServe from 1980. Credit: marbleriver

Meanwhile, the adoption of TCP/IP was not guaranteed. At the beginning of the 1980s, the Open Systems Interconnection (OSI) group at the International Standardization Organization (ISO) decided that what the world needed was more acronyms—and also a new, global, standardized networking model.

The OSI model was first drafted in 1980, but it wasn’t published until 1984. Nevertheless, many European governments, and even the US Department of Defense, planned to transition from TCP/IP to OSI. It seemed like this new standard was inevitable.

The seven-layer OSI model. If you ever thought there were too many layers, you’re not alone. Credit: BlueCat Networks

While the world waited for OSI, the Internet continued to grow and evolve. In 1981, the fourth version of the IP protocol, IPv4, was released. On January 1, 1983, the ARPANET itself fully transitioned to using TCP/IP. This date is sometimes referred to as the “birth of the Internet,” although from a user’s perspective, the network still functioned the same way it had for years.

A map of the Internet from 1982. Ovals are networks, and rectangles are gateways. Hosts are not shown, but number in the hundreds. Note the appearance of modern-looking IPv4 addresses. Credit: Jon Postel

In 1986, the NSFNET came online, running under TCP/IP and connected to the rest of the Internet. It also used a new standard, the Domain Name System (DNS). This system, still in use today, maps easy-to-remember names to a machine’s individual IP address. Computer names were assigned “top-level” domains based on their purpose, so you could connect to “frodo.edu” at an educational institution, or “frodo.gov” at a governmental one.
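In modern terms, that lookup is a one-line operation. Here is a minimal Python sketch using the standard library resolver; “example.com” is a stand-in hostname, since the article’s “frodo.edu” is fictional.

```python
import socket

# Ask the system resolver (and ultimately DNS) to map a memorable hostname
# to the numeric IPv4 address that the network actually routes on.
hostname = "example.com"  # stand-in host; the article's "frodo.edu" is fictional
address = socket.gethostbyname(hostname)
print(f"{hostname} -> {address}")
```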

The NSFNET grew rapidly, dwarfing the ARPANET in size. In 1989, the original ARPANET was decommissioned. The IMPs, long since obsolete, were retired. However, all the ARPANET hosts were successfully migrated to other Internet networks. Like a Ship of Theseus, the ARPANET lived on even after every component of it was replaced.

The exponential growth of the ARPANET/Internet during its first two decades. Credit: Jeremy Reimer

Still, experts and pundits predicted that all of these systems would eventually have to transfer over to the OSI model. The people who had built the Internet were not impressed. In 1987, writing in RFC 1000, Crocker said, “If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required.”

The Internet pioneers felt they had spent many years refining and improving a working system. But now, OSI had arrived with a bunch of complicated standards and expected everyone to adopt their new design. Vint Cerf had a more pragmatic outlook. In 1982, he left ARPA for a new job at MCI, where he helped build the first commercial email system (MCI Mail) that was connected to the Internet. While at MCI, he contacted researchers at IBM, Digital, and Hewlett-Packard and convinced them to experiment with TCP/IP. Leadership at these companies still officially supported OSI, however.

The debate raged on through the latter half of the 1980s and into the early 1990s. Tired of the endless arguments, Cerf contacted the head of the National Institute of Standards and Technology (NIST) and asked him to write a blue-ribbon report comparing OSI and TCP/IP. Meanwhile, as it planned a successor to IPv4, the Internet Advisory Board (IAB) was looking at the OSI Connectionless Network Protocol and its 128-bit addressing for inspiration. In an interview with Ars, Vint Cerf explained what happened next.

“It was deliberately misunderstood by firebrands in the IETF [Internet Engineering Task Force] that we are traitors by adopting OSI,” he said. “They raised a gigantic hoo-hah. The IAB was deposed, and the authority in the system flipped. IAB used to be the decision makers, but the fight flips it, and IETF becomes the standard maker.”

To calm everybody down, Cerf performed a striptease at a meeting of the IETF in 1992. He revealed a T-shirt that said “IP ON EVERYTHING.” At the same meeting, David Clark summarized the feelings of the IETF by saying, “We reject kings, presidents, and voting. We believe in rough consensus and running code.”

Vint Cerf strips down to the bare essentials. Credit: Boardwatch and Light Reading

The fate of the Internet

The split design of TCP/IP, which was a small technical choice at the time, had long-lasting political implications. In 2001, David Clark and Marjory Blumenthal wrote a paper that looked back on the Protocol War. They noted that the Internet’s complex functions were performed at the endpoints, while the network itself ran only the IP part and was concerned simply with moving data from place to place. These “end-to-end principles” formed the basis of “… the ‘Internet Philosophy’: freedom of action, user empowerment, end-user responsibility for actions undertaken, and lack of controls ‘in’ the Net that limit or regulate what users can do,” they said.

In other words, the battle between TCP/IP and OSI wasn’t just about two competing sets of acronyms. On the one hand, you had a small group of computer scientists who had spent many years building a relatively open network and wanted to see it continue under their own benevolent guidance. On the other hand, you had a huge collective of powerful organizations that believed they should be in charge of the future of the Internet—and maybe the behavior of everyone on it.

But this impossible argument, along with the ultimate fate of the Internet, was about to be settled, and not by governments, committees, or even the IETF. The world was changed forever by the actions of one man: a mild-mannered computer scientist, born in England and working for a physics research institute in Switzerland.

That’s the story covered in the next article in our series.


I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.


how-the-malleus-maleficarum-fueled-the-witch-trial-craze

How the Malleus maleficarum fueled the witch trial craze


Invention of printing press, influence of nearby cities created perfect conditions for social contagion.

Between 1400 and 1775, a significant upsurge of witch trials swept across early-modern Europe, resulting in the execution of an estimated 40,000–60,000 accused witches. Historians and social scientists have long studied this period in hopes of learning more about how large-scale social changes occur. Some have pointed to the invention of the printing press and the publication of witch-hunting manuals—most notably the highly influential Malleus maleficarum—as a major factor, making it easier for the witch-hunting hysteria to spread across the continent.

The abrupt emergence of the craze and its rapid spread, resulting in a pronounced shift in social behaviors—namely, the often brutal persecution of suspected witches—is consistent with a theory of social change dubbed “ideational diffusion,” according to a new paper published in the journal Theory and Society. Under this theory, new ideas, reinforced by social networks, eventually take root and lead to widespread behavioral changes in a society.

The authors had already been thinking about cultural change and the forces that drive it, including social contagion, especially in the context of large cultural shifts like the Reformation and the Counter-Reformation. One co-author, Steve Pfaff, a sociologist at Chapman University, was working on a project about witch trials in Scotland and was particularly interested in the role the Malleus maleficarum might have played.

“Plenty of other people have written about witch trials, specific trials or places or histories,” co-author Kerice Doten-Snitker, a social scientist with the Santa Fe Institute, told Ars. “We’re interested in building a general theory about change and wanted to use that as a particular opportunity. We realized that the printing of the Malleus maleficarum was something we could measure, which is useful when you want to do empirical work, not just theoretical work.”

Ch-ch-ch-changes…

The Witch, No. 1, a c. 1892 lithograph by Joseph E. Baker, shows a woman in a courtroom dock with arms outstretched before a judge and jury. Credit: Public domain

Modeling how sweeping cultural change happens has been a hot research topic for decades, hitting the cultural mainstream with the publication of Malcolm Gladwell’s 2000 bestseller The Tipping Point. Researchers continue to make advances in this area. University of Pennsylvania sociologist Damon Centola, for instance, published How Behavior Spreads: the Science of Complex Contagions in 2018, in which he applied new lessons learned in epidemiology—on how viral epidemics spread—to our understanding of how social networks can broadly alter human behavior. But while epidemiological modeling might be useful for certain simple forms of social contagion—people come into contact with something and it spreads rapidly, like a viral meme or hit song—other forms of social contagion are more complicated, per Doten-Snitker.

Doten-Snitker et al.’s ideational diffusion model differs from Centola’s in some critical respects. For cases like the spread of witch trials, “It’s not just that people are coming into contact with a new idea, but that there has to be something cognitively that is happening,” said Doten-Snitker. “People have to grapple with the ideas and undergo some kind of idea adoption. We talk about this as reinterpreting the social world. They have to rethink what’s happening around them in ways that make them think that not only are these attractive new ideas, but also those new ideas prescribe different types of behavior. You have to act differently because of what you’re encountering.”

The authors chose to focus on social networks and trade routes for their analysis of the witch trials, building on prior research that prioritized broader economic and environmental factors. Cultural elites were already exchanging ideas through letters, but published books added a new dimension to those exchanges. Researchers studying 21st century social contagion can download massive amounts of online data from social networks. That kind of data is sparse from the medieval era. “We don’t have the same archives of communication,” said Doten-Snitker. “There’s this dual thing happening: the book itself, and people sharing information, arguing back and forth with each other” about new ideas.

The stages of the ideational diffusion model. Credit: K. Doten-Snitker et al., 2024

So she and her co-authors turned to trade routes to determine which cities were more central and thus more likely to be focal points of new ideas and information. “The places that are more central in these trade networks have more stuff passing through and are more likely to come into contact with new ideas from multiple directions—specifically ideas about witchcraft,” said Doten-Snitker. Then they looked at which of 553 cities in Central Europe held their first witch trials, and when, as well as those where the Malleus maleficarum and similar manuals had been published.

Social contagion

They found that each new published edition of the Malleus maleficarum corresponded with a subsequent increase in witch trials. But that wasn’t the only contributing factor; trends in neighboring cities also influenced the increase, resulting in a slow-moving ripple effect that spread across the continent. “What’s the behavior of neighboring cities?” said Doten-Snitker. “Are they having witch trials? That makes your city more likely to have a witch trial when you have the opportunity.”

In epidemiological models like Centola’s, the pattern of change is a slow start with early adoption that then picks up speed and spreads before slowing down again as a saturation point is reached, because most people have now adopted the new idea or technology. That doesn’t happen with witch trials or other complex social processes such as the spread of medieval antisemitism. “Most things don’t actually spread that widely; they don’t reach complete saturation,” said Doten-Snitker. “So we need to have theories that build that in as well.”
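For comparison, the S-shaped adoption curve those epidemiological models predict is just a logistic function. The sketch below computes only that textbook curve, with arbitrary parameters; it is not the authors’ ideational diffusion model, which is designed precisely to capture cases that never reach full saturation.

```python
import math

# The textbook S-curve from simple contagion models: a logistic function.
# Growth rate and midpoint are arbitrary illustrative values; this is not
# the authors' ideational diffusion model.

def logistic_adoption(t, growth_rate=1.0, midpoint=10.0):
    """Fraction of a population that has adopted an idea by time t."""
    return 1.0 / (1.0 + math.exp(-growth_rate * (t - midpoint)))

for t in range(0, 21, 4):
    # Slow start, rapid middle, then saturation near 100 percent adoption.
    print(t, round(logistic_adoption(t), 3))
```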

In the case of witch trials, the publication of the Malleus maleficarum helped shift medieval attitudes toward witchcraft, from something that wasn’t viewed as a particularly pressing problem to something evil that was menacing society. The tome also offered practical advice on what should be done about it. “So there’s changing ideas about witchcraft and this gets coupled with, well, you need to do something about it,” said Doten-Snitker. “Not only is witchcraft bad, but it’s a threat. So you have a responsibility as a community to do something about witches.”

The term “witch hunt” gets bandied about frequently in modern times, particularly on social media, and is generally understood to reference a mob mentality unleashed on a given target. But Doten-Snitker emphasizes that medieval witch trials were not “mob justice”; they were organized affairs, with official accusations made to an organized local judiciary that collected and evaluated evidence, using the Malleus maleficarum and similar treatises as a guide. The process, she said, is similar to how today’s governments adopt new policies.

Why conspiracy theories take hold

Cities where witch trials did and did not take place in Central Europe, 1400–1679, as well as those with printed copies of the Malleus maleficarum. Credit: K. Doten-Snitker et al., 2024

The authors developed their model using the witch trials as a useful framework, but there are contemporary implications, particularly with regard to the rampant spread of misinformation and conspiracy theories via social media. These can also lead to changes in real-world behavior, including violent outbreaks like the January 6, 2021, attack on the US Capitol or, more recently, threats aimed at FEMA workers in the wake of Hurricane Helene. Doten-Snitker thinks their model could help identify the emergence of certain telltale patterns, notably the combination of the spread of misinformation or conspiracy theories on social media along with practical guidelines for responding.

“People have talked about the ways that certain conspiracy theories end up making sense to people,” said Doten-Snitker. “It’s because they’re constructing new ways of thinking about their world. This is why people start with one conspiracy theory belief that is then correlated with belief in others. It’s because you’ve already started rebuilding your image of what’s happening in the world around you and that serves as a basis for how you should act.”

On the plus side, “It’s actually hard for something that feels compelling to certain people to spread throughout the whole population,” she said. “We should still be concerned about ideas that spread that could be socially harmful. We just need to figure out where it might be most likely to happen and focus our efforts in those places rather than assuming it is a global threat.”

There was a noticeable sharp decline in both the frequency and intensity of witch trial persecutions from 1679 onward, raising the question of how such cultural shifts eventually run their course. That aspect is not directly addressed by their model, according to Doten-Snitker, but the model does provide a framework for the kinds of things that might signal a similar major shift, such as people starting to push back against extreme responses or practices. In the case of the tail end of the witch trial craze, for instance, there was increased pressure to prioritize clear and consistent judicial practices that excluded extreme measures such as confessions extracted through torture or the use of dreams as evidence of witchcraft.

“That then supplants older ideas about what is appropriate and how you should behave in the world and you could have a de-escalation of some of the more extremist tendencies,” said Doten-Snitker. “It’s not enough to simply say those ideas or practices are wrong. You have to actually replace it with something. And that is something that is in our model. You have to get people to re-interpret what’s happening around them and what they should do in response. If you do that, then you are undermining a worldview rather than just criticizing it.”

Theory and Society, 2024. DOI: 10.1007/s11186-024-09576-1


Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


due-to-ai-fakes,-the-“deep-doubt”-era-is-here

Due to AI fakes, the “deep doubt” era is here

Credit: Memento | Aurich Lawson

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault, that contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events that rely on that media to function—we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.


natgeo-documents-salvage-of-tuskegee-airman’s-lost-wwii-plane-wreckage

NatGeo documents salvage of Tuskegee Airman’s lost WWII plane wreckage

Remembering a hero this Juneteenth —

The Real Red Tails investigates the fatal crash of 2nd Lt. Frank Moody in 1944.

Michigan’s State Maritime Archaeologist Wayne R. Lusardi takes notes underwater at the Lake Huron WWII wreckage of 2nd Lt. Frank Moody’s P-39 Airacobra. Moody, one of the famed Tuskegee Airmen, fatally crashed in 1944. Credit: National Geographic

In April 1944, a pilot with the Tuskegee Airmen, Second Lieutenant Frank Moody, was on a routine training mission when his plane malfunctioned. Moody lost control of the aircraft and plunged to his death in the chilly waters of Lake Huron. His body was recovered two months later, but the airplane was left at the bottom of the lake—until now. Over the last few years, a team of divers working with the Tuskegee Airmen National Historical Museum in Detroit has been diligently recovering the various parts of Moody’s plane to determine what caused the pilot’s fatal crash.

That painstaking process is the centerpiece of The Real Red Tails, a new documentary from National Geographic narrated by Sheryl Lee Ralph (Abbott Elementary). The documentary features interviews with the underwater archaeologists working to recover the plane, as well as firsthand accounts from Moody’s fellow airmen and stunning underwater footage from the wreck itself.

The Tuskegee Airmen were the first Black military pilots in the US Armed Forces and helped pave the way for the desegregation of the military. The men painted the tails of their P-47 planes red, earning them the nickname the Red Tails. (They initially flew Bell P-39 Airacobras like Moody’s downed plane, and later flew P-51 Mustangs.) It was then-First Lady Eleanor Roosevelt who helped tip popular opinion in favor of the fledgling unit when she flew with the Airmen’s chief instructor, C. Alfred Anderson, in March 1941. The Airmen earned praise for their skill and bravery in combat during World War II, with members being awarded three Distinguished Unit Citations, 96 Distinguished Flying Crosses, 14 Bronze Stars, 60 Purple Hearts, and at least one Silver Star.

  • 2nd Lt. Frank Moody’s official military portrait.

    National Archives and Records Administration

  • Tuskegee Airman Lt. Col. (Ret.) Harry T. Stewart.

    National Geographic/Rob Lyall

  • Stewart’s official portrait as a US Army Air Force pilot.

    National Archives and Records Administration

  • Tuskegee Airman Lt. Col. (Ret.) James H. Harvey.

    National Geographic/Rob Lyall

  • Harvey’s official portrait as a US Army Air Force pilot.

    National Archives and Records Administration

  • Stewart and Harvey (second and third, l-r).

    James Harvey

  • Stewart stands next to a restored WWII Mustang airplane at the Tuskegee Airmen National Museum in Detroit.

    National Geographic/Rob Lyall

A father-and-son team, David and Drew Losinski, discovered the wreckage of Moody’s plane in 2014 during cleanup efforts for a sunken barge. They saw what looked like a car door lying on the lake bed that turned out to be a door from a WWII-era P-39. The red paint on the tail proved it had been flown by a “Red Tail” and it was eventually identified as Moody’s plane. The Losinskis then joined forces with Wayne Lusardi, Michigan’s state maritime archaeologist, to explore the remarkably well-preserved wreckage. More than 600 pieces have been recovered thus far, including the engine, the propeller, the gearbox, machine guns, and the main 37mm cannon.

Ars caught up with Lusardi to learn more about this fascinating ongoing project.

Ars Technica: The area where Moody’s plane was found is known as Shipwreck Alley. Why have there been so many wrecks—of both ships and airplanes—in that region?

Wayne Lusardi: Well, the Great Lakes are big, and if you haven’t been on them, people don’t really understand they’re literally inland seas. Consequently, there has been a lot of maritime commerce on the lakes for hundreds of years. Wherever there’s lots of ships, there’s usually lots of accidents. It’s just the way it goes. What we have in the Great Lakes, especially around some places in Michigan, are really bad navigation hazards: hidden reefs, rock piles that are just below the surface that are miles offshore and right near the shipping lanes, and they often catch ships. We have bad storms that crop up immediately. We have very chaotic seas. All of those combined to take out lots of historic vessels. In Michigan alone, there are about 1,500 shipwrecks; in the Great Lakes, maybe close to 10,000 or so.

One of the biggest causes of airplanes getting lost offshore here is fog. Especially before they had good navigation systems, pilots got lost in the fog and sometimes crashed into the lake or just went missing altogether. There are also thunderstorms, weather conditions that impact air flight here, and a lot of ice and snow storms.

Just like commercial shipping, the aviation heritage of the Great Lakes is extensive; a lot of the bigger cities on the Eastern Seaboard extend into the Great Lakes. It’s no surprise that they populated the waterfront, the shorelines first, and in the early part of the 20th century, started connecting them through aviation. The military included the Great Lakes in their training regimes because during World War I, the conditions that you would encounter in the Great Lakes, like flying over big bodies of water, or going into remote areas to strafe or to bomb, mimicked what pilots would see in the European theater during the first World War. When Selfridge Field near Detroit was developed by the Army Air Corps in 1917, it was the farthest northern military air base in the United States, and it trained pilots to fly in all-weather conditions to prepare them for Europe.


shackleton-died-on-board-the-quest;-ship’s-wreckage-has-just-been-found

Shackleton died on board the Quest; ship’s wreckage has just been found

A ship called Quest —

“His final voyage kind of ended that Heroic Age of Exploration.”

Ernest Shackleton died on board the Quest in 1922. Forty years later, the ship sank off Canada’s Atlantic Coast. Credit: Tore Topp/Royal Canadian Geographical Society

Famed polar explorer Ernest Shackleton defied the odds to survive the sinking of his ship, Endurance, which became trapped in sea ice in 1914. His luck ran out on his follow-up expedition; he died unexpectedly of a heart attack in 1922 on board a ship called Quest. The ship survived that expedition and sailed for another 40 years, eventually sinking in 1962 after its hull was pierced by ice on a seal-hunting run. Shipwreck hunters have now located the remains of the converted Norwegian sealer in the Labrador Sea, off the coast of Newfoundland, Canada. The wreckage of Endurance was found in pristine condition in 2022 at the bottom of the Weddell Sea.

The Quest expedition’s relatively minor accomplishments might lack the nail-biting drama of the Endurance saga, but the wreck is nonetheless historically significant. “His final voyage kind of ended that Heroic Age of Exploration, of polar exploration, certainly in the south,” renowned shipwreck hunter David Mearns told the BBC. “Afterwards, it was what you would call the scientific age. In the pantheon of polar ships, Quest is definitely an icon.”

As previously reported, Endurance set sail from Plymouth, England, on August 6, 1914, with Shackleton joining his crew in Buenos Aires, Argentina. By January 1915, the ship had become hopelessly locked in sea ice, unable to continue its voyage. For 10 months, the crew endured the freezing conditions, waiting for the ice to break up. The ship’s structure remained intact, but by October 25, Shackleton realized Endurance was doomed. He and his men opted to camp out on the ice some two miles (3.2 km) away, taking as many supplies as they could with them.

Compacted ice and snow continued to fill the ship until a pressure wave hit on November 13, crushing the bow and splitting the main mast—all of which was captured on camera by crew photographer Frank Hurley. Another pressure wave hit in late afternoon November 21, lifting the ship’s stern. The ice floes parted just long enough for Endurance to finally sink into the ocean, before closing again to erase any trace of the wreckage.

When the sea ice finally disintegrated in April 1916, the crew launched lifeboats and managed to reach Elephant Island five days later. Shackleton and five of his men set off for South Georgia later that month to get help—a treacherous 720-mile journey by open boat. A storm blew them off course, and they ended up landing on the unoccupied southern shore. So Shackleton left three men behind while he and two companions navigated dangerous mountain terrain to reach the whaling station at Stromness on May 20. A relief ship collected the other three men and finally arrived back on Elephant Island in August. Miraculously, Shackleton’s entire crew was still alive.

The stern of the good ship Endurance, which sank off the coast of Antarctica in 1915 after being crushed by pack ice. An expedition located the shipwreck in pristine condition in 2022, after nearly 107 years. Credit: Falklands Maritime Heritage Trust/NatGeo

Shackleton’s last voyage

By the time Shackleton got back to England, the country was embroiled in World War I, and many of his men enlisted. Shackleton was considered too old for active service. He was also deeply in debt from the Endurance expedition, earning a living on the lecture circuit. But he still dreamed of making another expedition to the Arctic Ocean north of Alaska to explore the Beaufort Sea. He got seed money (and eventually full funding) from an old school chum, John Quiller Rowett. Shackleton purchased a wooden Norwegian whaler, Foca I, which his wife Emily renamed Quest.


gaming-historians-preserve-what’s-likely-nintendo’s-first-us-commercial

Gaming historians preserve what’s likely Nintendo’s first US commercial

A Mega Mego find —

Mego’s “Time Out” spot pitched Nintendo’s Game & Watch handhelds under a different name.

“So slim you can play it anywhere.”

Gamers of a certain age may remember Nintendo’s Game & Watch line, which predated the cartridge-based Game Boy and offered simple, single-serving LCD games that can fetch a pretty penny at auction today. But even most ancient gamers probably don’t remember Mego’s “Time Out” line, which took the internals of Nintendo’s early Game & Watch titles and rebranded them for an American audience that hadn’t yet heard of the Japanese game maker.

Now, the Video Game History Foundation (VGHF) has helped preserve the original film of an early Mego Time Out commercial, marking the recovered, digitized video as “what we believe is the first commercial for a Nintendo product in the United States.” The 30-second TV spot—which is now available in a high-quality digital transfer for the first time—provides a fascinating glimpse into how marketers positioned some of Nintendo’s earliest games to a public that still needed to be sold on the very idea of portable gaming.

Imagine an “electronic sport”

A 1980 Mego catalog sells Nintendo’s Game & Watch games under the toy company’s “Time Out” branding.

Founded in the 1950s, Mego made a name for itself in the 1970s with licensed movie action figures and early robotic toys like the 2-XL (a childhood favorite of your humble author). In 1980, though, Mego branched out to partner with a brand-new, pre-Donkey Kong Nintendo of America to release rebranded versions of four early Game & Watch titles: Ball (which became Mego’s “Toss-Up”), Vermin (“Exterminator”), Fire (“Fireman Fireman”), and Flagman (“Flag Man”).

While Mego would go out of business by 1983 (long before a 2018 brand revival), in 1980, the company had the pleasure and responsibility of introducing America to Nintendo games for the first time, even if they were being sold under the Mego name. And while home systems like the Atari VCS and Intellivision were already popular with the American public at the time, Mego had to sell the then-new idea of simple black-and-white games you could play away from the living room TV (Milton Bradley Microvision notwithstanding).

The 1980 Mego spot that introduced Nintendo games to the US, now preserved in high-resolution.

That’s where a TV spot from Durona Productions came in. If you were watching TV in the early ’80s, you might have heard an announcer doing a bad Howard Cosell impression selling the Time Out line as “the new electronic sport,” suitable as a pastime for athletes who have been injured jogging or playing tennis or basketball.

The ad also had to introduce even extremely basic gaming functions like “an easy game and a hard game,” high score tracking, and the ability to “tell time” (as Douglas Adams noted, humans were “so amazingly primitive that they still [thought] digital watches [were] a pretty neat idea”). And the ad made a point of highlighting that the game is “so slim you can play it anywhere,” complete with a close-up of the unit fitting in the back pocket of a rollerskater’s tight shorts.

Preserved for all time

This early Nintendo ad wasn’t exactly “lost media” before now; you could find fuzzy, video-taped versions online, including variations that talk up the pocket-sized games as sports “where size and strength won’t help.” But the Video Game History Foundation has now digitized and archived a much higher quality version of the ad, courtesy of an original film reel discovered in an online auction by game collector (and former game journalist) Chris Kohler. Kohler acquired the rare 16 mm film and provided it to VGHF, which in turn reached out to film restoration experts at Movette Film Transfer to help color-correct the faded, 40-plus-year-old print and encode it in full 2K resolution for the first time.

This important historical preservation work is as good an excuse as any to remember a time when toy companies were still figuring out how to convince the public that Nintendo’s newfangled portable games were something that could fit into their everyday life. As VGHF’s Phil Salvador writes, “it feels laser-targeted to the on-the-go yuppie generation of the ’80s with disposable income to spend on electronic toys. There’s shades of how Nintendo would focus on young, trendy, mobile demographics in their more recent marketing campaigns… but we’ve never seen an ad where someone plays Switch in the hospital.”


can-an-online-library-of-classic-video-games-ever-be-legal?

Can an online library of classic video games ever be legal?

Legal eagles —

Preservationists propose access limits, but industry worries about a free “online arcade.”

The Q*Bert’s so bright, I gotta wear shades. Credit: Aurich Lawson | Getty Images | Gottlieb

For years now, video game preservationists, librarians, and historians have been arguing for a DMCA exemption that would allow them to legally share emulated versions of their physical game collections with researchers remotely over the Internet. But those preservationists continue to face pushback from industry trade groups, which worry that an exemption would open a legal loophole for “online arcades” that could give members of the public free, legal, and widespread access to copyrighted classic games.

This long-running argument was joined once again earlier this month during livestreamed testimony in front of the Copyright Office, which is considering new DMCA rules as part of its regular triennial process. During that testimony, representatives of the Software Preservation Network and the Library Copyright Alliance defended their proposal for a system of “individualized human review” to help ensure that temporary remote game access would be granted “primarily for the purposes of private study, scholarship, teaching, or research.”

Lawyer Steve Englund, who represented the ESA at the Copyright Office hearing.

Speaking for the Entertainment Software Association trade group, though, lawyer Steve Englund said the new proposal was “not very much movement” on the part of the proponents and was “at best incomplete.” And when pressed on what would represent “complete” enough protections to satisfy the ESA, Englund balked.

“I don’t think there is at the moment any combination of limitations that ESA members would support to provide remote access,” Englund said. “The preservation organizations want a great deal of discretion to handle very valuable intellectual property. They have yet to… show a willingness on their part in a way that might be comforting to the owners of that IP.”

Getting in the way of research

Research institutions can currently offer remote access to digital copies of works like books, movies, and music due to specific DMCA exemptions issued by the Copyright Office. However, there is no similar exemption that allows for sending temporary digital copies of video games to interested researchers. That means museums like the Strong Museum of Play can only provide access to their extensive game archives if a researcher physically makes the trip to their premises in Rochester, New York.

Currently, the only way for researchers to access these games in the Strong Museum’s collection is to visit Rochester, New York, in person.

During the recent Copyright Office hearing, industry lawyer Robert Rothstein tried to argue that this amounts to more of a “travel problem” than a legal problem that requires new rule-making. But NYU professor Laine Nooney argued back that the need for travel represents “a significant financial and logistical impediment to doing research.”

For Nooney, getting from New York City to the Strong Museum in Rochester would require a five- to six-hour drive “on a good day,” they said, as well as overnight accommodations for any research that’s going to take more than a small part of one day. Because of this, Nooney has only been able to access the Strong collection twice in her career. For researchers who live farther afield—or for grad students and researchers who might not have as much funding—even a single research visit to the Strong might be out of reach.

“You don’t go there just to play a game for a couple of hours,” Nooney said. “Frankly my colleagues in literary studies or film history have pretty routine and regular access to digitized versions of the things they study… These impediments are real and significant and they do impede research in ways that are not equitable compared to our colleagues in other disciplines.”

Limited access

Lawyer Kendra Albert.

During the hearing, lawyer Kendra Albert said the preservationists had proposed the idea of human review of requests for remote access to “strike a compromise” between “concerns of the ESA and the need for flexibility that we’ve emphasized on behalf of preservation institutions.” They compared the proposed system to the one already used to grant access for libraries’ “special collections,” which are not made widely available to all members of the public.

But while preservation institutions may want to provide limited scholarly access, Englund argued that “out in the real world, people want to preserve access in order to play games for fun.” He pointed to public comments made to the Copyright Office from “individual commenters [who] are very interested in playing games recreationally” as evidence that some will want to exploit this kind of system.

Even if an “Ivy League” library would be responsible with a proposed DMCA exemption, Englund worried that less scrupulous organizations might simply provide an online “checkbox” for members of the public who could easily lie about their interest in “scholarly play.” If a human reviewed that checkbox affirmation, it could provide a legal loophole to widespread access to an unlimited online arcade, Englund argued.

Will any restrictions be enough?

VGHF Library Director Phil Salvador.

Phil Salvador of the Video Game History Foundation said that Englund’s concern on this score was overblown. “Building a video game collection is a specialized skill that most libraries do not have the human labor to do, or the expertise, or the resources, or even the interest,” he said.

Salvador estimated that the number of institutions capable of building a physical collection of historical games is in the “single digits.” And that’s before you account for the significant resources needed to provide remote access to those collections; Rhizome Preservation Director Dragan Espenschied said it costs their organization “thousands of dollars a month” to run the sophisticated cloud-based emulation infrastructure needed for a few hundred users to access their Emulation as a Service art archives and gaming retrospectives.

Salvador also made reference to last year’s VGHF study that found a whopping 87 percent of games ever released are out of print, making it difficult for researchers to get access to huge swathes of video game history without institutional help. And the games of most interest to researchers are less likely to have had modern re-releases since they tend to be the “more primitive” early games with “less popular appeal,” Salvador said.

The Copyright Office is expected to rule on the preservation community’s proposed exemption later this year. But for the moment, there is some frustration that the industry has not been at all receptive to the significant compromises the preservation community feels it has made on these potential concerns.

“None of that is ever going to be sufficient to reassure these rights holders that it will not cause harm,” Albert said at the hearing. “If we’re talking about practical realities, I really want to emphasize the fact that proponents have continually proposed compromises that allow preservation institutions to provide the kind of access that is necessary for researchers. It’s not clear to me that it will ever be enough.”


explore-a-digitized-collection-of-doomed-everest-climber’s-letters-home

Explore a digitized collection of doomed Everest climber’s letters home

“Because it’s there” —

Collection includes three letters found on Mallory’s body in 1999, preserved for 75 years.

The final letter from George Mallory from Camp I, Mount Everest, to his wife Ruth Mallory, May 27, 1924. Credit: The Master and Fellows of Magdalene College, Cambridge

In June 1924, a British mountaineer named George Leigh Mallory and a young engineering student named Andrew “Sandy” Irvine set off for the summit of Mount Everest and disappeared—just two casualties of a peak that has claimed over 300 lives to date. Mallory was an alumnus of Magdalene College at the University of Cambridge, which maintains a collection of his personal correspondence, much of it between Mallory and his wife, Ruth. The college has now digitized the entire collection for public access. The letters can be accessed and downloaded here.

“It has been a real pleasure to work with these letters,” said Magdalene College archivist Katy Green. “Whether it’s George’s wife Ruth writing about how she was posting him plum cakes and a grapefruit to the trenches (he said the grapefruit wasn’t ripe enough), or whether it’s his poignant last letter where he says the chances of scaling Everest are ’50 to 1 against us,’ they offer a fascinating insight into the life of this famous Magdalene alumnus.”

As previously reported, Mallory is the man credited with uttering the famous line “because it’s there” in response to a question about why he would risk his life repeatedly to summit Everest. An avid mountaineer, Mallory had already been to the mountain twice before the 1924 expedition: once in 1921 as part of a reconnaissance expedition to produce the first accurate maps of the region and again in 1922—his first serious attempt to summit, although he was forced to turn back on all three attempts. A sudden avalanche killed seven Sherpas on his third try, sparking accusations of poor judgement on Mallory’s part.

Undeterred, Mallory was back in 1924 for the fated Everest expedition that would claim his life at age 37. He aborted his first summit attempt, but on June 4, he and Irvine left Advanced Base Camp (21,330 feet/6,500 meters). They reached Camp 5 on June 6, and Camp 6 the following day, before heading out for the summit on June 8. Team member Noel Odell reported seeing the two men climbing either the First or Second Step around 1 pm before they were “enveloped in a cloud once more.”

Nobody ever saw Mallory and Irvine again, although their spent oxygen tanks were found just below the First Step. Climbers also found Irvine’s ice axe in 1933. Mallory’s body wasn’t found until 1999, when an expedition partially sponsored by Nova and the BBC found the remains on the mountain’s north face, at 26,760 feet (8,157 meters)—just below where Irvine’s axe had been found. The name tags on the clothing read “G. Leigh Mallory.” Personal artifacts confirmed the identity: an altimeter, a pocket knife, snow goggles, a letter, and a bill for climbing equipment from a London supplier. Irvine’s body has yet to be found, despite the best efforts of a 2019 National Geographic expedition, detailed in the riveting 2020 documentary Lost on Everest.

Final page of a letter from Ruth Mallory to George Mallory, March 3, 1924. Credit: The Master and Fellows of Magdalene College, Cambridge

The collection makes for some fascinating reading; Mallory led an adventurous life. Among the highlights of the Magdalene College collection is the final letter Mallory wrote to Ruth before attempting his fateful last summit attempt:

“Darling I wish you the best I can—that your anxiety will be at an end before you get this—with the best news. Which will also be the quickest. It is 50 to 1 against us but we’ll have a whack yet & do ourselves proud. Great love to you. Ever your loving, George.”

Three of the letters were found in Mallory’s jacket pocket 75 years after his disappearance when his body was discovered, exceptionally well-preserved. Other letters detailed his experiences at the Battle of the Somme during World War I; his first reconnaissance expedition to Everest; and the aforementioned second Everest expedition in which seven Sherpas were lost. On a lighter note are letters describing his adventures during a 1923 trip to the Prohibition-era US. (He would ask for milk at speakeasies and get whiskey served to him through a secret hatch.) There are also letters from Ruth—including her only surviving letter to Mallory during his Everest explorations—and from Mallory’s sister, Mary Brooke.


take-a-trip-through-gaming-history-with-this-charming-gdc-display

Take a trip through gaming history with this charming GDC display

Remember when —

Come for the retro Will Wright photo, stay for the game with a pack-in harmonica.

  • Only the most dedicated “Carmen” fans—or North Dakotan educators of a certain age—are likely to have this one in their collections.

    Kyle Orland / VGHF

  • These “pretty cool stickers” came from a “Carmen Day” kit the producer Broderbund sent to schools to encourage themed edutainment activities that went beyond the screen.

    Kyle Orland / VGHF

  • As a nearby placard laments: “When female human characters were depicted in early video games, they often fell into stereotypical roles”—nature-loving girls or sexualized adults being chief among them.

    Kyle Orland / VGHF

  • Despite the lack of diverse female representation in early games, early game ads were often equal-opportunity affairs.

    Kyle Orland / VGHF

  • Don’t be fooled by the wide variety of headshots on these boxes—you needed to invest in “Alter Ego: Female Version” to get the full suite of personas.

    Kyle Orland / VGHF

  • Kyle Orland / VGHF

  • We’re struggling to think of any other video games that came packaged with a harmonica.

    Kyle Orland / VGHF

  • A standard Game Boy Camera hooked up to USB-C output via a customized board. VGHF used the setup to trade customized postcards for donations (see some examples in the background).

    Kyle Orland / VGHF

  • “EXTREME CLOSE-UP IS EXTREMELY SIGNIFICANT.”

    Kyle Orland / VGHF

  • Be the coolest beachgoer in all of Zebes with these promotional sunglasses.

    Kyle Orland / VGHF

  • A ’90s photo of the Maxis team, including a downright baby-faced Will Wright (back row, second from left).

    Kyle Orland / VGHF

  • VGHF’s Phil Salvador told me that this cow was one of the top results when you searched for “’90s mousepad” on eBay.

    Kyle Orland / VGHF

  • The brief heyday of music-based CD-ROM “multimedia” experiences is rightly forgotten by most consumers, and rightly remembered by organizations like VGHF.

    Kyle Orland / VGHF

  • Ever wonder what specific Pantone swatch to use for that perfect “Joker jacket purple”? Wonder no longer!

    Kyle Orland / VGHF

SAN FRANCISCO—Trade shows like the Game Developers Conference and the (dearly departed) E3 are a great chance to see what’s coming down the pike for the game industry. But they can also be a great place to celebrate gaming’s history, as we’ve shown you with any number of on-site photo galleries in years past.

The history display tucked away in a corner of this year’s Game Developers Conference—the first one arranged by the Video Game History Foundation—was a little different. Rather than simply laying out a parcel of random collectibles, as past history-focused booths have, VGHF took a more curated approach, with mini-exhibits focused on specific topics like women in gaming, oddities of gaming music, and an entire case devoted to a little-known entry in a famous edutainment series.

Then there was the central case, devoted to the idea that all sorts of ephemera—from design docs to photos to pre-release prototypes to newsletters to promotional items—were all an integral part of video game history. The organization is practically begging developers, journalists, and fan hoarders of all stripes not to throw out even items that seem like they have no value. After all, today’s trash might be tomorrow’s important historic relic.

As we wrap up GDC (and get to work assembling what we’ve seen into future coverage), please enjoy this gallery of some of the more interesting historical specimens that the VGHF had at this year’s show.

Listing image by Kyle Orland / VGHF
