Features


In the age of AI, we must protect human creativity as a natural resource


Op-ed: As AI outputs flood the Internet, diverse human perspectives are our most valuable resource.

Ironically, our present AI age has shone a bright spotlight on the immense value of human creativity just as breakthroughs in technology threaten to undermine it. As tech giants rush to build newer AI models, their web crawlers vacuum up creative content, and those same models spew floods of synthetic media that risk drowning out the human creative spark in an ocean of pablum.

Given this trajectory, AI-generated content may soon exceed the entire corpus of historical human creative works, making the preservation of the human creative ecosystem not just an ethical concern but an urgent imperative. The alternative is nothing less than a gradual homogenization of our cultural landscape, where machine learning flattens the richness of human expression into a mediocre statistical average.

A limited resource

By ingesting billions of creations, chatbots learn to talk, and image synthesizers learn to draw. Along the way, the AI companies behind them treat our shared culture like an inexhaustible resource to be strip-mined, with little thought for the consequences.

But human creativity isn’t the product of an industrial process; it’s inherently throttled precisely because we are finite biological beings who draw inspiration from real lived experiences while balancing creativity with the necessities of life—sleep, emotional recovery, and limited lifespans. Creativity comes from making connections, and it takes energy, time, and insight for those connections to be meaningful. Until recently, a human brain was a prerequisite for making those kinds of connections, and there’s a reason why that is valuable.

Every human brain isn’t just a store of data—it’s a knowledge engine that thinks in a unique way, creating novel combinations of ideas. Instead of having one “connection machine” (an AI model) duplicated a million times, we have more than eight billion neural networks, each with a unique perspective. Relying on the cognitive diversity of human thought helps us escape the monolithic thinking that may emerge if everyone were to draw from the same AI-generated sources.

Today, the AI industry’s business models unintentionally echo the ways in which early industrialists approached forests and fisheries—as free inputs to exploit without considering ecological limits.

Just as pollution from early factories unexpectedly damaged the environment, AI systems risk polluting the digital environment by flooding the Internet with synthetic content. Like a forest that needs careful management to thrive or a fishery vulnerable to collapse from overexploitation, the creative ecosystem can be degraded even if the potential for imagination remains.

Depleting our creative diversity may become one of the hidden costs of AI, but that diversity is worth preserving. If we let AI systems deplete or pollute the human outputs they depend on, what happens to AI models—and ultimately to human society—over the long term?

AI’s creative debt

Every AI chatbot or image generator exists only because of human works, and many traditional artists argue strongly against current AI training approaches, labeling them plagiarism. Tech companies tend to disagree, although their positions vary. For example, in 2023, imaging giant Adobe took an unusual step by training its Firefly AI models solely on licensed stock photos and public domain works, demonstrating that alternative approaches are possible.

Adobe’s licensing model offers a contrast to companies like OpenAI, which rely heavily on scraping vast amounts of Internet content without always distinguishing between licensed and unlicensed works.

Photo of a mining dumptruck and water tank in an open pit copper mine.

OpenAI has argued that this type of scraping constitutes “fair use” and effectively claims that competitive AI models at current performance levels cannot be developed without relying on unlicensed training data, despite Adobe’s alternative approach.

The “fair use” argument often hinges on the legal concept of “transformative use,” the idea that using works for a fundamentally different purpose from creative expression—such as identifying patterns for AI—does not violate copyright. Generative AI proponents also often argue that this process mirrors how human artists learn from the world around them.

Meanwhile, artists are expressing growing concern about losing their livelihoods as corporations turn to cheap, instantaneously generated AI content. They also call for clear boundaries and consent-driven models rather than allowing developers to extract value from their creations without acknowledgment or remuneration.

Copyright as crop rotation

This tension between artists and AI reveals a deeper ecological perspective on creativity itself. Copyright’s time-limited nature was designed as a form of resource management, like crop rotation or regulated fishing seasons that allow for regeneration. Copyright expiration isn’t a bug; its designers hoped it would ensure a steady replenishment of the public domain, feeding the ecosystem from which future creativity springs.

On the other hand, purely AI-generated outputs cannot be copyrighted in the US, potentially setting up an unprecedented explosion of public domain content—though much of it will consist of smoothed-over imitations of human perspectives.

Treating human-generated content solely as raw material for AI training disrupts this ecological balance between “artist as consumer of creative ideas” and “artist as producer.” Repeated legislative extensions of copyright terms have already significantly delayed the replenishment cycle, keeping works out of the public domain for much longer than originally envisioned. Now, AI’s wholesale extraction approach further threatens this delicate balance.

The resource under strain

Our creative ecosystem is already showing measurable strain from AI’s impact, from tangible present-day infrastructure burdens to concerning future possibilities.

Aggressive AI crawlers already effectively function as denial-of-service attacks on certain sites, with Cloudflare documenting GPTBot’s immediate impact on traffic patterns. Wikimedia’s experience provides clear evidence of current costs: AI crawlers caused a documented 50 percent bandwidth surge, forcing the nonprofit to divert limited resources to defensive measures rather than to its core mission of knowledge sharing. As Wikimedia says, “Our content is free, our infrastructure is not.” Many of these crawlers demonstrably ignore established technical boundaries like robots.txt files.
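For context, honoring robots.txt is trivial for a crawler that wants to comply. Here is a minimal sketch, using Python’s standard library, of the check a polite bot is expected to make before fetching a page (the URL is a placeholder; “GPTBot” is the user-agent token OpenAI publishes for its crawler):

```python
from urllib import robotparser

# A compliant crawler reads the site's robots.txt and checks its own
# user-agent token against the rules before requesting any page.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.org/robots.txt")  # placeholder site
rp.read()

page = "https://example.org/some-article"
if rp.can_fetch("GPTBot", page):
    print("robots.txt permits crawling this page")
else:
    print("robots.txt disallows this page; a polite crawler stops here")
```

The point of the sketch is that the barrier is social, not technical: the file is easy to parse and just as easy to ignore.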

Beyond infrastructure strain, our information environment also shows signs of degradation. Google has publicly acknowledged rising volumes of “spammy, low-quality,” often auto-generated content appearing in search results. A Wired investigation found concrete examples of AI-generated plagiarism sometimes outranking original reporting in search results. This kind of digital pollution led Ross Anderson of Cambridge University to compare it to filling oceans with plastic—it’s a contamination of our shared information spaces.

Looking to the future, more risks may emerge. Ted Chiang’s comparison of LLMs to lossy JPEGs offers a framework for understanding potential problems, as each AI generation summarizes web information into an increasingly “blurry” facsimile of human knowledge. The logical extension of this process—what some researchers term “model collapse”—presents a risk of degradation in our collective knowledge ecosystem if models are trained indiscriminately on their own outputs. (However, this differs from carefully designed synthetic data that can actually improve model efficiency.)
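To make the “model collapse” idea concrete, here is a toy illustration (a deliberately simplified sketch, not the setup from any particular study): fit a simple statistical model to some data, then train each new “generation” only on samples drawn from the previous generation’s model. Rare values in the tails get lost along the way, and the distribution tends to narrow over time.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=200)  # stand-in for human-made data

for generation in range(1, 31):
    # "Train" a model on the current data (here, just estimate mean and spread)...
    mu, sigma = data.mean(), data.std()
    # ...then let the next generation learn only from that model's own outputs.
    data = rng.normal(mu, sigma, size=200)
    if generation % 10 == 0:
        print(f"generation {generation}: spread (std) = {sigma:.3f}")
```

Real language models are vastly more complicated, but the underlying feedback loop—each generation learning from a slightly degraded copy of the last—is the concern researchers are pointing to.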

This downward spiral of AI pollution may soon resemble a classic “tragedy of the commons,” in which organizations act from self-interest at the expense of shared resources. If AI developers continue extracting data without limits or meaningful contributions, the shared resource of human creativity could eventually degrade for everyone.

Protecting the human spark

While AI models that simulate creativity in writing, coding, images, audio, or video can achieve remarkable imitations of human works, this sophisticated mimicry currently lacks the full depth of the human experience.

For example, AI models lack a body that endures the pain and travails of human life. They don’t grow over the course of a human lifespan in real time. When an AI-generated output happens to connect with us emotionally, it often does so by imitating patterns learned from a human artist who has actually lived that pain or joy.

A photo of a young woman painter in her art studio.

Even if future AI systems develop more sophisticated simulations of emotional states or embodied experiences, they would still fundamentally differ from human creativity, which emerges organically from lived biological experience, cultural context, and social interaction.

That’s because the world constantly changes. New types of human experience emerge. If an ethically trained AI model is to remain useful, researchers must train it on recent human experiences, such as viral trends, evolving slang, and cultural shifts.

Current AI solutions, like retrieval-augmented generation (RAG), address this challenge somewhat by retrieving up-to-date, external information to supplement their static training data. Yet even RAG methods depend heavily on validated, high-quality human-generated content—the very kind of data at risk if our digital environment becomes overwhelmed with low-quality AI-produced output.
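For readers unfamiliar with the term, here is a bare-bones sketch of the RAG idea (an illustration only, not any vendor’s implementation): rank a store of human-written documents by relevance to the query, then hand the best matches to the model along with the question. The tiny corpus and word-overlap scoring below are stand-ins for the embeddings and vector indexes real systems use.

```python
# Minimal retrieval-augmented generation (RAG) sketch with toy data.
corpus = [
    "The 2025 city council race turned on a dispute over bike lanes.",
    "A new slang term spread through short-form video apps this spring.",
    "Classic sourdough recipes rely on long, cold fermentation.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

query = "What slang spread on short-form video this year?"
context = "\n".join(retrieve(query, corpus))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in a real pipeline, this augmented prompt goes to the language model
```

The dependency is easy to see: if the retrieved documents are themselves low-quality synthetic filler, the augmentation step inherits that pollution.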

This need for high-quality, human-generated data is a major reason why companies like OpenAI have pursued media deals (including a deal signed with Ars Technica parent Condé Nast last August). Yet paradoxically, the same models fed on valuable human data often produce the low-quality spam and slop that floods public areas of the Internet, degrading the very ecosystem they rely on.

AI as creative support

When used carelessly or excessively, generative AI is a threat to the creative ecosystem, but we can’t wholly discount the tech as a tool in a human creative’s arsenal. The history of art is full of technological changes (new pigments, brushes, typewriters, word processors) that transform the nature of artistic production while augmenting human creativity.

Bear with me, because there’s a great deal of nuance here that is easy to miss amid today’s more impassioned reactions to people using AI as a blunt instrument for creating mediocrity.

While many artists rightfully worry about AI’s extractive tendencies, research published in Harvard Business Review indicates that AI tools can potentially amplify rather than merely extract creative capacity, suggesting that a symbiotic relationship is possible under the right conditions.

Inherent in this argument is the idea that responsible use of AI reflects the skill of the user. You can use a paintbrush to paint a wall or to paint the Mona Lisa. Similarly, generative AI can mindlessly fill a canvas with slop, or a human can use it to express their own ideas.

Machine learning tools (such as those in Adobe Photoshop) already help human creatives prototype concepts faster, iterate on variations they wouldn’t have considered, or handle some repetitive production tasks like object removal or audio transcription, freeing humans to focus on conceptual direction and emotional resonance.

These potential positives, however, don’t negate the need for responsible stewardship and respecting human creativity as a precious resource.

Cultivating the future

So what might a sustainable ecosystem for human creativity actually involve?

Legal and economic approaches will likely be key. Governments could legislate that AI training must be opt-in, or at the very least, provide a collective opt-out registry (as the EU’s “AI Act” does).

Other potential mechanisms include robust licensing or royalty systems, such as creating a royalty clearinghouse (like the music industry’s BMI or ASCAP) for efficient licensing and fair compensation. Those fees could help compensate human creatives and encourage them to keep creating well into the future.

Deeper shifts may involve cultural values and governance. Consider Japan’s “Living National Treasures” program, in which the government funds artisans to preserve vital skills and support their work. Could we establish programs that similarly support human creators while also designating certain works or practices as “creative reserves,” funding the further creation of certain creative works even if the economic market for them dries up?

Or a more radical shift might involve an “AI commons”—legally declaring that any AI model trained on publicly scraped data should be owned collectively as a shared public domain, ensuring that its benefits flow back to society and don’t just enrich corporations.

Photo of a family harvesting organic crops on a farm.

Meanwhile, Internet platforms have already been experimenting with technical defenses against industrial-scale AI demands. Examples include proof-of-work challenges, slowdown “tarpits” (e.g., Nepenthes), shared crawler blocklists (“ai.robots.txt”), commercial tools (Cloudflare’s AI Labyrinth), and Wikimedia’s “WE5: Responsible Use of Infrastructure” initiative.
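To give a sense of how the proof-of-work approach works in principle (a generic sketch, not how any of the products above implement it): the server hands each client a random challenge, and the client must burn CPU time finding a nonce whose hash meets a difficulty target before its requests are served. That cost is negligible for one human visitor but adds up quickly at industrial crawl volumes.

```python
import hashlib
import secrets

DIFFICULTY = 4  # required number of leading zero hex digits

def solve(challenge: str) -> int:
    """Client side: brute-force a nonce that satisfies the difficulty target."""
    nonce = 0
    while not hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith("0" * DIFFICULTY):
        nonce += 1
    return nonce

def verify(challenge: str, nonce: int) -> bool:
    """Server side: one cheap hash confirms the client did the work."""
    return hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith("0" * DIFFICULTY)

challenge = secrets.token_hex(8)  # server issues a fresh random challenge
nonce = solve(challenge)          # client pays the computational toll
print(verify(challenge, nonce))   # True: the request gets served
```

Tarpits and labyrinths work on a different principle—wasting the crawler’s time with slow responses or mazes of generated junk—but the economics are similar: raise the per-request cost until mass scraping stops being free.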

These solutions aren’t perfect, and implementing any of them would require overcoming significant practical hurdles. Strict regulations might slow beneficial AI development; opt-out systems burden creators, while opt-in models can be complex to track. Meanwhile, tech defenses often invite arms races. Finding a sustainable, equitable balance remains the core challenge. The issue won’t be solved in a day.

Invest in people

While navigating these complex systemic challenges will take time and collective effort, there is a surprisingly direct strategy that organizations can adopt now: investing in people. Don’t sacrifice human connection and insight to save money with mediocre AI outputs.

Organizations that cultivate unique human perspectives and integrate them with thoughtful AI augmentation will likely outperform those that pursue cost-cutting through wholesale creative automation. Investing in people acknowledges that while AI can generate content at scale, the distinctiveness of human insight, experience, and connection remains priceless.


Benj Edwards is Ars Technica’s Senior AI Reporter and founded the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



Review: Ryzen AI CPU makes this the fastest the Framework Laptop 13 has ever been


With great power comes great responsibility and subpar battery life.

The latest Framework Laptop 13, which asks you to take the good with the bad. Credit: Andrew Cunningham


At this point, the Framework Laptop 13 is a familiar face, an old friend. We have reviewed this laptop five other times, and in that time, the idea of a repairable and upgradeable laptop has gone from a “sounds great if they can pull it off” idea to one that’s become pretty reliable and predictable. And nearly four years out from the original version—which shipped with an 11th-generation Intel Core processor—we’re at the point where an upgrade will get you significant boosts to CPU and GPU performance, plus some other things.

We’re looking at the Ryzen AI 300 version of the Framework Laptop today, currently available for preorder and shipping in Q2 for people who buy one now. The laptop starts at $1,099 for a pre-built version and $899 for a RAM-less, SSD-less, Windows-less DIY version, and we’ve tested the Ryzen AI 9 HX 370 version that starts at $1,659 before you add RAM, an SSD, or an OS.

This board is a direct upgrade to Framework’s Ryzen 7040-series board from mid-2023, with most of the same performance benefits we saw last year when we first took a look at the Ryzen AI 300 series. It’s also, if this matters to you, the first Framework Laptop to meet Microsoft’s requirements for its Copilot+ PC initiative, giving users access to some extra locally processed AI features (including but not limited to Recall) with the promise of more to come.

For this upgrade, Ryzen AI giveth, and Ryzen AI taketh away. This is the fastest the Framework Laptop 13 has ever been (at least, if you spring for the Ryzen AI 9 HX 370 chip that our review unit shipped with). If you’re looking to do some light gaming (or non-Nvidia GPU-accelerated computing), the Radeon 890M GPU is about as good as it gets. But you’ll pay for it in battery life—never a particularly strong point for Framework, and less so here than in most of the Intel versions.

What’s new, Framework?

This Framework update brings the return of colorful translucent accessories, parts you can also add to an older Framework Laptop if you want. Credit: Andrew Cunningham

We’re going to focus on what makes this particular Framework Laptop 13 different from the past iterations. We talk more about the build process and the internals in our review of the 12th-generation Intel Core version, and we ran lots of battery tests with the new screen in our review of the Intel Core Ultra version. We also have coverage of the original Ryzen version of the laptop, with the Ryzen 7 7840U and Radeon 780M GPU installed.

Per usual, every internal refresh of the Framework Laptop 13 comes with another slate of external parts. Functionally, there’s not a ton of exciting stuff this time around—certainly nothing as interesting as the higher-resolution 120 Hz screen option we got with last year’s Intel Meteor Lake update—but there’s a handful of things worth paying attention to.

Functionally, Framework has slightly improved the keyboard, with “a new key structure” on the spacebar and shift keys that “reduce buzzing when your speakers are cranked up.” I can’t really discern a difference in the feel of the keyboard, so this isn’t a part I’d run out to add to my own Framework Laptop, but it’s a fringe benefit if you’re buying an all-new laptop or replacing your keyboard for some other reason.

Keyboard legends have also been tweaked; pre-built Windows versions get Microsoft’s dedicated (and, within limits, customizable) Copilot key, while DIY editions come with a Framework logo on the Windows/Super key (instead of the word “super”) and no Copilot key.

Cosmetically, Framework is keeping the dream of the late ’90s alive with translucent plastic parts, namely the bezel around the display and the USB-C Expansion Modules. I’ll never say no to additional customization options, though I still think that “silver body/lid with colorful bezel/ports” gives the laptop a rougher, unfinished-looking vibe.

Like the other Ryzen Framework Laptops (both 13 and 16), not all of the Ryzen AI board’s four USB-C ports support all the same capabilities, so you’ll want to arrange your ports carefully.

Framework’s recommendations for how to configure the Ryzen AI laptop’s expansion modules. Credit: Framework

Framework publishes a graphic to show you which ports do what; if you’re looking at the laptop from the front, ports 1 and 3 are on the back, and ports 2 and 4 are toward the front. Generally, ports 1 and 3 are the “better” ones, supporting full USB4 speeds instead of USB 3.2 and DisplayPort 2.0 instead of 1.4. But USB-A modules should go in ports 2 or 4 because they’ll consume extra power in bays 1 and 3. All four do support display output, though, which isn’t the case for the Ryzen 7040 Framework board, and all four continue to support USB-C charging.

The situation has improved from the 7040 version of the Framework board, where not all of the ports could do any kind of display output. But it still somewhat complicates the laptop’s customizability story relative to the Intel versions, where any expansion card can go into any port.

I will also say that this iteration of the Framework laptop hasn’t been perfectly stable for me. The problems are intermittent but persistent, despite using the latest BIOS version (3.03 as of this writing) and driver package available from Framework. I had a couple of total-system freezes/crashes, occasional problems waking from sleep, and sporadic rendering glitches in Microsoft Edge. These weren’t problems I’ve had with the other Ryzen AI laptops I’ve used so far or with the Ryzen 7040 version of the Framework 13. They also persisted across two separate clean installs of Windows.

It’s possible/probable that some combination of firmware and driver updates can iron out these problems, and they generally didn’t prevent me from using the laptop the way I wanted to use it, but I thought it was worth mentioning since my experience with new Framework boards has usually been a bit better than this.

Internals and performance

“Ryzen AI” is AMD’s most recent branding update for its high-end laptop chips, but you don’t actually need to care about AI to appreciate the solid CPU and GPU speed upgrades compared to the last-generation Ryzen Framework or older Intel versions of the laptop.

Our Framework Laptop board uses the fastest processor offering: a Ryzen AI 9 HX 370 with four of AMD’s Zen 5 CPU cores, eight of the smaller, more power-efficient Zen 5c cores, and a Radeon 890M integrated GPU with 16 of AMD’s RDNA 3.5 graphics cores.

There are places where the Intel Arc graphics in the Core Ultra 7/Meteor Lake version of the Framework Laptop are still faster than what AMD can offer, though your experience may vary depending on the games or apps you’re trying to use. Generally, our benchmarks show the Arc GPU ahead by a small amount, but it’s not faster across the board.

Relative to other Ryzen AI systems, the Framework Laptop’s graphics performance also suffers somewhat because socketed DDR5 DIMMs don’t run as fast as RAM that’s been soldered to the motherboard. This is one of the trade-offs you’re probably OK with making if you’re looking at a Framework Laptop in the first place, but it’s worth mentioning.

A few actual game benchmarks. Ones with ray-tracing features enabled tend to favor Intel’s Arc GPU, while the Radeon 890M pulls ahead in some other games.

But the new Ryzen chip’s CPU is dramatically faster than Meteor Lake at just about everything, as well as the Ryzen 7 7840U in the older Framework board. This is the fastest the Framework Laptop has ever been, and it’s not particularly close (but if you’re waffling between the Ryzen AI version, the older AMD version that Framework sells for a bit less money, or the Core Ultra 7 version, wait for the battery life results before you spend any money). Power efficiency has also improved for heavy workloads, as demonstrated by our Handbrake video encoding tests—the Ryzen AI chip used a bit less power under heavy load and took less time to transcode our test video, so it uses quite a bit less power overall to do the same work.

Power efficiency tests under heavy load using the Handbrake transcoding tool. Test uses CPU for encoding and not hardware-accelerated GPU-assisted encoding.

We didn’t run specific performance tests on the Ryzen AI NPU, but it’s worth noting that this is also Framework’s first laptop with a neural processing unit (NPU) fast enough to support the full range of Microsoft’s Copilot+ PC features—this was one of the systems I used to test Microsoft’s near-final version of Windows Recall, for example. Intel’s Core Ultra 100 chips, all 200-series Core Ultra chips other than the 200V series (codenamed Lunar Lake), and AMD’s Ryzen 7000- and 8000-series processors often include NPUs, but they don’t meet Microsoft’s performance requirements.

The Ryzen AI chips are also the only Copilot+ compatible processors on the market that Framework could have used while maintaining the Laptop’s current level of upgradeability. Qualcomm’s Snapdragon X Elite and Plus chips don’t support external RAM—at least, Qualcomm only lists support for soldered-down LPDDR5X in its product sheets—and Intel’s Core Ultra 200V processors use RAM integrated into the processor package itself. So if any of those features appeal to you, this is the only Framework Laptop you can buy to take advantage of them.

Battery and power

Battery tests. The Ryzen AI 300 doesn’t do great, though it’s similar to the last-gen Ryzen Framework.

When paired with the higher-resolution screen option and Framework’s 61 WHr battery, the Ryzen AI version of the laptop lasted around 8.5 hours in a PCMark Modern Office battery life test with the screen brightness set to a static 200 nits. This is a fair bit lower than the Intel Core Ultra version of the board, and it’s even worse when compared to what a MacBook Air or a more typical PC laptop will give you. But it’s holding roughly even with the older Ryzen version of the Framework board despite being much faster.

You can improve this situation somewhat by opting for the cheaper, lower-resolution screen, though we didn’t test it with the Ryzen AI board—and Framework won’t sell you the lower-resolution screen with the higher-end chip. But for upgraders using the older panel, the higher-res screen reduced battery life by between 5 and 15 percent in past testing of older Framework Laptops. The slower Ryzen AI 5 and Ryzen AI 7 versions will also likely last a little longer, though Framework usually only sends us the highest-end versions of its boards to test.

A routine update

This combo screwdriver-and-spudger is still the only tool you need to take a Framework Laptop apart. Credit: Andrew Cunningham

It’s weird that my two favorite laptops right now are probably Apple’s MacBook Air and the Framework Laptop 13, but that’s where I am. They represent opposite visions of computing, each of which appeals to a different part of my brain: The MacBook Air is the personal computer at its most appliance-like, the thing you buy (or recommend) if you just don’t want to think about your computer that much. Framework embraces a more traditionally PC-like approach, favoring open standards and interoperable parts; the result is more complicated and chaotic but also more flexible. It’s the thing you buy when you like thinking about your computer.

Framework Laptop buyers continue to pay a price for getting a more repairable and modular laptop. Battery life remains OK at best, and Framework doesn’t seem to have substantially sped up its firmware or driver releases since we talked with them about it last summer. You’ll need to be comfortable taking things apart, and you’ll need to make sure you put the right expansion modules in the right bays. And you may end up paying more than you would to get the same specs from a different laptop manufacturer.

But what you get in return still feels kind of magical, and all the more so because Framework has now been shipping product for four years. The Ryzen AI version of the laptop is probably the one I’d recommend if you were buying a new one, and it’s also a huge leap forward for anyone who bought into the first-generation Framework Laptop a few years ago and is ready for an upgrade. It’s by far the fastest CPU (and, depending on the app, the fastest or second-fastest GPU) Framework has shipped in the Laptop 13. And it’s nice to at least have the option of using Copilot+ features, even if you’re not actually interested in the ones Microsoft is currently offering.

If none of the other Framework Laptops have interested you yet, this one probably won’t, either. But it’s yet another improvement in what has become a steady, consistent sequence of improvements. Mediocre battery life is hard to excuse in a laptop, but if that’s not what’s most important to you, Framework is still offering something laudable and unique.

The good

  • Framework still gets all of the basics right—a matte 3:2 LCD that’s pleasant to look at, a nice-feeling keyboard and trackpad, and a design that has held up well
  • Fastest CPU ever in the Framework Laptop 13, and the fastest or second-fastest integrated GPU
  • First Framework Laptop to support Copilot+ features in Windows, if those appeal to you at all
  • Fun translucent customization options
  • Modular, upgradeable, and repairable—more so than with most laptops, you’re buying a laptop that can change along with your needs and which will be easy to refurbish or hand down to someone else when you’re ready to replace it
  • Official support for both Windows and Linux

The bad

  • Occasional glitchiness that may or may not be fixed with future firmware or driver updates
  • Some expansion modules are slower or have higher power draw if you put them in the wrong place
  • Costs more than similarly specced laptops from other OEMs
  • Still lacks certain display features some users might require or prefer—in particular, there are no OLED, touchscreen, or wide-color-gamut options

The ugly

  • Battery life remains an enduring weak point.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Bicycle bling: All the accessories you’ll need for your new e-bike


To accompany our cargo bike shopper’s guide, here’s the other stuff you’ll want.

Credit: LueratSatichob/Getty Images

If you’ve read our cargo e-bike shopper’s guide, you may be well on your way to owning a new ride. Now comes the fun part.

Part of the joy of diving into a new hobby is researching and acquiring the necessary (and less-than-necessary) stuff. And cycling (or, for the casual or transportation-first rider, “riding bikes”) is no different—there are hundreds of ways to stock up on talismanic, Internet-cool parts and accessories that you may or may not need.

That’s not necessarily a bad thing! And you can even get creative—PC case LEDs serve the same function as a very specific Japanese reflective triangle that hangs from your saddle. But let’s start with the strictly necessary.

This article is aimed at the fully beginner cyclist, but I invite the experienced cyclists among us to fill the comments with anything I’ve missed. If this is your first run at owning a bike that gets ridden frequently, the below is a good starting point to keep you (and your cargo) safe—and your bike running.

First things first: Safety stuff

Helmets

I once was asked by another cargo bike dad, “Are people wearing helmets on these? Is that uncool?”

“You’re already riding the uncoolest bike on earth—buy a helmet,” I told him.

For the most part, any helmet you pick up at a big box store or your local bike shop will do a perfectly fine job keeping your brains inside your skull. Even so, the goodly nerds over at Virginia Tech have partnered with the Insurance Institute for Highway Safety (IIHS) to rate 238 bike helmets using the STAR evaluation system. Sort by your use case and find something within your budget, but I’ve found that something in the $70–$100 range is more than adequate—any less and you’re sacrificing comfort, and any more and you won’t notice the difference. Save your cash.

Giro, Bell, Smith, POC, and Kask are all reputable brands with a wide range of shapes to fit bulbous and diminutive noggins alike.

Additionally, helmets are not “buy it for life” items—manufacturers recommend replacing them every four to five years because the foam and glues degrade with sun exposure. So there’s a built-in upgrade cycle on that one.

Lights

Many cargo e-bikes come with front and rear lights prewired into the electric system. If you opted for an acoustic bike, you’ll want to get some high-lumen visibility from dedicated bike lights (extra bike nerd bonus points for a dynamo system). Front and rear lights can be as cheap as you need or as expensive as you want. Depending on the brands your local bike shop carries, you will find attractive options from Bontrager, Lezyne, and Knog. Just make sure whatever you’re buying is USB-rechargeable and has the appropriate mounts to fit your bike.

Additionally, you can go full Fast and the Furious and get nuts with cheap, adhesive-backed LEDs for fun and safety. I’ve seen light masts on the back of longtails, and I have my Long John blinged out with LEDs that pulse to music. This is 82 percent for the enjoyment of other bike parents.

A minimalist’s mobile toolkit

You will inevitably blow a tire on the side of the road, or something will rattle loose while your kid is screaming at you. With this in mind, I always have an everything-I-need kit in a zip-top bag in my work backpack. Some version of this assemblage lives on every bike I own in its own seat bag, but on my cargo bike, it’s split between the pockets of the atrociously expensive but very well thought-out Fahrer Panel Bags. This kit includes:

A pocket pump

Lezyne is a ubiquitous name in bike accessories, and for good reason. I’ve had the previous version of their Pocket Drive mini pump for the better part of a decade, and it shows no sign of stopping. What sets this pump apart is the retractable reversible tube that connects to your air valve, providing some necessary flexibility as you angrily pump up a tire on the side of the road. I don’t mess with CO2 canisters because I’ve had too many inflators explode due to user error, and they’re not recommended for tubeless systems, which are starting to be far more common.

If you spend any amount of time on bike Instagram and YouTube, you’ve seen pocketable USB-rechargeable air compressors made to replace manual pumps. We haven’t tested any of the most common models yet, but these could be a solid solution if your budget outweighs your desire to be stuck on the side of the road.

The Pocket Drive HV Pump from Lezyne.

A multi-tool

Depending on the style and vintage of your ride, you’ll have at least two to three different-sized bolts or connectors throughout the frame. If you have thru-axle wheels, you may need a 6 mm hex key to remove them in the event of a flat. Crank Brothers makes what I consider to be the most handsome, no-nonsense multi-tools on the market. They have tools in multiple configurations, allowing you to select the sizes that best apply to your gear—no more, no less.

The M20 minitool from Crank Brothers. Credit: Crankbrothers

Tube + patch kit

As long as you’re not counting grams, the brand of bike tube you use does not matter. Make sure it’s the right size for your wheel and tire combo and that it has the correct inflator valve (there are two styles: Presta and Schrader, with the former being more popular for bikes you’d buy at your local shop). Just go into your local bike shop and buy a bunch and keep them for when you need ’em.

The Park Tool patch kit includes vulcanizing glue (I’d recommend avoiding sticker-style patches)—it’s great and cheap, and there’s no excuse for excluding one from your kit. Park Tool makes some really nice bike-specific tools, and it produces This Old House-quality bike repair tutorials hosted by the GOAT Calvin Jones. In the event of a flat, many riders find it sensible to simply swap in a fresh tube and save the patching for when they’re back at their workbench.

With that said, because of their weight and potentially complicated drivetrains, it can be a bit of a pain to get wheels out of a cargo bike to change a tire, so it’s best to practice at home.

A big lock

If you’re regularly locking up outside an office or running errands, you’re going to need to buy (and learn to appropriately use) a lock to protect your investment. I’ve been a happy owner of a few Kryptonite U-Locks over the years, but even these beefy bois are easily defeated by a cordless angle grinder and a few minutes of effort. These days, there are u-locks from Abus, Hiplok, and LiteLok with grinder-resistant coatings that are eye-wateringly expensive, but if your bike costs as much as half of a used Honda Civic, they’re absolutely worth it.

Thing retention

Though you may not always carry stuff, it’s a good idea to be prepared for the day when your grocery run gets out of hand. A small bag with a net, small cam straps, and various sizes of bungee cords has saved my bacon more than once. Looking for a fun gift for the bike parent in your life? Overengineered, beautifully finished cam buckles from Austere Manufacturing are the answer.

Tot totage

Depending on whether we’re on an all-day adventure or just running down to school, I have a rotating inventory of stuff that gets thrown into the front of my bike with my daughter, including:

  • An old UE Wonderboom on a carabiner bumping Frozen club remixes
  • A small bag with snacks and water that goes into a netted area ahead of her feet

And even if it’s not particularly cool, I like to pack a camping blanket like a Rumpl. By the time we’re on our way home, she is invariably tired and wants a place to lay her little helmeted head.

Floor pump

When I first started riding, it didn’t occur to me that one should check their tire pressure before every ride. You don’t have to do this if your tires consistently maintain pressure day-to-day, but I’m a big boy, and it behooves me to call this out. That little pump I recommended above? You don’t want to be using that every day. No, you want what’s called a floor pump.

Silca makes several swervy versions ranging from $150 all the way up to $495. With that said, I’ve had the Lezyne Sport Floor Drive for over 10 years, and I can’t imagine not having it for another 20. Mine has a wood handle, which has taken on some patina and lends a more luxurious feel, and most importantly, it’s totally user-serviceable. This spring, I regreased the seals and changed out the o-rings without any special tools—just a quick trip to the plumbing store. I was also able to upgrade the filler chuck to Lezyne’s new right-angle ABS 1.0 chuck.

The Lezyne Sport Floor Drive 3.5.

No matter what floor pump you go for, at the very least, you’ll want to get one with a pressure gauge. Important tip: Do not just fill your tires to the max pressure on the side of the tire. This will make for an uncomfortable ride, and depending on how fancy of a wheelset you have, it could blow the tire right off the rim. Start with around 80 PSI with 700×28 tires on normal city roads and adjust from there. The days of busting your back at 100 PSI are over, gang.

Hex wrenches

Even if you don’t plan on wrenching on your own bike, it’s handy to have the right tools for making minor fit adjustments and removing your wheels to fix flats. The most commonly used bolts on bikes are metric hex bolts, with Torx bolts used on high-end gear and some small components. A set of Bondhus ball-end Allen wrenches will handle 99 percent of what you need, though fancy German tool manufacturer Wera makes some legitimately drool-worthy wrenches.

If you have blessed your bike with carbon bits (or just want the peace of mind that you’ve cranked down those bolts to the appropriate spec), you may want to pick up a torque wrench. They come in a few flavors geared at the low-torque specs of bikes, in ascending price points and user-friendliness: beam-type, adjustable torque drivers, and ratcheting click wrenches. All should be calibrated at some point, but each comes with its own pros and cons.

Keep in mind that overtightening is just as bad as undertightening because you can crack the component or shear the bolt head off. It happens to the best of us! (Usually after having said, “I don’t feel like grabbing the torque wrench” and just making the clicking sound with your mouth).

Lube

Keeping your chain (fairly) clean and (appropriately) lubricated will extend its life and prolong the life of the rest of your drivetrain. You’ll need to replace the chain once it becomes too worn out, and then every second chain, you’ll want to replace your cassette (the gears). Depending on how well you’ve cared for it, how wet your surroundings are, and how often you’re riding, an 11-speed chain can last anywhere from 1,000 to 1,500 miles, but your mileage may vary.

You can get the max mileage out of your drivetrain by periodically wiping down your chain with an old T-shirt or microfiber towel and reapplying chain lube every 200–300 miles, or counterintuitively, more frequently if you ride less frequently. Your local shop can recommend the lube that best suits your climate and riding environment, but I’m a big fan of Rock’n’Roll Extreme chain lube for my more-or-less dry Northern California rides. The best advice I’ve gotten is that it doesn’t matter what chain lube you use as long as it’s on the chain.

Also, do not use WD-40. That is not a lubricant.

That’s it! There may be a few more items you’ll want to add over time, but this list should give you a great start. Get out there and get riding—and enjoy the hours of further research this article has inevitably prompted.



Resist, eggheads! Universities are not as weak as they have chosen to be.

The wholesale American cannibalism of one of its own crucial appendages—the world-famous university system—has begun in earnest. The campaign is predictably Trumpian, built on a flagrantly pretextual basis and executed with the sort of vicious but chaotic idiocy that has always been a hallmark of the authoritarian mind.

At a moment when the administration is systematically waging war on diversity initiatives of every kind, it has simultaneously discovered that it is really concerned about both “viewpoint diversity” and “antisemitism” on college campuses—and it is using the two issues as a club to beat on the US university system until it either dies or conforms to MAGA ideology.

Reaching this conclusion does not require reading any tea leaves or consulting any oracles; one need only listen to people like Vice President JD Vance, who in 2021 gave a speech called “The Universities are the Enemy” to signal that, like every authoritarian revolutionary, he intended to go after the educated.

“If any of us want to do the things that we want to do for our country,” Vance said, “and for the people who live in it, we have to honestly and aggressively attack the universities in this country.” Or, as conservative activist Christopher Rufo put it in a New York Times piece exploring the attack campaign, “We want to set them back a generation or two.”

The goal is capitulation or destruction. And “destruction” is not a hyperbolic term; some Trump aides have, according to the same piece, “spoken privately of toppling a high-profile university to signal their seriousness.”

Consider, in just a few months, how many battles have been launched:

  • The Trump administration is now snatching non-citizen university students, even those in the country legally, off the streets using plainclothes units and attempting to deport them based on their speech or beliefs.
  • It has opened investigations of more than 50 universities.
  • It has threatened grants and contracts at, among others, Brown ($510 million), Columbia ($400 million), Cornell ($1 billion), Harvard ($9 billion), Penn ($175 million), and Princeton ($210 million).
  • It has reached a widely criticized deal with Columbia that would force Columbia to change protest and security policies but would also single out one academic department (Middle Eastern, South Asian, and African Studies) for enhanced scrutiny. This deal didn’t even get Columbia its $400 million back; it only paved the way for future “negotiations” about the money. And the Trump administration is potentially considering a consent decree with Columbia, giving it leverage over the school for years to come.
  • It has demanded that Harvard audit every department for “viewpoint diversity,” hiring faculty who meet the administration’s undefined standards.
  • Trump himself has explicitly threatened to revoke Harvard’s tax-exempt nonprofit status after it refused to bow to his demands. And the IRS looks ready to do it.
  • The government has warned that it could choke off all international students—an important diplomatic asset but also a key source of revenue—at any school it likes.
  • Ed Martin—the extremely Trumpy interim US Attorney for Washington, DC—has already notified Georgetown that his office will not hire any of that school’s graduates if the school “continues to teach and utilize DEI.”

What’s next? Project 2025 lays it out for us, envisioning the federal government getting heavily involved in accreditation—thus giving the government another way to bully schools—and privatizing many student loans. Right-wing wonks have already begun to push for “a never-ending compliance review” of elite schools’ admissions practices, one that would see the Harvard admissions office filled with federal monitors scrutinizing every single admissions decision. Trump has also called for “patriotic education” in K–12 schools; expect similar demands of universities, though probably under the rubrics of “viewpoint discrimination” and “diversity.”

Universities may tell themselves that they would never comply with such demands, but a school without accreditation and without access to federal funds, international students, and student loan dollars could have trouble surviving for long.

Some of the top leaders in academia are ringing the alarm bells. Princeton’s president, Christopher Eisgruber, wrote a piece in The Atlantic warning that the Trump administration has already become “the greatest threat to American universities since the Red Scare of the 1950s. Every American should be concerned.”

Lee Bollinger, who served as president of both the University of Michigan and Columbia University, gave a fiery interview to the Chronicle of Higher Education in which he said, “We’re in the midst of an authoritarian takeover of the US government… We cannot get ourselves to see how this is going to unfold in its most frightening versions. You neutralize the branches of government; you neutralize the media; you neutralize universities, and you’re on your way. We’re beginning to see the effects on universities. It’s very, very frightening.”

But for the most part, even though faculty members have complained and even sued, administrators have stayed quiet. They are generally willing to fight for their cash in court—but not so much in the court of public opinion. The thinking is apparently that there is little to be gained by antagonizing a ruthless but also chaotic administration that just might flip the money spigot back on as quickly as it was shut off. (See also: tariff policy.)

This academic silence also comes after many universities course-corrected following years of administrators weighing in on global and political events outside a school’s basic mission. When that practice finally caused problems for institutions, as it did following the Gaza/Israel fighting, numerous schools adopted a posture of “institutional neutrality” and stopped offering statements except on core university concerns. This may be wise policy, but unfortunately, schools are clinging to it even though the current moment could not be more central to their mission.

To critics, the public silence looks a lot like “appeasement”—a word used by our sister publication The New Yorker to describe how “universities have cut previously unthinkable ‘deals’ with the Administration which threaten academic freedom.” As one critic put it recently, “still there is no sign of organized resistance on the part of universities. There is not even a joint statement in defense of academic freedom or an assertion of universities’ value to society.”

Even Michael Roth, the president of Wesleyan University, has said that universities’ current “infatuation with institutional neutrality is just making cowardice into a policy.”

Appeasing narcissistic strongmen bent on “dominance” is a fool’s errand, as is entering a purely defensive crouch. Weakness in such moments is only an invitation to the strongman to dominate you further. You aren’t going to outlast your opponent when the intended goal appears to be not momentary “wins” but the weakening of all cultural forces that might resist the strongman. (See also: Trump’s brazen attacks on major law firms and the courts.)

As an Atlantic article put it recently, “Since taking office, the Trump administration has been working to dismantle the global order and the nation’s core institutions, including its cultural ones, to strip them of their power. The future of the nation’s universities is very much at stake. This is not a challenge that can be met with purely defensive tactics.”

The temperamental caution of university administrators means that some can be poor public advocates for their universities in an age of anger and distrust, and they may have trouble finding a clear voice to speak with when they come under thundering public attacks from a government they are more used to thinking of as a funding source.

But the moment demands nothing less. This is not a breeze; this is the whirlwind. And it will leave a state-dependent, nationalist university system in its wake unless academia arises, feels its own power, and non-violently resists.

Fighting back

Finally, on April 14, something happened: Harvard decided to resist in far more public fashion. The Trump administration had demanded, as a condition of receiving $9 billion in grants over multiple years, that Harvard reduce the power of student and faculty leaders, vet every academic department for undefined “viewpoint diversity,” run plagiarism checks on all faculty, share hiring information with the administration, shut down any program related to diversity or inclusion, and audit particular departments for antisemitism, including the Divinity School. (Numerous Jewish groups want nothing to do with the campaign, writing in an open letter that “our safety as Jews has always been tied to the rule of law, to the safety of others, to the strength of civil society, and to the protection of rights and liberties for all.”)

If you think this sounds a lot like government control, giving the Trump administration the power to dictate hiring and teaching practices, you’re not alone; Harvard president Alan Garber rejected the demands in a letter, saying, “The university will not surrender its independence or relinquish its constitutional rights. Neither Harvard nor any other private university can allow itself to be taken over by the federal government.”

The Trump administration immediately responded by cutting billions in Harvard funding, threatening the university’s tax-exempt status, and claiming it might block international students from attending Harvard.

Perhaps Harvard’s example will provide cover for other universities to make hard choices. And these are hard choices. But Columbia and Harvard have already shown that the only way you have a chance at getting the money back is to sell whatever soul your institution has left.

Given that, why not fight? If you have to suffer, suffer for your deepest values.

Fare forward

“Resistance” does not mean a refusal to change, a digging in, a doubling down. No matter what part of the political spectrum you inhabit, universities—like most human institutions—are “target-rich environments” for complaints. To see this, one has only to read about recent battles over affirmative action, the Western canon, “legacy” admissions, the rise and fall of “theory” in the humanities, Gaza/Palestine protests, the “Varsity Blues” scandal, critiques of “meritocracy,” mandatory faculty “diversity statements,” the staggering rise in tuition costs over the last few decades, student deplatforming of invited speakers, or the fact that so many students from elite institutions cannot imagine a higher calling than management consulting. Even top university officials acknowledge there are problems.

Famed Swiss theologian Karl Barth lost his professorship and was forced to leave Germany in 1935 because he would not bend the knee to Adolf Hitler. He knew something about standing up for one’s academic and spiritual values—and about the importance of not letting any approach to the world ossify into a reactionary, bureaucratic conservatism that punishes all attempts at change or dissent. The struggle for knowledge, truth, and justice requires forward movement even as the world changes, as ideas and policies are tested, and as cultures develop. Barth’s phrase for this was “Ecclesia semper reformanda est”—the church must always be reformed—and it applies just as well to the universities where he spent much of his career.

As universities today face their own watershed moment of resistance, they must still find ways to remain intellectually curious and open to the world. They must continue to change, always imperfectly but without fear. It is important that their resistance not be partisan. Universities can only benefit from broad-based social support, and the idea that they are fighting “against conservatives” or “for Democrats” will be deeply unhelpful. (Just as it would be if universities capitulated to government oversight of their faculty hires or gave in to “patriotic education.”)

This is difficult when one is under attack, as the natural reaction is to defend what currently exists. But the assault on the universities is about deeper issues than admissions policies or the role of elite institutions in American life. It is about the rule of law, freedom of speech, scientific research, and the very independence of the university—things that should be able to attract broad social and judicial support if schools do not retreat into ideology.

Why it matters

Ars Technica was founded by grad students and began with a “faculty model” drawn from universities: find subject matter experts and turn them loose to find interesting stories in their domains of expertise, with minimal oversight and no constant meetings.

From Minnesota Bible colleges to the halls of Harvard, from philosophy majors to chemistry PhDs, from undergrads to post-docs, Ars has employed people from a wide range of schools and disciplines. We’ve been shaped by the university system, and we cover it regularly as a source of scientific research and computer science breakthroughs. While we differ in many ways, we recognize the value of a strong, independent, mission-focused university system that, despite current flaws, remains one of America’s storied achievements. And we hope that universities can collectively find the strength to defend themselves, just as we in the media must learn to do.

The assault on universities and on the knowledge they produce has been disorienting in its swiftness, animus, and savagery. But universities are not starfish, flopping about helplessly on a beach while a cruel child slices off their arms one by one. They can do far more than hope to survive another day, regrowing missing limbs in some remote future. They have real power, here and now. But they need to move quickly, they need to move in solidarity, and they need to use the resources that they have, collectively, assembled.

Because, if they aren’t going to use those resources when their very mission comes under assault, what was the point of gathering them in the first place?

Here are a few of those resources.

Money

Cash is not always the most important force in human affairs, but it doesn’t hurt to have a pile of it when facing off against a feral US government. When the government threatened Harvard with multiyear cuts of $9 billion, for instance, it was certainly easier for the university to resist while sitting on a staggering $53 billion endowment. In 2024, the National Association of College and University Business Officers reported that higher ed institutions in the US collectively have over $800 billion in endowment money.

It’s true that many endowment funds are donor-restricted and often invested in non-liquid assets, making them unavailable for immediate use or to bail out university programs whose funding has been cut. But it’s also true that $800 billion is a lot of money—it’s more than the individual GDP of all but two dozen countries.

No trustee of this sort of legacy wants to squander an institution’s future by spending money recklessly, but what point is there in having a massive endowment if it requires your school to become some sort of state-approved adjunct?

Besides, one might choose not to spend that money now only to find that it is soon requisitioned regardless. People in Trump’s orbit have talked for years about placing big new taxes on endowment revenue as a way of bringing universities to heel. Trump himself recently wrote on social media that Harvard “perhaps” should “lose its Tax Exempt Status and be Taxed as a Political Entity if it keeps pushing political, ideological, and terrorist inspired/supporting “Sickness?” Remember, Tax Exempt Status is totally contingent on acting in the PUBLIC INTEREST!”

So spend wisely, but do spend. This is the kind of moment such resources were accumulated to weather.

Students

Fifteen million students are currently enrolled in higher education across the country. The total US population is 341 million people. That means students comprise over 4 percent of the total population; when you add in faculty and staff, higher education’s total share of the population is even greater.

So what? Political science research over the last three decades has looked at nonviolent protest movements and found that they need the active participation of only about 3.5 percent of the population. Most movements that hit that threshold succeed, even in authoritarian states. Higher ed alone has those kinds of numbers.

Students are not a monolith, of course, and many would not participate—nor should universities look at their students merely as potential protesters who might serve university interests. But students have long been known for a willingness to protest, and one of the odd features of the current moment has been that so many students protested the Gaza/Israel conflict even though so few have protested the current government assault on the very schools where they have chosen to spend their time and money. It is hard to say whether schools and their students are simply burned out from recent, bruising protests or whether the will to resist remains.

But if it does, the government assault on higher education could provoke an interesting realignment of forces: students, faculty, and administrators working together for once in resistance and protest, upending the normal dynamics of campus movements. And the numbers exist to make a real national difference if higher ed can rally its own full range of resources.

Institutions

Depending on how you count, the US has around 4,000 colleges and universities. The sheer number and diversity of these institutions is a strength—but only if they can do a better job working together on communications, lobbying, and legal defenses.

Schools are being attacked individually, through targeted threats rather than broad laws targeting all higher education. And because schools are in many ways competitors rather than collaborators, it can be difficult to think in terms of sharing resources or speaking with one voice. But joint action will be essential, given that many smaller schools are already under economic pressure and will have a hard time on their own resisting government demands, threats to their nonprofit status, or moves to block their students from the country or cut them off from loan money.

Plenty of trade associations and professional societies exist within the world of higher education, of course, but they are often dedicated to specific tasks and lack the public standing and authority to make powerful public statements.

Faculty/alumni

The old stereotype of the out-of-touch, tweed-wearing egghead, spending their life lecturing on the lesser plays of Ben Jonson, is itself out of touch. The modern university is stuffed with lawyers, data scientists, computer scientists, cryptographers, marketing researchers, writers, media professionals, and tech policy mavens. They are a serious asset, though universities sometimes leave faculty members to operate so autonomously that group action is difficult or, at least, institutionally unusual. At a time of crisis, that may need to change.

Faculty are an incredible resource because of what they know, of course. Historians and political scientists can offer context and theory for understanding populist movements and authoritarian regimes. Those specializing in dialogue across difference, or in truth and reconciliation movements, or in peace and conflict studies, can offer larger visions for how even deep social conflicts might be transcended. Communications professors can help universities think more carefully about articulating what they do in the public marketplace of ideas. And when you are on the receiving end of vindictive and pretextual legal activity, it doesn’t hurt to have a law school stuffed with top legal minds.

But faculty power extends beyond facts. Relationships with students, across many years, are a hallmark of the best faculty members. When generations of those students have spread out into government, law, and business, they make a formidable network.

Universities that realize the need to fight back already know this. Ed Martin, the interim US Attorney for the District of Columbia, attacked Georgetown in February and asked if it had “eliminated all DEI from your school and its curriculum?” He ended his “clarification” letter by claiming that “no applicant for our fellows program, our summer internship, or employment in our office who is a student or affiliated with a law school or university that continues to teach and utilize DEI will be considered.”

When Georgetown Dean Bill Treanor replied to Martin, he did not back down, noting Martin’s threat to “deny our students and graduates government employment opportunities until you, as Interim United States Attorney for the District of Columbia, approve of our curriculum.” (Martin himself had managed to omit the “interim” part of his title.) Such a threat would violate “the First Amendment’s protection of a university’s freedom to determine its own curriculum and how to deliver it.”

There was no “negotiating” here, no attempt to placate a bully. Treanor barely addressed Martin’s questions. Instead, he politely but firmly noted that the inquiry itself was illegitimate, even under recent Supreme Court jurisprudence and Trump Department of Education policy. And he tied everything in his response to the university’s mission as a Jesuit school committed to “intellectual, ethical, and spiritual understanding.”

The letter’s final paragraph, in which Treanor told Martin that he expected him to back down from his threats, opened with a discussion of Georgetown’s faculty.

Georgetown Law has one of the preeminent faculties in the country, fostering groundbreaking scholarship, educating students in a wide variety of perspectives, and thriving on the robust exchange of ideas. Georgetown Law faculty have educated world leaders, members of Congress, and Justice Department officials, from diverse backgrounds and perspectives.

Implicit in these remarks are two reminders:

  1. Georgetown is home to many top legal minds who aren’t about to be steamrolled by a January 6 defender whose actions in DC have already been so comically outrageous that Sen. Adam Schiff has placed a hold on his nomination to get the job permanently.
  2. Georgetown faculty have good relationships with many powerful people across the globe who are unlikely to sympathize with some legal hack trying to bully their alma mater.

The letter serves as a good reminder: Resist with firmness and rely on your faculty. Incentivize their work, providing the time and resources to write more popular-level distillations of their research or to educate alumni groups about the threats campuses are facing. Get them into the media and onto lecture hall stages. Tap their expertise for internal working groups. Don’t give in to the caricatures but present a better vision of how faculty contribute to students, to research, and to society.

Real estate

Universities collectively possess a real estate portfolio of land and buildings—including lecture halls, stages, dining facilities, stadiums, and dormitories—that would make even a developer like Donald Trump salivate. It’s an incredible resource that is already well-used but might be put toward purposes that meet the moment even more clearly.

Host more talks, not just on narrow specialty topics, but on the kinds of broad-based political debates that a healthy society needs. Make the universities essential places for debate, discussion, and civic organizing. Encourage more campus conferences in the summer, with vastly reduced rates for groups that effectively aid civic engagement, depolarization, and dialogue across political differences. Provide the physical infrastructure for fruitful cross-party political encounters and anti-authoritarian organizing. Use campuses to house regional and national hubs that develop best practices in messaging, legal tactics, local outreach, and community service from students, faculty, and administrators.

Universities do these things, of course; many are filled with “dialogue centers” and civic engagement offices. But such efforts face two criticisms. The first is that many of these resources exist primarily for students, while universities, to survive and thrive, will need to rebuild broader social confidence. The second is that they can be siloed off from the other doings of the university. If “dialogue” is taken care of at the “dialogue center,” then other departments and administrative units may not need to worry about it. But with something as broad and important as “resistance,” the work cannot be confined to particular units.

With so many different resources, from university presses to libraries to lecture halls, academia can do a better job at making its campuses useful both to students and to the surrounding community—so long as the universities know their own missions and make sure their actions align with them.

Athletics

During times of external stress, universities need to operate more than ever out of their core, mission-driven values. While educating the whole person, mentally and physically, is a worthy goal, it is not one that requires universities to submit to a Two Minutes Hate while simultaneously providing mass entertainment and betting material for the gambling-industrial complex.

When up against a state that seeks “leverage” of every kind over the university sector, realize that academia itself controls some of the most popular sports competitions in America. That, too, is leverage, if one knows how to use it.

Such leverage could, of course, be Trumpian in its own bluntness—no March Madness tournament, for instance, so long as thousands of researchers are losing their jobs and health care networks are decimated and the government is insisting on ideological control over hiring and department makeup. (That would certainly be interesting—though quite possibly counterproductive.)

But universities might use their control of NCAA sporting events to better market themselves and their impact—and to highlight what’s really happening to them. Instead, we continue to get the worst kinds of anodyne spots during football and basketball games: frisbee on the quad, inspiring shots of domes and flags, a professor lecturing in front of a chalkboard.

Be creative! But do something. Saying and doing nothing, letting the games go on without comment as the boot heel comes down on the whole sector, is a complete abdication of mission and responsibility.

DOD and cyber research

The Trump administration seems to believe that it has the only thing people want: grant funding. It seems not even to care if broader science funding in the US simply evaporates, if labs close down, or if the US loses its world-beating research edge.

But even if “science” is currently expendable, the US government itself relies heavily on university researchers to produce innovations required by the Department of Defense and the intelligence community. Cryptography, cybersecurity tools, the AI that could power battlefield drone swarms—much of it is produced by universities under contract with the feds. And there’s no simple, short-term way for the government to replace this system.

Even other countries believe that US universities do valuable cyber work for the federal government; China just accused the University of California and Virginia Tech of aiding in an alleged cyberattack by the NSA, for instance.

That gives the larger universities—the ones that often have these contracts—additional leverage. They should find a way to use it.

Medical facilities

Many of the larger universities run sprawling and sophisticated health networks that serve whole communities and regions; indeed, much of the $9 billion in federal money at issue in the Harvard case was going to Harvard’s medical system of labs and hospitals.

If it seems unthinkable to you that the US government would treat the health of its own people as collateral damage in a war to become the Thought Police, remember that this is the same administration that has already tried to stop funds to the state of Maine—funds used to “feed children and disabled adults in schools and care settings across the state”—just because Maine allowed a couple of transgender kids to play on sports teams. What does the one have to do with the other? Nothing—except that the money provides leverage.

But health systems are not simply weapons for the Trump administration to use by refusing or delaying contracts, grants, and reimbursements. Health systems can improve people’s lives in the most tangible of ways. And that means they ought to be shining examples of community support and backing, providing a perfect opportunity to highlight the many good things that universities do for society.

Now, to the extent that these health care systems in the US have suffered from the general flaws of all US health care—lack of universal coverage leading to medical debt and the overuse of emergency rooms by the indigent, huge salaries commanded by doctors, etc.—the Trump war on these systems and on the universities behind them might provide a useful wake-up call from “business as usual.” Universities might use this time to double down on mission-driven values, using these incredible facilities even more to extend care, to lower barriers, and to promote truly public and community health. What better chance to show one’s city, region, and state the value of a university than massively boosting free and easy access to mental and physical health resources? Science research can be esoteric; saving someone’s body or mind is not.

Conclusion

This moment calls out for moral clarity and resolve. It asks universities to take their mission in society seriously and to resist being co-opted by government forces.

But it asks something of all of us, too. University leaders will make their choices, but to stand strong, they need the assistance of students, faculty, and alumni. In an age of polarization, parts of society have grown skeptical about the value of higher education. Some of these people are your friends, family, and neighbors. Universities must continue to make changes as they seek to build knowledge and justice and community, but those of us no longer within their halls and quads also have a part to play in sharing a more nuanced story about the value of the university system, both to our own lives and to the country.

If we don’t, our own degrees may be from institutions that have become almost unrecognizable.

Resist, eggheads! Universities are not as weak as they have chosen to be.

diablo-vs.-darkest-dungeon:-rpg-devs-on-balancing-punishment-and-power

Diablo vs. Darkest Dungeon: RPG devs on balancing punishment and power

For Sigman and the Darkest Dungeon team, it was important to establish an overarching design philosophy from the outset. That said, the details within that framework may change or evolve significantly during development.

“In this age of early access and easily updatable games, balance is a living thing,” Sigman said. “It’s highly iterative throughout the game’s public life. We will update balance based upon community feedback, analytics, evolving metas, and also reflections on our own design philosophies and approaches.”

In Darkest Dungeon 2, a group of adventurers sits by a table, exhausted

A screen for managing inventory and more in Darkest Dungeon II. Credit: Red Hook Studios

The problem, of course, is that every change to an existing game is a double-edged sword. With each update, you risk breaking the very elements you’re trying to fix.

Speaking to that ongoing balancing act, Sigman admits, “It’s not without its challenges. We’ve found that many players eagerly await such updates, but a subset gets really angry when developers change balance elements.”

Getting one of your favorite heroes or abilities nerfed can absolutely sink a game or destroy a strategy you’ve relied on for success. The team relies on a number of strictly mathematical tools to help isolate and solve balance problems, but on some level, it’s an artistic and philosophical question.

“A good example is how to address ‘exploits’ in a game,” Sigman said. “Some games try to hurriedly stamp out all possible exploits. With a single-player game, I think you have more leeway to let some exploits stand. It’s nice to let players get away with some stuff. If you kick sand over every exploit that appears, you remove some of the fun.”

As with so many aspects of game design, perfecting the balance between adversity and empowerment comes down to a simple question.

“One amazing piece of wisdom from Sid Meier, my personal favorite designer, is to remember to ask yourself, ‘Who is having the fun here? The designer or the player?’ It should be the player,” Sigman told us.

It’s the kind of approach that players love to hear. Even when a decision is made to make a game more difficult, particularly an existing game, the change should make the play experience more enjoyable. If it seems like devs are making balance changes just to scale down players’ power, it can begin to feel like you’re being punished for having fun.

The fine balance between power and challenge is a hard one to strike, but what players ultimately want is to have a good time. Sometimes that means feeling like a world-destroying demigod, and sometimes it means squeaking through a bloody boss encounter with a single hit point. Most often, though, you’re looking for a happy medium: a worthy challenge overcome through power and skill.


looking-at-the-universe’s-dark-ages-from-the-far-side-of-the-moon

Looking at the Universe’s dark ages from the far side of the Moon


meet you on the dark side of the moon

Building an observatory on the Moon would be a huge challenge—but it would be worth it.

A composition of the moon with the cosmos radiating behind it

Credit: Aurich Lawson | Getty Images

There is a signal, born in the earliest days of the cosmos. It’s weak. It’s faint. It can barely register on even the most sensitive of instruments. But it contains a wealth of information about the formation of the first stars, the first galaxies, and the mysteries of the origins of the largest structures in the Universe.

Despite decades of searching for this signal, astronomers have yet to find it. The problem is that our Earth is too noisy, making it nearly impossible to capture this whisper. The solution is to go to the far side of the Moon, using its bulk to shield our sensitive instruments from the cacophony of our planet.

Building telescopes on the far side of the Moon would be the greatest astronomical challenge ever considered by humanity. And it would be worth it.

The science

We have been scanning and mapping the wider cosmos for a century now, ever since Edwin Hubble discovered that the Andromeda “nebula” is actually a galaxy sitting 2.5 million light-years away. Our powerful Earth-based observatories have successfully mapped the detailed locations of millions of galaxies, and upcoming observatories like the Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope will map millions more.

And for all that effort, all that technological might and scientific progress, we have surveyed less than 1 percent of the volume of the observable cosmos.

The vast bulk of the Universe will remain forever unobservable to traditional telescopes. The reason is twofold. First, most galaxies will simply be too dim and too far away. Even the James Webb Space Telescope, which is explicitly designed to observe the first generation of galaxies, has such a limited field of view that it can only capture a handful of targets at a time.

Second, there was a time, within the first few hundred million years after the Big Bang, before stars and galaxies had even formed. Dubbed the “cosmic dark ages,” this time naturally makes for a challenging astronomical target because there weren’t exactly a lot of bright sources to generate light for us to look at.

But there was neutral hydrogen. Most of the Universe is made of hydrogen, making it the most common element in the cosmos. Today, almost all of that hydrogen is ionized, existing in a super-heated plasma state. But before the first stars and galaxies appeared, the cosmic reserves of hydrogen were cool and neutral.

Neutral hydrogen is made of a single proton and a single electron. Each of these particles has a quantum property known as spin (which kind of resembles the familiar, macroscopic property of spin, but it’s not quite the same—though that’s a different article). In its lowest-energy state, the proton and electron will have spins oriented in opposite directions. But sometimes, through pure random quantum chance, the electron will spontaneously flip around. Very quickly, the hydrogen notices and gets the electron to flip back to where it belongs. This process releases a small amount of energy in the form of a photon with a wavelength of 21 centimeters.

This quantum transition is exceedingly rare, but with enough neutral hydrogen, you can build a substantial signal. Indeed, observations of 21-cm radiation have been used extensively in astronomy, especially to build maps of cold gas reservoirs within the Milky Way.

So the cosmic dark ages aren’t entirely dark; those clouds of primordial neutral hydrogen are emitting tremendous amounts of 21-cm radiation. But that radiation was emitted in the distant past, well over 13 billion years ago. As it has traveled through the cosmic distances, all those billions of light-years on its way to our eager telescopes, it has experienced the redshift effects of our expanding Universe.

By the time that dark age 21-cm radiation reaches us, it has stretched by a factor of 10, turning the neutral hydrogen signal into radio waves with wavelengths of around 2 meters.
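
To see how that stretch works out, here is a small, illustrative Python sketch (my numbers, not the author’s): the observed wavelength is the rest wavelength multiplied by (1 + z), where z is the redshift. A factor-of-10 stretch corresponds to z ≈ 9, and emission from deeper in the dark ages, at higher redshift, lands at even longer wavelengths and lower frequencies.

```python
C = 299_792_458          # speed of light, m/s
LAMBDA_REST = 0.2110     # rest wavelength of the hydrogen spin-flip line, in meters (~1,420 MHz)

def observed(z):
    """Wavelength (m) and frequency (MHz) of the 21-cm line seen from redshift z."""
    lam = LAMBDA_REST * (1 + z)
    return lam, C / lam / 1e6

for z in (9, 30):        # a factor-of-10 stretch, and emission from deeper in the dark ages
    lam, mhz = observed(z)
    print(f"z = {z:2d}: {lam:4.1f} m  (~{mhz:3.0f} MHz)")
# z =  9:  2.1 m  (~142 MHz)
# z = 30:  6.5 m  (~ 46 MHz)
```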

The astronomy

Humans have become rather fond of radio transmissions in the past century. Unfortunately, the peak of this primordial signal from the dark ages sits right below the FM dial of your radio, which pretty much makes it impossible to detect from Earth. Our emissions are simply too loud, too noisy, and too difficult to remove. Teams of astronomers have devised clever ways to reduce or eliminate interference, featuring arrays scattered around the most desolate deserts in the world, but they have not been able to confirm the detection of a signal.

So those astronomers have turned in desperation to the quietest desert they can think of: the far side of the Moon.

It wasn’t until 1959 that the Soviet Luna 3 probe gave us our first glimpse of the Moon’s far side, and it wasn’t until 2019 that the Chang’e 4 mission made the first soft landing. Compared to the near side, and especially low-Earth orbit, there is very little human activity there. We’ve had more active missions on the surface of Mars than on the lunar far side.

Chang’e-4 landing zone on the far side of the moon. Credit: Xiao Xiao and others (CC BY 4.0)

And that makes the far side of the Moon the ideal location for a dark-age-hunting radio telescope, free from human interference and noise.

Ideas abound to make this a possibility. The first serious attempt was DARE, the Dark Ages Radio Explorer. Rather than attempting the audacious goal of building an actual telescope on the surface, DARE was a NASA-funded concept for an observatory (and when it comes to radio astronomy, an “observatory” can be as simple as a single antenna) that would orbit the Moon and take data while on the side facing away from Earth.

For various bureaucratic reasons, NASA didn’t develop the DARE concept further. But creative astronomers have put forward even bolder proposals.

The FarView concept, for example, is a proposed radio telescope array that would dwarf anything on the Earth. It would be sensitive to frequency ranges between 5 and 40 MHz, allowing it to target the dark ages and the birth of the first stars. The proposed design contains 100,000 individual elements, with each element consisting of a single, simple dipole antenna, dispersed over a staggering 200 square kilometers. It would be infeasible to deliver that many antennae directly to the surface of the Moon. Instead, we’d have to build them, mining lunar regolith and turning it into the necessary components.

The design of this array is what’s called an interferometer. Instead of a single big dish, the individual antennae collect data on their own and then correlate all their signals together later. The effective resolution of an interferometer is the same as a single dish as big as the widest distance among the elements. The downside of an interferometer is that most of the incoming radiation just hits dirt (or in this case, lunar regolith), so the interferometer has to collect a lot of data to build up a decent signal.
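
To put a rough number on that claim: an interferometer’s diffraction-limited resolution is roughly the observing wavelength divided by its longest baseline. The figures below are my own illustrative assumptions, not values from the FarView proposal: a 10-meter wavelength (30 MHz, inside the 5–40 MHz band mentioned above) and a baseline of about 14 kilometers, the width of a square covering 200 square kilometers.

```python
import math

# Rough, illustrative numbers (not from the proposal): resolution ~ lambda / B,
# where B is the longest baseline between antennae.
wavelength_m = 10.0                            # 30 MHz, within FarView's stated band
baseline_m = math.sqrt(200e6)                  # ~14,100 m across a 200 km^2 square footprint

theta_rad = wavelength_m / baseline_m
theta_arcmin = math.degrees(theta_rad) * 60
print(f"~{theta_arcmin:.1f} arcminutes")       # ~2.4 arcminutes; a lone dipole has essentially no directivity
```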

Attempting these kinds of observations on Earth requires constant maintenance and cleaning to remove radio interference, and that interference has essentially sunk all attempts to measure the dark ages so far. But a lunar-based interferometer will have all the quiet observing time it needs, providing a much cleaner and easier-to-analyze stream of data.

If you’re not in the mood for building 100,000 antennae on the Moon’s surface, then another proposal seeks to use the Moon’s natural features—namely, its craters. If you squint hard enough, they kind of look like radio dishes already. The idea behind the project, named the Lunar Crater Radio Telescope, is to find a suitable crater and use it as the support structure for a gigantic, kilometer-wide telescope.

This idea isn’t without precedent. Both the beloved Arecibo and the newcomer FAST observatories used depressions in the natural landscape of Puerto Rico and China, respectively, to support their giant dishes and take most of the load off the engineering. The Lunar Crater Radio Telescope would be larger than both of those combined, and it would be tuned to hunt for dark ages radio signals that we can’t observe using Earth-based observatories because they simply bounce off the Earth’s ionosphere (even before we have to worry about any additional human interference). Essentially, the only way that humanity can access those wavelengths is by going beyond our ionosphere, and the far side of the Moon is the best place to park an observatory.

The engineering

The engineering challenges we need to overcome to achieve these scientific dreams are not small. So far, humanity has only placed a single soft-landed mission on the distant side of the Moon, and both of these proposals require an immense upgrade to our capabilities. That’s exactly why both far-side concepts were funded by NIAC, NASA’s Innovative Advanced Concepts program, which gives grants to researchers who need time to flesh out high-risk, high-reward ideas.

With NIAC funds, the designers of the Lunar Crater Radio Telescope, led by Saptarshi Bandyopadhyay at the Jet Propulsion Laboratory, have already thought of the challenges they will need to overcome to make the mission a success. Their mission leans heavily on another JPL concept, the DuAxel, which consists of a rover that can split into two single-axel rovers connected by a tether.

To build the telescope, several DuAxels are sent to the crater. One of each pair “sits” to anchor itself on the crater wall, while the other crawls down the slope. At the center, they are met by a telescope lander that has deployed guide wires and the wire mesh frame of the telescope (again, it helps for assembly purposes that radio dishes are just strings of metal in various arrangements). The anchored rovers on the crater rim then hoist their companions back up, unfolding the mesh and lofting the receiver above the dish.

The FarView observatory is a much more capable instrument—if deployed, it would be the largest radio interferometer ever built—but it’s also much more challenging. Led by Ronald Polidan of Lunar Resources, Inc., it relies on in-situ manufacturing processes. Autonomous vehicles would dig up regolith, process and refine it, and spit out all the components that make an interferometer work: the 100,000 individual antennae, the kilometers of cabling to run among them, the solar arrays to power everything during lunar daylight, and batteries to store energy for round-the-lunar-clock observing.

If that sounds intense, it’s because it is, and it doesn’t stop there. An astronomical telescope is more than a data collection device. It also needs to crunch some numbers and get that precious information back to a human to actually study it. That means that any kind of far-side observing platform, especially one that, like these proposals, will ingest truly massive amounts of data, would need to make one of two choices.

Choice one is to perform most of the data correlation and processing on the lunar surface, sending back only highly refined products to Earth for further analysis. Achieving that would require landing, installing, and running what is essentially a supercomputer on the Moon, which comes with its own weight, robustness, and power requirements.

The other choice is to keep the installation as lightweight as possible and send the raw data back to Earthbound machines to handle the bulk of the processing and analysis tasks. This kind of data throughput is outright impossible with current technology but could be achieved with experimental laser-based communication strategies.

The future

Astronomical observatories on the far side of the Moon face a bit of a catch-22. To deploy and run a world-class facility, either embedded in a crater or strung out over the landscape, we need some serious lunar manufacturing capabilities. But those same capabilities come with all the annoying radio fuzz that already bedevils Earth-based radio astronomy.

Perhaps the best solution is to open up the Moon to commercial exploitation but maintain the far side as a sort of out-world nature preserve, owned by no company or nation, left to scientists to study and use as a platform for pristine observations of all kinds.

It will take humanity several generations, if not more, to develop the capabilities needed to finally build far-side observatories. But it will be worth it, as those facilities will open up the unseen Universe for our hungry eyes, allowing us to pierce the ancient fog of our Universe’s past, revealing the machinations of hydrogen in the dark ages, the birth of the first stars, and the emergence of the first galaxies. It will be a fountain of cosmological and astrophysical data, the richest possible source of information about the history of the Universe.

From Galileo grinding and polishing his first lenses to the innovations that led to the explosion of digital cameras, astronomy has a storied tradition of turning the technological triumphs needed to achieve science goals into the foundations of various everyday devices that make life on Earth much better. If we’re looking for reasons to industrialize and inhabit the Moon, the noble goal of pursuing a better understanding of the Universe makes for a fine motivation. And we’ll all be better off for it.

Photo of Paul Sutter


an-ars-technica-history-of-the-internet,-part-1

An Ars Technica history of the Internet, part 1


Intergalactic Computer Network

In our new 3-part series, we remember the people and ideas that made the Internet.

A collage of vintage computer elements

Credit: Collage by Aurich Lawson

In a very real sense, the Internet, this marvelous worldwide digital communications network that you’re using right now, was created because one man was annoyed at having too many computer terminals in his office.

The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency’s Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik. So Taylor was in the Pentagon, a great place for acronyms like ARPA and IPTO. He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.

Author’s re-creation of Bob Taylor’s office with three teletypes. Credit: Rama & Musée Bolo (Wikipedia/Creative Commons), steve lodefink (Wikipedia/Creative Commons), The Computer Museum @ System Source

In those days, computers took up entire rooms, and users accessed them through teletype terminals—electric typewriters hooked up to either a serial cable or a modem and a phone line. ARPA was funding multiple research projects across the United States, but users of these different systems had no way to share their resources with each other. Wouldn’t it be great if there was a network that connected all these computers?

The dream is given form

Taylor’s predecessor, Joseph “J.C.R.” Licklider, had released a memo in 1963 that whimsically described an “Intergalactic Computer Network” that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn’t able to turn it into a real project. But Taylor knew that he could.

In a 1998 interview, Taylor explained: “In most government funding, there are committees that decide who gets what and who does what. In ARPA, that was not the way it worked. The person who was responsible for the office that was concerned with that particular technology—in my case, computer technology—was the person who made the decision about what to fund and what to do and what not to do. The decision to start the ARPANET was mine, with very little or no red tape.”

Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept.

“Is it going to be hard to do?” Herzfeld asked.

“Oh no. We already know how to do it,” Taylor replied.

“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”

Taylor wasn’t lying—at least, not completely. At the time, there were multiple people around the world thinking about computer networking. Paul Baran, working for RAND, published a paper in 1964 describing how a distributed military networking system could be made resilient even if some nodes were destroyed in a nuclear attack. Over in the UK, Donald Davies independently came up with a similar concept (minus the nukes) and invented a term for the way these types of networks would communicate. He called it “packet switching.”

On a regular phone network, after some circuit switching, a caller and answerer would be connected via a dedicated wire. They had exclusive use of that wire until the call was completed. Computers, by contrast, communicated in short bursts separated by long stretches of silence, so it would be a waste for two of them to tie up a whole line for extended periods. But how could many computers talk at the same time without their messages getting mixed up?

Packet switching was the answer. Messages were divided into multiple snippets. The order and destination were included with each message packet. The network could then route the packets in any way that made sense. At the destination, all the appropriate packets were put into the correct order and reassembled. It was like moving a house across the country: It was more efficient to send all the parts in separate trucks, each taking its own route to avoid congestion.
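
As a toy illustration of the idea (not the historical implementation), here is how a message can be chopped into numbered packets, delivered out of order, and reassembled at the destination. The packet fields ("seq", "dest") are invented for the example.

```python
import random

def to_packets(message, size=8):
    """Split a message into packets that each carry their order and destination."""
    return [{"seq": i, "dest": "host-B", "data": message[i * size:(i + 1) * size]}
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Put packets back into the correct order and rebuild the original message."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

msg = "Wouldn't it be great if there was a network?"
packets = to_packets(msg)
random.shuffle(packets)            # packets may take different routes and arrive out of order
assert reassemble(packets) == msg  # the receiver still recovers the message intact
```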

A simplified diagram of how packet switching works. Credit: Jeremy Reimer

By the end of 1966, Taylor had hired a program director, Larry Roberts. Roberts sketched a diagram of a possible network on a napkin and met with his team to propose a design. One problem was that each computer on the network would need to use a big chunk of its resources to manage the packets. In a meeting, Wes Clark passed a note to Roberts saying, “You have the network inside-out.” Clark’s alternative plan was to ship a bunch of smaller computers to connect to each host. These dedicated machines would do all the hard work of creating, moving, and reassembling packets.

With the design complete, Roberts sent out a request for proposals for constructing the ARPANET. All they had to do now was pick the winning bid, and the project could begin.

BB&N and the IMPs

IBM, Control Data Corporation, and AT&T were among the first to respond to the request. They all turned it down. Their reasons were the same: None of these giant companies believed the network could be built. IBM and CDC thought the dedicated computers would be too expensive, but AT&T flat-out said that packet switching wouldn’t work on its phone network.

In late 1968, ARPA announced a winner for the bid: Bolt Beranek and Newman. It seemed like an odd choice. BB&N had started as a consulting firm that calculated acoustics for theaters. But the need for calculations led to the creation of a computing division, and its first manager had been none other than J.C.R. Licklider. In fact, some BB&N employees had been working on a plan to build a network even before the ARPA bid was sent out. Robert Kahn led the team that drafted BB&N’s proposal.

Their plan was to create a network of “Interface Message Processors,” or IMPs, out of Honeywell 516 computers. They were ruggedized versions of the DDP-516 16-bit minicomputer. Each had 24 kilobytes of core memory and no mass storage other than a paper tape reader, and each cost $80,000 (about $700,000 today). In comparison, an IBM 360 mainframe cost between $7 million and $12 million at the time.

An original IMP, the world’s first router. It was the size of a large refrigerator. Credit: Steve Jurvetson (CC BY 2.0)

The 516’s rugged appearance appealed to BB&N, which didn’t want a bunch of university students tampering with its IMPs. The computer came with no operating system; it didn’t really have enough RAM for one anyway. The software to control the IMPs was written on bare metal using the 516’s assembly language. One of the developers was Will Crowther, who went on to create the first computer adventure game.

One other hurdle remained before the IMPs could be put to use: The Honeywell design was missing certain components needed to handle input and output. BB&N employees were dismayed that the first 516, which they named IMP-0, didn’t have working versions of the hardware additions they had requested.

It fell to Ben Barker, a brilliant undergrad student interning at BB&N, to manually fix the machine. Barker was the best choice, even though he had slight palsy in his hands. After several stressful 16-hour days wrapping and unwrapping wires, all the changes were complete and working. IMP-0 was ready.

In the meantime, Steve Crocker at the University of California, Los Angeles, was working on a set of software specifications for the host computers. It wouldn’t matter if the IMPs were perfect at sending and receiving messages if the computers themselves didn’t know what to do with them. Because the host computers were part of important academic research, Crocker didn’t want to seem like he was a dictator telling people what to do with their machines. So he titled his draft a “Request for Comments,” or RFC.

This one act of politeness forever changed the nature of computing. Every change since has been done as an RFC, and the culture of asking for comments pervades the tech industry even today.

RFC No. 1 proposed two types of host software. The first was the simplest possible interface, in which a computer pretended to be a dumb terminal. This was dubbed a “terminal emulator,” and if you’ve ever done any administration on a server, you’ve probably used one. The second was a more complex protocol that could be used to transfer large files. This became FTP, which is still used today.

A single IMP connected to one computer wasn’t much of a network. So it was very exciting in September 1969 when IMP-1 was delivered to BB&N and then shipped via air freight to UCLA. The first test of the ARPANET was done with simultaneous phone support. The plan was to type “LOGIN” to start a login sequence. This was the exchange:

“Did you get the L?”

“I got the L!”

“Did you get the O?”

“I got the O!”

“Did you get the G?”

“Oh no, the computer crashed!”

It was an inauspicious beginning. The computer on the other end was helpfully filling in the “GIN” part of “LOGIN,” but the terminal emulator wasn’t expecting three characters at once and locked up. It was the first time that autocomplete had ruined someone’s day. The bug was fixed, and the test completed successfully.

IMP-2, IMP-3, and IMP-4 were delivered to the Stanford Research Institute (where Doug Engelbart was keen to expand his vision of connecting people), UC Santa Barbara, and the University of Utah.

Now that the four-node test network was complete, the team at BB&N could work with the researchers at each node to put the ARPANET through its paces. They deliberately created the first ever denial of service attack in January 1970, flooding the network with packets until it screeched to a halt.

The original ARPANET, predecessor of the Internet. Circles are IMPs, and rectangles are computers. Credit: DARPA

Surprisingly, many of the administrators of the early ARPANET nodes weren’t keen to join the network.  They didn’t like the idea of anyone else being able to use resources on “their” computers. Taylor reminded them that their hardware and software projects were mostly ARPA-funded, so they couldn’t opt out.

The next month, Stephen Carr, Stephen Crocker, and Vint Cerf released RFC No. 33. It described a Network Control Protocol (NCP) that standardized how the hosts would communicate with each other. After this was adopted, the network was off and running.

J.C.R. Licklider, Bob Taylor, Larry Roberts, Steve Crocker, and Vint Cerf. Credit: US National Library of Medicine, WIRED, Computer Timeline, Steve Crocker, Vint Cerf

The ARPANET grew significantly over the next few years. Important events included the first ever email between two different computers, sent by Ray Tomlinson in July 1972. Another groundbreaking demonstration involved a PDP-10 at Harvard simulating, in real-time, an aircraft landing on a carrier. The data was sent over the ARPANET to an MIT-based graphics terminal, and the wireframe graphical view was shipped back to a PDP-1 at Harvard and displayed on a screen. Although it was primitive and slow, it was technically the first gaming stream.

A big moment came in October 1972 at the International Conference on Computer Communication. This was the first time the network had been demonstrated to the public. Interest in the ARPANET was growing, and people were excited. A group of AT&T executives noticed a brief crash and laughed, confident that they were correct in thinking that packet switching would never work. Overall, however, the demonstration was a resounding success.

But the ARPANET was no longer the only network out there.

The two keystrokes on a Model 33 Teletype that changed history. Credit: Marcin Wichary (CC BY 2.0)

A network of networks

The rest of the world had not been standing still. In Hawaii, Norman Abramson and Franklin Kuo created ALOHAnet, which connected computers on the islands using radio. It was the first public demonstration of a wireless packet switching network. In the UK, Donald Davies’ team developed the National Physical Laboratory (NPL) network. It seemed like a good idea to start connecting these networks together, but they all used different protocols, packet formats, and transmission rates. In 1972, the heads of several national networking projects created an International Networking Working Group. Cerf was chosen to lead it.

The first attempt to bridge this gap was SATNET, also known as the Atlantic Packet Satellite Network. Using satellite links, it connected the US-based ARPANET with networks in the UK. Unfortunately, SATNET itself used its own set of protocols. In true tech fashion, an attempt to make a universal standard had created one more standard instead.

Robert Kahn asked Vint Cerf to try and fix these problems once and for all. They came up with a new plan called the Transmission Control Protocol, or TCP. The idea was to connect different networks through specialized computers, called “gateways,” that translated and forwarded packets. TCP was like an envelope for packets, making sure they got to the right destination on the correct network. Because some networks were not guaranteed to be reliable, when one computer successfully received a complete and undamaged message, it would send an acknowledgement (ACK) back to the sender. If the ACK wasn’t received in a certain amount of time, the message was retransmitted.
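
Here is a deliberately stripped-down sketch of that acknowledge-and-retransmit loop. The send and wait_for_ack callables are invented stand-ins rather than any real networking API, and real TCP adds sequence numbers, sliding windows, and congestion control on top of this basic idea.

```python
def reliable_send(packet, send, wait_for_ack, timeout=2.0, max_tries=5):
    """Send a packet, retransmitting until an ACK arrives or we give up."""
    for _ in range(max_tries):
        send(packet)
        if wait_for_ack(packet["seq"], timeout):  # True if the ACK came back in time
            return True                           # delivered and acknowledged
        # No ACK within the timeout: assume the packet (or its ACK) was lost and resend.
    return False

# Pretend transmission functions, just to show the call shape.
ok = reliable_send({"seq": 1, "data": b"hello"},
                   send=lambda p: None,
                   wait_for_ack=lambda seq, timeout: True)
```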

In December 1974, Cerf, Yogen Dalal, and Carl Sunshine wrote a complete specification for TCP. Two years later, Cerf and Kahn, along with a dozen others, demonstrated the first three-network system. The demo connected packet radio, the ARPANET, and SATNET, all using TCP. Afterward, Cerf, Jon Postel, and Danny Cohen suggested a small but important change: They should take out all the routing information and put it into a new protocol, called the Internet Protocol (IP). All the remaining stuff, like breaking and reassembling messages, detecting errors, and retransmission, would stay in TCP. Thus, in 1978, the protocol officially became known as, and was forever thereafter, TCP/IP.

A map of the Internet in 1977. White dots are IMPs, and rectangles are host computers. Jagged lines connect to other networks. Credit: The Computer History Museum

If the story of creating the Internet was a movie, the release of TCP/IP would have been the triumphant conclusion. But things weren’t so simple. The world was changing, and the path ahead was murky at best.

At the time, joining the ARPANET required leasing high-speed phone lines for $100,000 per year. This limited it to large universities, research companies, and defense contractors. The situation led the National Science Foundation (NSF) to propose a new network that would be cheaper to operate. Other educational networks arose at around the same time. While it made sense to connect these networks to the growing Internet, there was no guarantee that this would continue. And there were other, larger forces at work.

By the end of the 1970s, computers had improved significantly. The invention of the microprocessor set the stage for smaller, cheaper computers that were just beginning to enter people’s homes. Bulky teletypes were being replaced with sleek, TV-like terminals. The first commercial online service, CompuServe, was released to the public in 1979. For just $5 per hour, you could connect to a private network, get weather and financial reports, and trade gossip with other users. At first, these systems were completely separate from the Internet. But they grew quickly. By 1987, CompuServe had 380,000 subscribers.

A magazine ad for CompuServe from 1980. Credit: marbleriver

Meanwhile, the adoption of TCP/IP was not guaranteed. At the beginning of the 1980s, the Open Systems Interconnection (OSI) group at the International Organization for Standardization (ISO) decided that what the world needed was more acronyms—and also a new, global, standardized networking model.

The OSI model was first drafted in 1980, but it wasn’t published until 1984. Nevertheless, many European governments, and even the US Department of Defense, planned to transition from TCP/IP to OSI. It seemed like this new standard was inevitable.

The seven-layer OSI model. If you ever thought there were too many layers, you’re not alone. Credit: BlueCat Networks

While the world waited for OSI, the Internet continued to grow and evolve. In 1981, the fourth version of the IP protocol, IPv4, was released. On January 1, 1983, the ARPANET itself fully transitioned to using TCP/IP. This date is sometimes referred to as the “birth of the Internet,” although from a user’s perspective, the network still functioned the same way it had for years.

A map of the Internet from 1982. Ovals are networks, and rectangles are gateways. Hosts are not shown, but number in the hundreds. Note the appearance of modern-looking IPv4 addresses. Credit: Jon Postel

In 1986, the NSFNET came online, running under TCP/IP and connected to the rest of the Internet. It also used a new standard, the Domain Name System (DNS). This system, still in use today, uses easy-to-remember names to point to a machine’s individual IP address. Computer names were assigned “top-level” domains based on their purpose, so you could connect to “frodo.edu” at an educational institution, or “frodo.gov” at a governmental one.
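
At its core, that lookup is a single call in most programming languages. Here is a minimal Python illustration using the standard library; “example.com” is a real public domain standing in for the article’s hypothetical “frodo.edu,” which wouldn’t actually resolve.

```python
import socket

# The heart of what DNS does: map a human-readable name to an IP address.
print(socket.gethostbyname("example.com"))   # prints an IPv4 address, e.g. 93.184.216.34
```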

The NSFNET grew rapidly, dwarfing the ARPANET in size. In 1989, the original ARPANET was decommissioned. The IMPs, long since obsolete, were retired. However, all the ARPANET hosts were successfully migrated to other Internet networks. Like a Ship of Theseus, the ARPANET lived on even after every component of it was replaced.

The exponential growth of the ARPANET/Internet during its first two decades. Credit: Jeremy Reimer

Still, the experts and pundits predicted that all of these systems would eventually have to transfer over to the OSI model. The people who had built the Internet were not impressed. In 1987, writing RFC No. 1000, Crocker said, “If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required.”

The Internet pioneers felt they had spent many years refining and improving a working system. But now, OSI had arrived with a bunch of complicated standards and expected everyone to adopt their new design. Vint Cerf had a more pragmatic outlook. In 1982, he left ARPA for a new job at MCI, where he helped build the first commercial email system (MCI Mail) that was connected to the Internet. While at MCI, he contacted researchers at IBM, Digital, and Hewlett-Packard and convinced them to experiment with TCP/IP. Leadership at these companies still officially supported OSI, however.

The debate raged on through the latter half of the 1980s and into the early 1990s. Tired of the endless arguments, Cerf contacted the head of the National Institute of Standards and Technology (NIST) and asked him to write a blue ribbon report comparing OSI and TCP/IP. Meanwhile, while planning a successor to IPv4, the Internet Advisory Board (IAB) was looking at the OSI Connectionless Network Protocol and its 128-bit addressing for inspiration. In an interview with Ars, Vint Cerf explained what happened next.

“It was deliberately misunderstood by firebrands in the IETF [Internet Engineering Task Force] that we are traitors by adopting OSI,” he said. “They raised a gigantic hoo-hah. The IAB was deposed, and the authority in the system flipped. IAB used to be the decision makers, but the fight flips it, and IETF becomes the standard maker.”

To calm everybody down, Cerf performed a striptease at a meeting of the IETF in 1992. He revealed a T-shirt that said “IP ON EVERYTHING.” At the same meeting, David Clark summarized the feelings of the IETF by saying, “We reject kings, presidents, and voting. We believe in rough consensus and running code.”

Vint Cerf strips down to the bare essentials. Credit: Boardwatch and Light Reading

The fate of the Internet

The split design of TCP/IP, which was a small technical choice at the time, had long-lasting political implications. In 2001, David Clark and Marjory Blumenthal wrote a paper that looked back on the Protocol War. They noted that the Internet’s complex functions were performed at the endpoints, while the network itself ran only the IP part and was concerned simply with moving data from place to place. These “end-to-end principles” formed the basis of “… the ‘Internet Philosophy’: freedom of action, user empowerment, end-user responsibility for actions undertaken, and lack of controls ‘in’ the Net that limit or regulate what users can do,” they said.

In other words, the battle between TCP/IP and OSI wasn’t just about two competing sets of acronyms. On the one hand, you had a small group of computer scientists who had spent many years building a relatively open network and wanted to see it continue under their own benevolent guidance. On the other hand, you had a huge collective of powerful organizations that believed they should be in charge of the future of the Internet—and maybe the behavior of everyone on it.

But this impossible argument and the ultimate fate of the Internet was about to be decided, and not by governments, committees, or even the IETF. The world was changed forever by the actions of one man. He was a mild-mannered computer scientist, born in England and working for a physics research institute in Switzerland.

That’s the story covered in the next article in our series.

Photo of Jeremy Reimer

I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.


“what-the-hell-are-you-doing?”-how-i-learned-to-interview-astronauts,-scientists,-and-billionaires

“What the hell are you doing?” How I learned to interview astronauts, scientists, and billionaires


The best part about journalism is not collecting information. It’s sharing it.

Inside NASA's rare Moon rocks vault (2016)

Sometimes the best place to do an interview is in a clean room. Credit: Lee Hutchinson

I recently wrote a story about the wild ride of the Starliner spacecraft to the International Space Station last summer. It was based largely on an interview with the commander of the mission, NASA astronaut Butch Wilmore.

His account of Starliner’s thruster failures—and his desperate efforts to keep the vehicle flying on course—was riveting. In the aftermath of the story, many readers, people on social media, and real-life friends congratulated me on conducting a great interview. But truth be told, it was pretty much all Wilmore.

Essentially, when I came into the room, he was primed to talk. I’m not sure whether Wilmore was waiting for me specifically, but he pretty clearly wanted to speak with someone about his experiences aboard the Starliner spacecraft. And he chose me.

So was it luck? I’ve been thinking about that. As an interviewer, I certainly don’t have the emotive power of some of the great television interviewers, who are masters of confrontation and drama. It’s my nature to avoid confrontation where possible. But what I do have on my side is experience, more than 25 years now, as well as preparation. I am also genuinely and completely interested in space. And as it happens, these values are important, too.

Interviewing is a craft one does not pick up overnight. During my career, I have had some funny, instructive, and embarrassing moments. Without wanting to seem pretentious or self-indulgent, I thought it might be fun to share some of those stories so you can really understand what it’s like on a reporter’s side of the cassette tape.

March 2003: Stephen Hawking

I had only been working professionally as a reporter at the Houston Chronicle for a few years (and as the newspaper’s science writer for less time still) when the opportunity to interview Stephen Hawking fell into my lap.

What a coup! He was only the world’s most famous living scientist, and he was visiting Texas at the invitation of a local billionaire named George Mitchell. A wildcatter and oilman, Mitchell had grown up in Galveston along the upper Texas coast, marveling at the stars as a kid. He studied petroleum engineering and later developed the controversial practice of fracking. In his later years, Mitchell spent some of his largesse on the pursuits of his youth, including astronomy and astrophysics. This included bringing Hawking to Texas more than half a dozen times in the 1990s and early 2000s.

For an interview with Hawking, one submitted questions in advance. That’s because Hawking was afflicted with Lou Gehrig’s disease and lost the ability to speak in 1985. A computer attached to his wheelchair cycled through letters and sounds, and Hawking clicked a button to make a selection, forming words and then sentences, which were sent to a voice synthesizer. For unprepared responses, it took a few minutes to form a single sentence.

George Mitchell and Stephen Hawking during a Texas visit. Credit: Texas A&M University

What to ask him? I had a decent understanding of astronomy, having majored in it as an undergraduate. But the readership of a metro newspaper was not interested in the Hubble constant or the Schwarzschild radius. I asked him about recent observations of the cosmic microwave background radiation anyway. Perhaps the most enduring response was about the war in Iraq, a prominent topic of the day. “It will be far more difficult to get out of Iraq than to get in,” he said. He was right.

When I met him at Texas A&M University, Hawking was gracious and polite. He answered a couple of questions in person. But truly, it was awkward. Hawking’s time on Earth was limited and his health failing, so it required an age to tap out even short answers. I can only imagine his frustration at the task of communication, which the vast majority of humans take for granted, especially because he had such a brilliant mind and so many deep ideas to share. And here I was, with my banal questions, stealing his time. As I stood there, I wondered whether I should stare at him while he composed a response. Should I look away? I felt truly unworthy.

In the end, it was fine. I even met Hawking a few more times, including at a memorable dinner at Mitchell’s ranch north of Houston, which spans tens of thousands of acres. A handful of the world’s most brilliant theoretical physicists were there. We would all be sitting around chatting, and Hawking would periodically chime in with a response to something brought up earlier. Later on that evening, Mitchell and Hawking took a chariot ride around the grounds. I wonder what they talked about?

Spring 2011: Jane Goodall and Sylvia Earle

By this point, I had written about science for nearly a decade at the Chronicle. In the early part of the year, I had the opportunity to interview noted chimpanzee scientist Jane Goodall and one of the world’s leading oceanographers, Sylvia Earle. Both were coming to Houston to talk about their research and their passion for conservation.

I spoke with Goodall by phone in advance of her visit, and she was so pleasant, so regal. By then, Goodall was 76 years old and had been studying chimpanzees in Gombe Stream National Park in Tanzania for five decades. Looking back over the questions I asked, they’re not bad. They’re just pretty basic. She gave great answers regardless. But there is only so much chemistry you can build with a person over the telephone (or Zoom, for that matter, these days). Being in person really matters in interviewing because you can read cues, and it’s easier to know when to let a pause go. The comfort level is higher. When you’re speaking with someone you don’t know that well, establishing a basic level of comfort is essential to making an all-important connection.

A couple of months later, I spoke with Earle in person at the Houston Museum of Natural Science. I took my older daughter, then nine years old, because I wanted her to hear Earle speak later in the evening. This turned out to be a lucky move for a couple of different reasons. First, my kid was inspired by Earle to pursue studies in marine biology. And more immediately, the presence of a curious 9-year-old quickly warmed Earle to the interview. We had a great discussion about many things beyond just oceanography.

President Barack Obama talks with Dr. Sylvia Earle during a visit to Midway Atoll on September 1, 2016. Credit: Barack Obama Presidential Library

The bottom line is that I remained a fairly pedestrian interviewer back in 2011. That was partly because I did not have deep expertise in chimpanzees or oceanography. And that leads me to another key for a good interview and establishing a rapport. It’s great if a person already knows you, but even if they don’t, you can overcome that by showing genuine interest or demonstrating your deep knowledge about a subject. I would come to learn this as I started to cover space more exclusively and got to know the industry and its key players better.

September 2014: Scott Kelly

To be clear, this was not much of an interview. But it is a fun story.

I spent much of 2014 focused on space for the Houston Chronicle. I pitched the idea of an in-depth series on the sorry state of NASA’s human spaceflight program, which was eventually titled “Adrift.” By immersing myself in spaceflight for months on end, I discovered a passion for the topic and knew that writing about space was what I wanted to do for the rest of my life. I was 40 years old, so it was high time I found my calling.

As part of the series, I traveled to Kazakhstan with a photographer from the Chronicle, Smiley Pool. He is a wonderful guy with a knack for chatting up sources that I, an introvert, lack. During the 13-day trip to Russia and Kazakhstan, we traveled with a reporter from Esquire named Chris Jones, who was working on a long project about NASA astronaut Scott Kelly. Kelly was then training for a yearlong mission to the International Space Station, and he was a big deal.

Jones was a tremendous raconteur and an even better writer—his words, my goodness. We had so much fun over those two weeks, sharing beer, vodka, and Kazakh food. The capstone of the trip was seeing the Soyuz TMA-14M mission launch from the Baikonur Cosmodrome. Kelly was NASA’s backup astronaut for the flight, so he was in quarantine alongside the mission’s primary astronaut. (This was Butch Wilmore, as it turns out). The launch, from a little more than a kilometer away, was still the most spectacular moment of spaceflight I’ve ever observed in person. Like, holy hell, the rocket was right on top of you.

Expedition 43 NASA Astronaut Scott Kelly walks from the Zvjozdnyj Hotel to the Cosmonaut Hotel for additional training, Thursday, March 19, 2015, in Baikonur, Kazakhstan. Credit: NASA/Bill Ingalls

Immediately after the launch, which took place at 1:25 am local time, Kelly was freed from quarantine. This must have been liberating because he headed straight to the bar at the Hotel Baikonur, the nicest watering hole in the small, Soviet-era town. Jones, Pool, and I were staying at a different hotel. Jones got a text from Kelly inviting us to meet him at the bar. Our NASA minders were uncomfortable with this, as the last thing they wanted was to have astronauts presented to the world as anything but sharp, sober-minded people who represent the best of the best. But this was too good to resist.

By the time we got to the bar, Kelly and his companion, the commander of his forthcoming Soyuz flight, Gennady Padalka, were several whiskeys deep. The three of us sat across from Kelly and Padalka, and as one does at 3 am in Baikonur, we started taking shots. The astronauts were swapping stories and talking out of school. At one point, Jones took out his notebook and said that he had a couple of questions. To this, Kelly responded heatedly, “What the hell are you doing?”

Not conducting an interview, apparently. We were off the record. Well, until today at least.

We drank and talked for another hour or so, and it was incredibly memorable. At the time, Kelly was probably the most famous active US astronaut, and here I was throwing down whiskey with him shortly after watching a rocket lift off from the very spot where the Soviets launched the Space Age six decades earlier. In retrospect, this offered a good lesson that the best interviews are often not, in fact, interviews. To get the good information, you need to develop relationships with people, and you do that by talking with them person to person, without a microphone, often with alcohol.

Scott Kelly is a real one for that night.

September 2019: Elon Musk

I have spoken with Elon Musk a number of times over the years, but none was nearly so memorable as a long interview we did for my first book on SpaceX, called Liftoff. That summer, I made a couple of visits to SpaceX’s headquarters in Hawthorne, California, interviewing the company’s early employees and sitting in on meetings in Musk’s conference room with various teams. Because SpaceX is such a closed-off company, it was fascinating to get an inside look at how the sausage was made.

It’s worth noting that this all went down a few months before the onset of the COVID-19 pandemic. In some ways, Musk is the same person he was before the outbreak. But in other ways, he is profoundly different, his actions and words far more political and polemical.

Anyway, I was supposed to interview Musk on a Friday evening at the factory at the end of one of these trips. As usual, Musk was late. Eventually, his assistant texted, saying something had come up. She was desperately sorry, but we would have to do the interview later. I returned to my hotel, downbeat. I had an early flight the next morning back to Houston. But after about an hour, the assistant messaged me again. Musk had to travel to South Texas to get the Starship program moving. Did I want to travel with him and do the interview on the plane?

As I sat on his private jet the next day, late morning, my mind swirled. There would be no one else on the plane but Musk, his three sons (triplets, then 13 years old), two bodyguards, and me. When Musk is in a good mood, an interview can be a delight. He is funny, sharp, and a good storyteller. When Musk is in a bad mood, well, an interview is usually counterproductive. So I fretted. What if Musk was in a bad mood? It would be a super-awkward three and a half hours on the small jet.

Two Teslas drove up to the plane, the first with Musk driving his boys and the second with two security guys. Musk strode onto the jet, saw me, and said he didn’t realize I was going to be on the plane. (A great start to things!) Musk then took out his phone and started a heated conversation about digging tunnels. By this point, I was willing myself to disappear. I just wanted to melt into the leather seat I was sitting in about three feet from Musk.

So much for a good mood for the interview.

As the jet climbed, the phone conversation got worse, but then Musk lost his connection. He put away his phone and turned to me, saying he was free to talk. His mood, almost as if by magic, changed. Since we were discussing the early days of SpaceX at Kwajalein, he gathered the boys around so they could hear about their dad’s earlier days. The interview went shockingly well, and at least part of the reason has to be that I knew the subject matter deeply, had prepared, and was passionate about it. We spoke for nearly two hours before Musk asked if he might have some time with his kids. They spent the rest of the flight playing video games, yucking it up.

April 2025: Butch Wilmore

When they’re on the record, astronauts mostly stick to a script. As a reporter, you’re just not going to get too much from them. (Off the record is a completely different story, of course, as astronauts are generally delightful, hilarious, and earnest people.)

Last week, dozens of journalists were allotted 10-minute interviews with Wilmore and, separately, Suni Williams. It was the first time they had spoken in depth with the media since their launch on Starliner and return to Earth aboard a Crew Dragon vehicle. As I waited outside Studio A at Johnson Space Center, I overheard Wilmore completing an interview with a Tennessee-based outlet, where he is from. As they wrapped up, the public affairs officer said he had just one more interview left and said my name. Wilmore said something like, “Oh good, I’ve been waiting to talk with him.”

That was a good sign. Out of all the interviews that day, he wanted to speak with me, and mine was the last one on his schedule. The easy thing for him to do would have been to stick to “astronaut speak” for 10 minutes and then go home.

As I prepared to speak with Wilmore and Williams, I didn’t want to ask the obvious questions they’d answered many times earlier. If you ask, “What was it like to spend nine months in space when you were expecting only a short trip?” you’re going to get a boring answer. Similarly, although the end of the mission was highly politicized by the Trump White House, two veteran NASA astronauts were not going to step on that landmine.

I wanted to go back to the root cause of all this, the problems with Starliner’s propulsion system. My strategy was simply to ask what it was like to fly inside the spacecraft. Williams gave me some solid answers. But Wilmore had actually been at the controls. And he apparently had been holding in one heck of a story for nine months. Because when I asked about the launch, and then what it was like to fly Starliner, he took off without much prompting.

Butch Wilmore has flown on four spacecraft: the Space Shuttle, Soyuz, Starliner, and Crew Dragon. Credit: NASA/Emmett Given

I don’t know exactly why Wilmore shared so much with me. We are not particularly close and have never interacted outside of an official NASA setting. But he knows of my work and interest in spaceflight. Not everyone at the space agency appreciates my journalism, but they know I’m deeply interested in what they’re doing. They know I care about NASA and Johnson Space Center. So I asked Wilmore a few smart questions, and he must have trusted that I would tell his story honestly and accurately, and with appropriate context. I certainly tried my best. After a quarter of a century, I have learned well that the most sensational stories are best told without sensationalism.

Even as we spoke, I knew the interview with Wilmore was one of the best I had ever done. A great scientist once told me that the best feeling in the world is making some little discovery in a lab and for a short time knowing something about the natural world that no one else knows. The equivalent, for me, is doing an interview and knowing I’ve got gold. And for a little while, before sharing it with the world, I’ve got that little piece of gold all to myself.

But I’ll tell you what. It’s even more fun to let the cat out of the bag. The best part about journalism is not collecting information. It’s sharing that information with the world.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

“What the hell are you doing?” How I learned to interview astronauts, scientists, and billionaires Read More »

google-pixel-9a-review:-all-the-phone-you-need

Google Pixel 9a review: All the phone you need


The Pixel 9a looks great and shoots lovely photos, but it’s light on AI.

The Pixel 9a adopts a streamlined design. Credit: Ryan Whitwam

It took a few years, but Google’s Pixel phones have risen to the top of the Android ranks, and its new Pixel 9a keeps most of what has made flagship Pixel phones so good, including the slick software and versatile cameras. Despite a revamped design and larger battery, Google has maintained the $499 price point of last year’s phone, undercutting other “budget” devices like the iPhone 16e.

However, hitting this price point involves trade-offs in materials, charging, and—significantly—the on-device AI capabilities compared to its pricier siblings. None of those are deal-breakers, though. In fact, the Pixel 9a may be coming along at just the right time. As we enter a period of uncertainty for imported gadgets, a modestly priced phone with lengthy support could be the perfect purchase.

A simpler silhouette

The Pixel 9a sports the same rounded corners and flat edges we’ve seen on other recent smartphones. The aluminum frame has a smooth, almost silky texture, with rolled edges that flow into the front and back covers.

The 9a is just small enough to be cozy in your hand. Credit: Ryan Whitwam

On the front, there’s a sheet of Gorilla Glass 3, which has been a mainstay of budget phones for years. On the back, Google used recycled plastic with a matte finish. It attracts more dust and grime than glass, but it doesn’t show fingerprints as clearly. The plastic doesn’t feel as solid as the glass backs on Google’s more expensive phones, and the edge where it meets the aluminum frame feels a bit sharper and more abrupt than it does on Google’s flagship phones.

Specs at a glance: Google Pixel 9a
SoC: Google Tensor G4
Memory: 8GB
Storage: 128GB, 256GB
Display: 1080×2424 6.3″ pOLED, 60–120 Hz
Cameras: 48 MP primary, f/1.7, OIS; 13 MP ultrawide, f/2.2; 13 MP selfie, f/2.2
Software: Android 15, 7 years of OS updates
Battery: 5,100 mAh, 23 W wired charging, 7.5 W wireless charging
Connectivity: Wi-Fi 6e, NFC, Bluetooth 5.3, sub-6 GHz 5G
Measurements: 154.7×73.3×8.9 mm; 185 g

Were it not for the “G” logo emblazoned on the back, you might not recognize the Pixel 9a as a Google phone. It lacks the camera bar that has been central to the design language of all Google’s recent devices, opting instead for a sleeker flat design.

The move to a pOLED display saved a few millimeters, giving the designers a bit more internal volume. In the past, Google has always pushed toward thinner and thinner Pixels, but it retained the same 8.9 mm thickness for the Pixel 9a. Rather than shave off a millimeter, Google equipped the Pixel 9a with a 5,100 mAh battery, which is the largest ever in a Pixel, even beating out the larger and more expensive Pixel 9 Pro XL by a touch.

The Pixel 9a (left) drops the camera bar from the Pixel 8a (right). Credit: Ryan Whitwam

The camera module on the back is almost flush with the body of the phone, rising barely a millimeter from the surrounding plastic. The phone feels more balanced and less top-heavy than phones that have three or four cameras mounted to chunky aluminum surrounds. The buttons on the right edge are the only other disruptions to the phone’s clean lines. They, too, are aluminum, with nice, tactile feedback and no detectable wobble. Aside from a few tiny foibles, the build quality and overall feel of this phone are better than we’d expect for $499.

The 6.3-inch OLED is slightly larger than last year’s, and it retains the chunkier bezels of Google’s A-series phones. While the flagship Pixels are all screen from the front, there’s a sizable gap between the edge of the OLED and the aluminum frame. That means the body is a few millimeters larger than it probably had to be—the Pixel 9 Pro has the same display size, and it’s a bit more compact, for example. Still, the Pixel 9a does not look or feel oversized.

The camera bump just barely rises above the surrounding plastic. Credit: Ryan Whitwam

The OLED is sharp enough at 1080p and has an impressively high peak brightness, making it legible outdoors. However, the low-brightness clarity falls short of what you get with more expensive phones like the Pixel 9 Pro or Galaxy S25. The screen supports a 120 Hz refresh rate, but that’s disabled by default, and because the panel lacks LTPO technology, higher refresh rates take a bigger toll on the battery. There’s a fingerprint scanner under the OLED, but it has not been upgraded to ultrasonic along with the flagship Pixels. This one is still optical—it works quickly enough, but it lights up dark rooms and is less reliable than ultrasonic sensors.

Probably fast enough

Google took a page from Apple when it debuted its custom Tensor mobile processors with the Pixel 6. Now, Google uses Tensor processors in all its phones, giving a nice boost to budget devices like the Pixel 9a. The Pixel 9a has a Tensor G4, which is identical to the chip in the Pixel 9 series, save for a slightly different modem.

With no camera bump, the Pixel 9a lies totally flat on surfaces with very little wobble. Credit: Ryan Whitwam

While Tensor is not a benchmark speed demon like the latest silicon from Qualcomm or Apple, it does not feel slow in daily use. A chip like the Snapdragon 8 Elite puts up huge benchmark numbers, but it doesn’t run at that speed for long. Qualcomm’s latest chips can lose half their speed to heat, but Tensor only drops by about a third during extended load.

However, even after slowing down, the Snapdragon 8 Elite is a faster gaming chip than Tensor. If playing high-end games like Diablo Immortal and Genshin Impact is important to you, you can do better than the Pixel 9a (and other Pixels).

The 9a can’t touch the S25, but it runs neck and neck with the Pixel 9 Pro. Credit: Ryan Whitwam

In general use, the Pixel 9a is more than fast enough that you won’t spend time thinking about the Tensor chip. Apps open quickly, animations are unerringly smooth, and the phone doesn’t get too hot. There are some unavoidable drawbacks to its more limited memory, though. Apps don’t stay in memory as long or as reliably as they do on the flagship Pixels, for instance. There are also some AI limitations we’ll get to below.

With a 5,100 mAh battery, the Pixel 9a has more capacity than any other Google phone. Combined with the 1080p screen, the 9a gets much longer battery life than the flagship Pixels. Google claims about 30 hours of usage per charge. In our testing, this equates to a solid day of heavy use with enough left in the tank that you won’t feel the twinge of range anxiety as evening approaches. If you’re careful, you might be able to make it two days without a recharge.

The Pixel 9a (right) is much smaller than the Pixel 9 Pro XL (left), but it has a slightly larger battery. Credit: Ryan Whitwam

As for recharging, Google could do better—the Pixel 9a manages just 23 W wired and 7.5 W wireless, and the flagship Pixels are only a little faster. Companies like OnePlus and Motorola offer phones that charge several times faster than Google’s.
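
To put those battery and charging figures in rough perspective, here is a minimal back-of-envelope sketch in Python. The ~3.85 V nominal cell voltage and the perfectly efficient charger are illustrative assumptions, not Google’s specifications, and real charging tapers well before the pack is full.

```python
# Back-of-envelope battery math for the Pixel 9a's published specs.
# The nominal cell voltage and the lossless charger are assumptions
# for illustration; real charging tapers and loses some energy as heat.

CAPACITY_MAH = 5_100    # battery capacity from the spec sheet
NOMINAL_VOLTS = 3.85    # assumed typical Li-ion nominal voltage
WIRED_WATTS = 23        # peak wired charging power
CLAIMED_HOURS = 30      # Google's quoted usage per charge

energy_wh = CAPACITY_MAH / 1000 * NOMINAL_VOLTS      # ≈ 19.6 Wh
avg_draw_w = energy_wh / CLAIMED_HOURS               # ≈ 0.65 W average draw
ideal_charge_min = energy_wh / WIRED_WATTS * 60      # ≈ 51 min, best case

print(f"Pack energy:       {energy_wh:.1f} Wh")
print(f"Average draw:      {avg_draw_w:.2f} W over {CLAIMED_HOURS} h")
print(f"Ideal full charge: {ideal_charge_min:.0f} min at {WIRED_WATTS} W")
```

In practice, charge tapering means a full top-up takes noticeably longer than that ideal figure, which is why the modest charging speed stings a bit.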

The low-AI Pixel

Google’s Pixel software is one of the primary reasons to buy its phones. There’s no bloatware on the device when you take it out of the box, which saves you from tediously extracting a dozen sponsored widgets and microtransaction-laden games right off the bat. Google’s interface design is also our favorite right now, with a fantastic implementation of Material You theming that adapts to your background colors.

Gemini is the default assistant, but the 9a loses some of Google’s most interesting AI features. Credit: Ryan Whitwam

The Pixel version of Android 15 also comes with a raft of thoughtful features, like the anti-spammer Call Screen and Direct My Call to help you navigate labyrinthine phone trees. Gemini is also built into the phone, fully replacing the now-doomed Google Assistant. Google notes that Gemini on the 9a can take action across apps, which is technically true. Gemini can look up data from one supported app and route it to another at your behest, but only when it feels like it. Generative AI is still unpredictable, so don’t bank on Gemini being a good assistant just yet.

Google’s more expensive Pixels also have the above capabilities, but they go further with AI. Google’s on-device Gemini Nano model is key to some of the newest and more interesting AI features, but large language models (even the small ones) need a lot of RAM. The 9a’s less-generous 8GB of RAM means it runs a less-capable version of the AI known as Gemini Nano XXS that only supports text input.

As a result, many of the AI features Google was promoting around the Pixel 9 launch just don’t work. For example, there’s no Pixel Screenshots app or Call Notes. Even some features that seem like they should work, like AI weather summaries, are absent on the Pixel 9a. Recorder summaries are supported, but Gemini Nano has a very nano context window. We tested with recordings ranging from two to 20 minutes, and the longer ones surpassed the model’s capabilities. Google tells Ars that 2,000 words (about 15 minutes of relaxed conversation) is the limit for Gemini Nano on this phone.
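
If you want a feel for how quickly that cap arrives, here is a tiny illustrative sketch. The ~133 words-per-minute speaking rate is an assumption (it simply back-solves the “2,000 words is about 15 minutes” framing), and the helper function is hypothetical, not part of any Google API.

```python
# Rough sanity check of the recorder-summary limit described above.
# The 2,000-word cap comes from Google's statement to Ars; the speaking
# rate is an assumed conversational average, not a measured value.

WORD_LIMIT = 2_000
WORDS_PER_MINUTE = 133  # assumed pace (≈ 2,000 words / 15 minutes)

def fits_summary_limit(duration_minutes: float) -> bool:
    """Estimate whether a recording stays under the on-device model's window."""
    return duration_minutes * WORDS_PER_MINUTE <= WORD_LIMIT

for minutes in (2, 10, 15, 20):
    print(f"{minutes:>2}-minute recording fits: {fits_summary_limit(minutes)}")
```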

The 9a is missing some AI features, and others don’t work very well. Credit: Ryan Whitwam

If you’re the type to avoid AI features, the less-capable Gemini model might not matter. You still get all the other neat Pixel features, along with Google’s market-leading support policy. This phone will get seven years of full update support, including annual OS version bumps and monthly security patches. The 9a is also entitled to special quarterly Pixel Drop updates, which bring new (usually minor) features.

Most OEMs struggle to provide even half the support for their phones. Samsung is neck and neck with Google, but its updates are often slower and more limited on older phones. Samsung’s vision for mobile AI is much less fleshed out than Google’s, too. Even with the Pixel 9a’s disappointing Gemini Nano capabilities, we expect Google to make improvements to all aspects of the software (even AI) over the coming years.

Capable cameras

The Pixel 9a has just two camera sensors, and it doesn’t try to dress up the back of the phone to make it look like there are more, a common trait of other Android phones. There’s a new 48 MP camera sensor similar to the one in the Pixel 9 Pro Fold, which is smaller and less capable than the main camera in the flagship Pixels. There’s also a 13 MP ultrawide lens that appears unchanged from last year. You have to spend a lot more money to get Google’s best camera hardware, but conveniently, much of the Pixel magic is in the software.

The Pixel 9a sticks with two cameras. Credit: Ryan Whitwam

Google’s image processing works extremely well, lightening dark areas while also preventing blowout in lighter areas. This impressive dynamic range results in even exposures with plenty of detail, and this is true in all lighting conditions. In dim light, you can use Night Sight to increase sharpness and brightness to an almost supernatural degree. Outside of a few edge cases with unusual light temperature, we’ve been very pleased with Google’s color reproduction, too.

The most notable drawback to the 9a’s camera is that it’s a bit slower than the flagship Pixels. The sensor is smaller and doesn’t collect as much light, even compared to the base model Pixel 9. This is more noticeable with shots using Night Sight, which gathers data over several seconds to brighten images. However, image capture is still generally faster than on Samsung, OnePlus, and Motorola cameras. Google leans toward fast shutter speeds (short exposure times). Outdoors, that means you can capture motion with little to no blur almost as reliably as you can with the Pro Pixels.

The 13 MP ultrawide camera is great for landscape outdoor shots, showing only mild distortion at the edges of the frame despite an impressive 120-degree field-of-view. Unlike Samsung and OnePlus, Google also does a good job of keeping colors consistent across the sensors.

You can shoot macro photos with the Pixel 9a, but it works a bit differently than other phones. The ultrawide camera doesn’t have autofocus, nor is there a dedicated macro sensor. Instead, Google uses AI with the main camera to take close-ups. This seems to work well enough, but details are only sharp around the center of the frame, with ample distortion at the edges.

There’s no telephoto lens here, but Google’s capable image processing helps a lot. The new primary camera sensor probably isn’t hurting, either. You can reliably push the 48 MP primary to 2x digital zoom, and Google’s algorithms will produce photos that you’d hardly know have been enhanced. Beyond 2x zoom, the sharpening begins to look more obviously artificial.

A phone like the Pixel 9 Pro or Galaxy S25 Ultra with 5x telephoto lenses can definitely get sharper photos at a distance, but the Pixel 9a does not do meaningfully worse than phones that have 2–3x telephoto lenses.

The right phone at the right time

The Pixel 9a is not a perfect phone, but for $499, it’s hard to argue with it. This device has the same great version of Android seen on Google’s more expensive phones, along with a generous seven years of guaranteed updates. It also pushes battery life a bit beyond what you can get with other Pixel phones. The camera isn’t the best we’ve seen—that distinction goes to the Pixel 9 Pro and Pro XL. However, it gets closer than a $500 phone ought to.

Material You theming is excellent on Pixels. Credit: Ryan Whitwam

You do miss out on some AI features with the 9a. That might not bother the AI skeptics, but some of these missing on-device features, like Pixel Screenshots and Call Notes, are among the best applications of generative AI we’ve seen on a phone yet. With years of Pixel Drops ahead of it, the 9a might not have enough muscle to handle Google’s future AI endeavors, which could lead to buyer’s remorse if AI turns out to be as useful as Google claims it will be.

At $499, you’d have to spend $300 more to get to the base model Pixel 9, a phone with weaker battery life and a marginally better camera. That’s a tough sell given how good the 9a is. If you’re not going for the Pro phones, stick with the 9a. With all the uncertainty over future tariffs on imported products, the day of decent sub-$500 phones could be coming to an end. With long support, solid hardware, and a beefy battery, the Pixel 9a could be the right phone to buy before prices go up.

The good

  • Good value at $499
  • Bright, sharp display
  • Long battery life
  • Clean version of Android 15 with seven years of support
  • Great photo quality

The bad

  • Doesn’t crush benchmarks or run high-end games perfectly
  • Missing some AI features from more expensive Pixels

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google Pixel 9a review: All the phone you need Read More »

the-ars-cargo-e-bike-buying-guide-for-the-bike-curious-(or-serious)

The Ars cargo e-bike buying guide for the bike-curious (or serious)


Fun and functional transportation? See why these bikes are all the rage.

Three different cargo bikes. Credit: Aurich Lawson | John Timmer

Are you a millennial parent who has made cycling your entire personality but have found it socially unacceptable to abandon your family for six hours on a Saturday? Or are you a bike-curious urban dweller who hasn’t owned a bicycle since middle school? Do you stare at the gridlock on your commute, longing for a bike-based alternative, but curse the errands you need to run on the way home?

I have a solution for you: invest in a cargo bike.

Cargo bikes aren’t for everyone, but they’re great if you enjoy biking and occasionally need to haul more than a bag or basket can carry (including kids and pets). In this guide, we’ll give you some parameters for your search—and provide some good talking points to get a spouse on board.

Bakfiets to the future

As the name suggests, a cargo bike, also known by the Dutch term bakfiets, is a bicycle or tricycle designed to haul both people and things. And that loose definition is driving a post-pandemic innovation boom in this curious corner of the cycling world.

My colleagues at Ars have been testing electric cargo bikes for the past few years, and their experiences reflect the state of the market: It’s pretty uneven. There are great, user-centric products being manufactured by brands you may have heard of—and then there are products made as cheaply as possible, using bottom-of-the-barrel parts, to capture customers who are hesitant to drop a car-sized payment on a bike… even if they already own an $8,000 carbon race rocket.

The price range is wide. You can get an acoustic cargo bike for about $2,000, and e-bikes start at around the same price, with top-of-the-line models going for up to $12,000.

But don’t think of cargo bikes as leisure items. Instead, they can be a legitimate form of transportation that, with the right gear—and an electric drivetrain—can fully integrate into your life. Replacing 80 percent of my in-town car trips with a cargo bike has allowed me to squeeze in a workout while I bring my kid to school and then run errands without worrying about traffic or parking. It means my wife can take our infant daughter somewhere in the car while I take the bigger kid to a park across town.

Additionally, when you buy a car, the purchase is just the start of the costs; you can be stuck with several hundred to several thousand dollars a year in insurance and maintenance. With bikes, even heavy cargo bikes, you’re looking at a yearly check-up on brakes and chain stretch (which should be a $150 bike shop visit if you don’t do it yourself) and a periodic chain lubing (which you should do yourself).

A recent study found that once people use cargo bikes, they like their cars much less.

And, of course, bikes are fun. No matter what, you’re outside with the wind in your face.

Still, like anything else, there are trade-offs to this decision, and a new glut of choices confronts consumers as they begin their journey down a potentially pricey rabbit hole. In this article, instead of recommending specific bikes, we’ll tell you what you need to know to make an informed decision based on your personal preferences. In a future article, we’ll look at all the other things you’ll need to get safely from point A to point B.

Function, form, and evolutionary design

Long dominated by three main designs, the North American cargo bike market has diversified rapidly, driven in part by affordable battery systems, interest from sustainability-minded riders, and government subsidies. In general, these three categories—bakfiets, longtails, and trikes—are still king, but there is far more variation within them. That’s due to the entrance of mainstream US bike brands like Specialized, which have joined homegrown specialists such as Rad Power and Yuba, as well as previously hard-to-find European imports from Riese & Müller, Urban Arrow, and Larry vs Harry.

Within the three traditional cargo bikes, each style has evolved to include focused designs that are more or less suitable for individual tasks. Do you live in an apartment and need to cart your kids and not much else? You probably want a mid-tail of some sort. Do you have a garage and an urge to move your kid and a full wheelset from another bike? A Long John is your friend!

Let’s take a high-level look at the options.

Bakfiets/Long Johns

A front-loader from Urban Arrow, called the Family. Credit: John Timmer

Dutch for “box bike,” a bakfiets, or a front-loader, is the most alien-looking of the styles presented here (at least according to the number of questions I get at coffee shops). There are several iterations of the form, but in general, bakfiets feature a big (26-inch) wheel in the back, a large cargo area ahead of the rider, and a smaller (usually 20-inch) wheel ahead of the box, with steering provided through a rod or cable linkage. Depending on the manufacturer, these bikes can skew closer to people carriers (Riese & Müller, Yuba, Xtracycle) or cargo carriers (Larry vs Harry, Omnium). However, even in the case of a bakfiets that is purpose-built for hauling people, leg and shoulder space becomes scarce as your cargo gets older and you begin playing child-limb Jenga.

We reviewed Urban Arrow’s front-loading Family bike here.

Brands to look out for: 

  • Riese & Müller
  • Urban Arrow
  • Larry vs Harry
  • Yuba
  • Xtracycle

Longtails

The Trek Fetch+ 2. Credit: John Timmer

If my local preschool drop-off is any indication, long- and mid-tail cargo bikes have taken North America by storm, and for good reason. With a step-through design, smaller wheels, and tight, (relatively) apartment-friendly proportions, longtails are eminently approachable. Because they’re built around 20-inch wheels, the center of gravity, and thus the weight of your cargo or pillion, sits lower to the ground, making for a more stable ride.

This makes them far less enjoyable to ride than your big-wheeled whip. On the other hand, they’re also more affordable—the priciest models, such as Tern’s GSD ($5,000) and the Specialized Haul ($3,500), top out at half the price of a mid-range bakfiets. Proper child restraints attach easily, and one can add boxes and bags for cargo, though longtails are seen as less versatile than a Long John. Then again, it’s far easier to carry an adult or as many children as you feel comfortable shoving on the rear bench than it is to squeeze large kids into a bakfiets.

We’ve reviewed several bikes in this category, including the Trek Fetch+ 2, Integral Electrics Maven, and Cycrown CycWagen.

Brands to look out for:

  • Radwagon
  • Tern
  • Yuba
  • Specialized, Trek

Tricycles

The Christiania Classic. Credit: Christiania Bikes America

And then we have a bit of an outlier. The original delivery bike, the trike can use a front-load or rear-load design, with two wheels always residing under the cargo. In either case, consumer trikes are not well-represented on the street, though brands such as Christiania and Workman have been around for some time.

Why aren’t trikes more popular? According to Kash, the mononymous proprietor of San Francisco’s Warm Planet Bikes, if you’re already a confident cyclist, you’ll likely be put off by the particular handling characteristics of a three-wheeled solution. “While trikes work, [there are] such significant trade-offs that, unless you’re the very small minority of people for whom they absolutely have to have those features specific to trikes, you’re going to try other things,” he told me.

In his experience, riders who find tricycles most useful are usually those who have never learned to ride a bike or those who have balance issues or other disabilities. For these reasons, most of this guide will focus on Long Johns and longtails.

Brands to look out for:

  • Christiania
  • Workman

Which bike style is best for you?

Before you start wading into niche cargo bike content on Reddit and YouTube, it’s useful to work through a decision matrix to narrow down what’s important to you. We’ll get you started below. Once you have a vague direction, the next best step is to find a bike shop that either carries or specializes in cargo bikes so you can take some test rides. All mechanical conveyances have their quirks, and quirky bikes are the rule.

Where do you want your cargo (or kid): Fore or aft?

This is the most important question after “which bike looks coolest to you?” and will drive the rest of the decision tree. Anecdotally, I have found that many parents feel more secure having their progeny in the back. Others like having their load in front of them to ensure it’s staying put, or in the case of a human/animal, to be able to communicate with them. Additionally, front-loaders tend to put cargo closer to the ground, thus lowering their center of gravity. Depending on the bike, this can counteract any wonky feel of the ride.

An abridged Costco run: toilet paper, paper towels, snacks, and gin. Credit: Chris Cona

How many people and how much stuff are you carrying?

As noted above, a front-loader will mostly max out at two slim toddlers (though the conventional wisdom is that they’ll age into wanting to ride their own bikes at that point). On the other hand, a longtail can stack as many kids as you can fit until you hit the maximum gross vehicle weight. However, if you’d like to make Costco runs on your bike, a front loader provides an empty platform (or cube, depending on your setup) to shove diapers, paper goods, and cases of beer; the storage on long tails is generally more structured. In both cases, racks can be added aft and fore (respectively) to increase carrying capacity.

What’s your topography like?

Do you live in a relatively flat area? You can probably get away with an acoustic bike and any sort of cargo area you like. Flat and just going to the beach? This is where trikes shine! Load up the kids and umbrellas and toodle on down to the dunes.

On the other hand, if you live among the hills of the Bay Area or the traffic of a major metropolitan area, the particular handling of a box trike could make your ride feel treacherous when you’re descending or attempting to navigate busy traffic. Similarly, if you’re navigating any sort of elevation and planning on carrying anything more than groceries, you’ll want to spring for the e-bike with sufficient gear range to tackle the hills. More on gear ratios later.

Do you have safe storage?

Do you have a place to put this thing? The largest consumer-oriented front loader on the market (the Riese & Müller Load 75) is almost two and a half meters (about eight feet) long, and unless you live in Amsterdam, it should be stored inside—which means covered garage-like parking. On the other end of the spectrum, Tern’s GSD and HSD are significantly shorter and can be stored vertically with their rear rack used as a stand, allowing them to be brought into tighter spaces (though your mileage may vary on apartment living).

If bike storage is your main concern, bikes like the Omnium Mini Max, Riese & Müller’s Carrie, and the to-be-released Gocycle CXi/CX+ are designed specifically for you. In the event of the unthinkable—theft, vandalism, a catastrophic crash—there are several bike-specific insurance carriers (Sundays, Velosurance, etc.) that are affordable and convenient. If you’re dropping the cash on a bike in this price range, insurance is worth getting.

How much do you love tinkering and doing maintenance?

Some bikes are more baked than others. For instance, the Urban Arrow—the Honda Odyssey of the category—uses a one-piece expanded polypropylene cargo area, proprietary cockpit components, and internally geared hubs. Compare that to Larry vs Harry’s Bullitt, which uses standard bike parts and comes with a cargo area that’s a blank space with some bolt holes. OEM cargo box solutions exist, but the Internet is full of very entertaining box, lighting, and retention bodges.

Similar questions pertain to drivetrain options: If you’re used to maintaining a fleet of bikes, you may want to opt for a traditional chain-driven derailleur setup. Have no desire to learn what’s going on down there? Some belt drives have internally geared hubs that aren’t meant to be user-serviceable. So if you know a bit about bikes or are an inveterate tinkerer, there are brands that will better scratch that itch.

A note about direct-to-consumer brands

As Arsians, we have research and price shopping ingrained in our bones like scrimshaw, so you’ll likely quickly become familiar with the lower-priced direct-to-consumer (DTC) e-bike brands that will soon be flooding your Instagram ads. DTC pricing will always be more attractive than what you’ll find with brands carried at your local bike shop, but buyers should beware.

In many cases, those companies don’t just skimp on brick and mortar; they often use off-brand components—or, in some cases, outdated standards that can be had for pennies on the dollar. By that, I mean seven-speed drivetrains mated to freewheel hubs that are cheap to source for the manufacturer but could seriously limit parts availability for you or your poor mechanic.

And let’s talk about your mechanic. When buying online, you’ll get a box with a bike in various states of disassembly that you’ll need to put together. If you’re new to bike maintenance and assembly, you might envision the process as a bit of Ikeaology that you can get through with a beer and minimal cursing. But if you take a swing through /r/bikemechanics for a professional perspective, you’ll find that these “economically priced bikes” are riddled with outdated and poor-quality components.

And this race to a bottom-tier price point means those parts are often kluged together, leading to an unnecessarily complicated assembly process—and, down the line, repairs that will be far more of a headache than they should be. Buying a bike from your local bike shop generally means a more reliable (or at least mainstream) machine with after-sales support. You’ll get free tune-ups for a set amount of time and someone who can assist you if something feels weird.

Oh yeah, and there are exploding batteries. Chances are good that if a battery is self-immolating, it’s because it’s (a) wired incorrectly, (b) used in a manner not recommended by the manufacturer, or (c) damaged. If a battery is cheap, it’s less likely that the manufacturer sought UL or EU certification, and it’s more likely that the battery will have some janky cells. Your best bet is to stick to the circuits and brands you’ve heard of.

Credit: Chris Cona

Bikes ain’t nothin’ but nuts and bolts, baby

Let’s move on to the actual mechanics of momentum. Most cargo bike manufacturers have carried over three common standards from commuter and touring bikes: chain drives with cable or electronically shifted derailleurs, belt-driven internally geared hubs (IGH), or belt-driven continuously variable hubs (CVH)—all of which are compatible with electric mid-drive motors. The latter two can be grouped together, as consumers are often given the option of “chain or belt,” depending on the brand of bike.

Chain-driven

If you currently ride and regularly maintain a bike, chain-driven drivetrains are the metal-on-metal, gears-and-lube components with which you’re intimately familiar. Acoustic or electric, most bike manufacturers offer a geared drivetrain in something between nine and 12 speeds.

The oft-stated cons of chains, cogs, and derailleurs for commuters and cargo bikers are that one must maintain them with lubricant, chains get dirty, you get dirty, chains wear out, and derailleurs can bend. On the other hand, parts are cheap, and—assuming you’re not doing 100-mile rides on the weekend and you’re keeping an ear out for upsetting sounds—maintaining a bike isn’t a whole lot of work. Plus, if you’re already managing a fleet of conventional bikes, one more to look after won’t kill you.

Belt-driven

Like the alternator on your car or the drivetrain of a fancy motorcycle, bicycles can be propelled by a carbon-reinforced, nylon-tooth belt that travels over metal cogs that run quietly and grease- and maintenance-free. While belts are marginally less efficient at transferring power than chains, a cargo bike is not where you’ll notice the lack of peak wattage. The trade-off for this ease of use is that service can get weird at some point. These belts require a bike to have a split chainstay to install them, and removing the rear wheel to deal with a flat can be cumbersome. As such, belts are great for people who aren’t keen on keeping up with day-to-day maintenance and would prefer a periodic pop-in to a shop for upkeep.

IGH vs. CVH

Internally geared hubs, like those produced by Rohloff, Shimano, and Sturmey Archer, are hilariously neat things to be riding around on a bicycle. Each brand’s implementation is a bit different, but in general, these hubs use two to 14 planetary gears housed within the hub of the rear wheel. Capable of withstanding high-torque applications, these hubs can offer a total overall gear range of 526 percent.

If you’ve ridden a heavy municipal bike share bike in a major US city, chances are good you’ve experienced an internally geared hub. Similar in packaging to an IGH but different in execution, continuously variable hubs function like the transmission in a midrange automobile.

These hubs offer “stepless” shifting—you turn the shifter, and power input into the right (drive) side of the hub transfers through a series of balls that allow for infinite gear ratios throughout their range. However, that range is limited to about 380 percent for Enviolo, which is more limited than IGH or even some chain-driven systems. They’re more tolerant of shifting under load, though, and like planetary gears, they can be shifted while stationary (think pre-shifting before taking off at a traffic light).

Neither hub is meant to be user serviceable, so service intervals are lengthy.
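
Those “percent” figures are just the ratio of the tallest gear to the shortest. Here is a minimal sketch of that arithmetic; the hub ratios below are illustrative placeholders roughly in line with a wide-range 14-speed IGH and a stepless hub, not manufacturer specifications.

```python
# Overall gear range = (tallest ratio / shortest ratio) x 100%.
# The ratios below are illustrative placeholders, not official figures.

def gear_range_percent(lowest_ratio: float, highest_ratio: float) -> float:
    """Return a drivetrain's overall gear range as a percentage."""
    return highest_ratio / lowest_ratio * 100

igh = gear_range_percent(0.279, 1.467)  # ≈ 526%, wide-range IGH territory
cvh = gear_range_percent(1.0, 3.8)      # ≈ 380%, stepless-hub territory

print(f"IGH-style range: {igh:.0f}%")
print(f"CVH-style range: {cvh:.0f}%")
```

The takeaway: the bigger that percentage, the bigger the gap between your easiest climbing gear and your fastest cruising gear, which is what matters when you are hauling weight up a hill.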

Electric bikes

Credit: Chris Cona

Perhaps the single most important innovation that allowed cargo bikes to hit mainstream American last-mile transportation is the addition of an electric drive system. These have been around for a while, but they mostly involved hacking together a bunch of dodgy parts from AliExpress. These days, reputable brands such as Bosch and Shimano have brought their UL- and CE-rated electric drivetrains to mainstream cargo bikes, allowing normal people to jump on a bike and get their kids up a hill.

Before someone complains that “e-bikes aren’t bikes,” it’s important to note that we’re advocating for Class 1 or 3 pedal-assist bikes in this guide. Beyond allowing us to haul stuff, these bikes create greater equity for those of us who love bikes but may need a bit of a hand while riding.

For reference, here’s what those classes mean:

  • Class 1: Pedal-assist, no throttle, limited to 20 mph/32 km/h assisted top speed
  • Class 2: Throttle-assisted (pedaling optional), limited to 20 mph/32 km/h assisted top speed
  • Class 3: Pedal-assist, no throttle, limited to 28 mph/45 km/h assisted top speed, mandatory speedometer

Let’s return to Kash from his perch on Market Street in San Francisco:

The e-bike allows [enthusiasts] to keep cycling, and I have seen that reflected in the nature of the people who ride by this shop, even just watching the age expand. These aren’t people who bought de facto mopeds—these are people who bought [a pedal-assisted e-bike] because they wanted a bicycle. They didn’t just want to coast; they just need that slight assist so they can continue to do the things they used to do.

And perhaps most importantly, getting more people out of cars and onto bikes creates more advocates for cyclist safety and walkable cities.

But which are the reliable, non-explody standards? We now have many e-bike options, but there are really only two or three you’ll see if you go to a shop: Bosch, Shimano E-Drive, and Specialized (whose motors are designed and built by Brose). Between its Performance and Cargo Line motors, Bosch is by far the most common option of the three. Because bike frames need to be designed for a particular mid-drive unit, it’s rare to get a choice between motor systems beyond picking a trim level within a given lineup.

For instance, Urban Arrow offers the choice of Bosch’s Cargo Line (85 Nm output) or Performance Line (65 Nm), while Larry vs Harry’s eBullitt is equipped with Shimano EP6 or EP8 (both at 85 Nm) drives. So in general, if you’re dead set on a particular bike, you’ll be living with the OEM-specced system.

In most cases, you’ll find that OEM offerings stick to pedal-assist mid-drive units—that is, a pedal-assist motor installed where a traditional bottom bracket would be. While hub-based motors push or pull you along at the wheel (making you feel a bit like you’re on a scooter), mid-drives make the cranks easier to turn, using the mechanical advantage of your bike’s existing gearing to give you more torque options. This is additionally pleasant if you actually like riding bikes. Now you get to ride a bike while knowing you can take on pretty much any topography that comes your way.

Now go ride

That’s all you need to know before walking into a store or trawling the secondary market. Every rider is different, and each brand and design has its own quirks, so it’s important to get out there and ride as many different bikes as you can to get a feel for them yourself. And if this is your first foray into the wild world of bikes, join us in the next installment of this guide, where we’ll be enumerating all the fun stuff you should buy (or avoid) along with your new whip.

Transportation is a necessity, but bikes are fun. We may as well combine the two to make getting to work and school less of a chore. Enjoy your new, potentially expensive, deeply researchable hobby!

The Ars cargo e-bike buying guide for the bike-curious (or serious) Read More »

don’t-call-it-a-drone:-zipline’s-uncrewed-aircraft-wants-to-reinvent-retail

Don’t call it a drone: Zipline’s uncrewed aircraft wants to reinvent retail


Ars visits a Zipline delivery service that’s deploying in more locations soon.

The inner portion of the Zipline P2 is lowered to the ground on a tether, facing into the wind, with a small propeller at the back. Doors on the bottom open when it touches the ground, depositing the cargo. Credit: Tim Stevens

The skies around Dallas are about to get a lot more interesting. No, DFW airport isn’t planning any more expansions, nor does American Airlines have any more retro liveries to debut. This will be something different, something liable to make all the excitement around the supposed New Jersey drones look a bit quaint.

Zipline is launching its airborne delivery service for real, rolling it out in the Dallas-Fort Worth suburb of Mesquite ahead of a gradual spread that, if all goes according to plan, will also see its craft landing in Seattle before the end of the year. These automated drones can be loaded in seconds, carry small packages for miles, and deposit them with pinpoint accuracy at the end of a retractable tether.

It looks and sounds like the future, but this launch has been a decade in the making. Zipline has already flown more than 1.4 million deliveries and covered over 100 million miles, yet it feels like things are just getting started.

The ranch

When Zipline called me and invited me out for a tour of a drone delivery testing facility hidden in the hills north of San Francisco, I was naturally intrigued, but I had no idea what to expect. Shipping logistics facilities tend to be dark and dreary spaces, with automated machinery stacked high on angular shelves within massive buildings presenting all the visual charm of a concrete paver.

Zipline’s facility is a bit different. It’s utterly stunning, situated among the pastures of a ranch that sprawls over nearly 7,000 acres of the kind of verdant, rolling terrain that has drawn nature lovers to Northern California for centuries.

The Zipline drone testing facility. Credit: Tim Stevens

Zipline’s contribution to the landscape consists of a few shipping container-sized prefab office spaces, a series of tents, and some tall, metal structures that look like a stand of wireform trees. The fruit hanging from their aluminum branches are clusters of white drones, or at least what we’d call “drones.”

But the folks at Zipline don’t seem to like that term. Everyone I spoke with referred to the various craft hovering, buzzing, or gliding overhead as aircraft. That’s for good reason.

Not your average UAV

Go buy a drone at an electronics retailer, something from DJI perhaps, and you’ll have to abide by a series of regulations about how high and how far to fly it. Two of the most important rules: Never fly near an airport, and never let the thing out of your sight.

Zipline’s aircraft are far more capable machines, able to fly for miles and miles. By necessity, they operate well beyond the sight of any human operator, or what’s called “beyond visual line of sight” (BVLOS). In 2023, Zipline was the first commercial operator to get clearance for BVLOS flights.

Zipline’s aircraft operate under a series of FAA classifications—specifically, Part 107, Part 135, and the upcoming Part 108, which will formalize BVLOS operation. Flying under those rules, the uncrewed aircraft navigate through controlled airspace, and even near airports, with the help of FAA-mandated transponder data as well as onboard sensors that detect an approaching aircraft and automatically avoid it.

A Zipline drone testing facility. Seen on the right is one of the “trees.” Credit: Tim Stevens

In fact, just about everything about Zipline’s aircraft is automatic. Onboard sensors sample the air through pitot tubes, detecting bad weather. The craft use this data to reroute themselves around the problem, then report back to save subsequent flights the hassle.

Wind speed and direction are also calculated, ensuring that deliveries are dropped with accuracy. Once the things are in the air, even the Zipline operators aren’t sure which way they’ll fly, only that they’ll figure out the right way to get the package there and return safely.
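The article doesn’t describe how Zipline’s software actually performs that calculation, but the underlying arithmetic is the classic wind triangle: compare where the aircraft is pointed and how fast it’s moving through the air (heading plus pitot airspeed) with where it’s actually going over the ground (GPS), and the difference is the wind. A minimal sketch in Python, assuming that generic textbook approach:

```python
import math

# Hedged illustration of wind-triangle arithmetic: the wind vector is the difference
# between ground velocity (e.g., from GPS) and air velocity (heading + pitot airspeed).
# This is a generic method, not a description of Zipline's onboard software.

def estimate_wind(ground_speed, ground_track_deg, airspeed, heading_deg):
    """All speeds in the same unit (say, m/s); angles in degrees, 0 = north, clockwise."""
    gx = ground_speed * math.sin(math.radians(ground_track_deg))  # east component over ground
    gy = ground_speed * math.cos(math.radians(ground_track_deg))  # north component over ground
    ax = airspeed * math.sin(math.radians(heading_deg))           # east component through the air
    ay = airspeed * math.cos(math.radians(heading_deg))           # north component through the air
    wx, wy = gx - ax, gy - ay                                     # the wind the aircraft is riding
    wind_speed = math.hypot(wx, wy)
    wind_toward_deg = math.degrees(math.atan2(wx, wy)) % 360      # bearing the wind blows toward
    return wind_speed, wind_toward_deg

# Example: flying due north at 20 m/s airspeed while tracking 10 degrees east of north
# at 22 m/s over the ground implies roughly a 4 m/s wind blowing toward the east-northeast.
print(estimate_wind(22.0, 10.0, 20.0, 0.0))
```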

Zipline operates two separate aircraft suited to different mission types. The aircraft clinging to the aluminum trees, the type that will soon be exploring the skies over Dallas, are internally called Platform 2, or P2, and each is actually two aircraft in one.

A P2 drone can hover in place using five propellers and take off vertically before seamlessly transitioning into efficient forward flight. When it reaches its destination, doors on the bottom open, and a second aircraft emerges. This baby craft, called a “Zip,” drops down on a tether.

Fins ensure the tethered craft stays facing into the wind while a small propeller at the rear keeps it from blowing off-target. When it touches the ground, its doors pop open, gently depositing a package from a cargo cavity that’s big enough for about four loaves of bread. Maximum payload capacity is eight pounds, and payloads can be delivered up to about 10 miles away.

Where there’s a P2, there must be a P1, and while Zipline’s first aircraft serves much the same purpose, it does so in a very different way. The P1 is a fixed-wing aircraft, looking for all the world like a hobbyist’s radio-controlled model, just bigger and way more expensive.

The P1 launches into the sky like a glider, courtesy of a high-torque winch that slings it aloft before its electric prop takes over. It can fly for over 120 miles on a charge before dropping its cargo, a package that glides to the ground via parachute.

The P1 slows momentarily during the drop and then buzzes back up to full speed dramatically before turning for home. There’s no gentle, vertical landing here. It instead cruises precisely toward a wire suspended high in the air. An instant before impact, it noses up, exposing a metal hook to the wire, which stops the thing instantly.

In naval aviator parlance, it’s an OK three-wire every time, and thanks to hot-swappable batteries, a P1 can be back in the air in just minutes. That quick turnaround has helped the company complete more than a million successful deliveries, many carrying lifesaving supplies.

From Muhanga to Mesquite

The first deployment from the company that would become Zipline was in 2016 in Muhanga, Rwanda, beginning with the goal of delivering vaccines and other medical supplies quickly and reliably across the untamed expanses of Africa. Eric Watson, now head of systems and safety engineering at Zipline, was part of that initial crew.

“Our mission is to enable access to instant logistics to everyone in the world,” he said. “We started with one of the most visceral pain points, of being able to go to a place, operating in remote parts where access to medicine was a problem.”

Rwanda turned out to be an incredible proving ground for the technology, but this wasn’t just some beta test designed to deliver greater ROI. Zipline has already had success in a more important area: delivering lifesaving medicine. The company’s drones deliver things like vaccines, anti-venoms, and plasma. A 2023 study from the Wharton School at the University of Pennsylvania found that Zipline’s blood delivery service reduced deaths from postpartum hemorrhage by 51 percent.

That sort of promise attracted Lauren Lacey to the company. She’s Zipline’s head of integration quality and manufacturing engineering. A former engineer at Sandia Labs, where she spent a decade hardening America’s military assets, Lacey has brought that expertise to whipping Zipline’s aircraft into shape.

Lauren Lacey, Zipline’s head of integration quality and manufacturing engineering. Credit: Tim Stevens

Lacey walked me through the 11,000-square-foot Bay Area facility she and her team have turned into a stress-testing house of horrors for uncrewed aircraft. I witnessed everything from latches being subjected to 120° F heat while bathed in ultra-fine dust to a giant magnetic resonance device capable of rattling a circuit board with 70 Gs of force.

It’s all in the pursuit of creating an aircraft that can survive 10,000 deliveries. The various test chambers can run upward of 2,500 tests per day, helping the Zipline team iterate quickly, not only adding strength but also peeling away unneeded mass.

“Every single gram that we put on the aircraft is one less that we can deliver to the customer,” Lacey said.

Now zipping

Zipline already has a small test presence in Arkansas through a pilot program with Walmart, but today’s rollout is a big step forward. Once their area is added to the system, customers can place orders through a dedicated Zipline app. Walmart is the only partner for now, but the company plans to offer more products on the retail and healthcare fronts, including restaurant food deliveries.

The app will show Walmart products eligible for this sort of delivery, calculating weight and volume to ensure that your order isn’t too big. The P2’s eight-pound payload may seem restrictive, but Jeff Bezos, in touting Amazon’s own future drone delivery program, previously said that 86 percent of the company’s deliveries are five pounds or less.
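That screening step is, at bottom, simple arithmetic against the P2’s limits described above (about eight pounds of payload and a cargo bay that fits roughly four loaves of bread). Here’s a minimal sketch of how such a gate could work; the function, the item tuples, and the volume figure are illustrative assumptions, not Zipline’s or Walmart’s actual app logic.

```python
# Hypothetical payload-eligibility check, not Zipline's actual code.
# The 8 lb limit comes from the article; the volume cap is an assumed placeholder
# standing in for "a cargo cavity big enough for about four loaves of bread."

MAX_PAYLOAD_LB = 8.0
MAX_CARGO_VOLUME_CU_IN = 1100.0  # assumption, not a published spec

def order_fits_p2(items):
    """items: list of (weight_lb, volume_cu_in) tuples for everything in the cart."""
    total_weight = sum(weight for weight, _ in items)
    total_volume = sum(volume for _, volume in items)
    return total_weight <= MAX_PAYLOAD_LB and total_volume <= MAX_CARGO_VOLUME_CU_IN

# A couple of small pharmacy items easily qualify; a 12 lb bag of dog food does not.
print(order_fits_p2([(1.2, 40.0), (0.5, 25.0)]))  # True
print(order_fits_p2([(12.0, 600.0)]))             # False
```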

Amazon suspended its prototype drone program last year for software updates but is flying again in pilot programs in Texas and Arizona. The company has not provided an update on the number of flights lately, but the most recent figures were fewer than 10,000 drone deliveries. For comparison, Zipline currently completes thousands per day. Another future competitor, Alphabet-backed Wing, has flown nearly a half-million deliveries in the US and abroad.

Others are vying for a piece of the airborne delivery pie, too, but nobody I spoke with at Zipline seems worried. From what I saw, they have reason for confidence. The winds on that ranch in California were so strong during my visit that towering dust devils were dancing between the disaffected cattle. Despite that, the drones flew fast and true, and my requested delivery of bandages and medicine was safely and quickly deposited on the ground just a few feet from where I stood.

It felt like magic, yes, but more importantly, it was one of the most disruptive demonstrations I’ve seen. While the tech isn’t ideally suited for every situation, it may help cut down on the delivery trucks that are increasingly clogging rural roads, all while getting more things to more people who need them, and doing it emissions-free.

Don’t call it a drone: Zipline’s uncrewed aircraft wants to reinvent retail Read More »

the-speech-police:-chairman-brendan-carr-and-the-fcc’s-news-distortion-policy

The speech police: Chairman Brendan Carr and the FCC’s news distortion policy



FCC invokes 1960s-era policy to punish media after decades of minimal enforcement.

FCC Chairman Brendan Carr delivers a speech at Mobile World Congress in Barcelona on March 3, 2025. Credit: Getty Images | AFP

Federal Communications Commission Chairman Brendan Carr is taking a hard line against broadcast TV stations accused of bias against Republicans and President Trump. To pressure broadcasters, Carr is invoking the rarely enforced news distortion policy, which was developed starting in the late 1960s, and he says the FCC should consider revoking broadcast licenses.

The FCC has regulatory authority over broadcasters with licenses to use the public airwaves. But Carr’s two immediate predecessors—Democrat Jessica Rosenworcel and Republican Ajit Pai—both said that punishing stations based on the content of news programs would violate the First Amendment right to free speech.

Rosenworcel and Pai’s agreement continued a decades-long trend of the FCC easing itself out of the news-regulation business. Two other former FCC chairs—Republican Alfred Sikes and Democrat Tom Wheeler—have urged Carr to change course.

Carr has multiple probes in progress, and his investigation into CBS over the editing of an interview with Kamala Harris has drawn condemnations from both liberal and conservative advocacy groups that describe it as a threat to the Constitutional right to free speech. One plea to drop the investigation came in a March 19 letter from conservative groups including the Center for Individual Freedom, Grover Norquist’s Americans for Tax Reform, and the Taxpayers Protection Alliance.

“While we understand the concerns that motivate the complaint, we nonetheless fear that an adverse ruling against CBS would constitute regulatory overreach and advance precedent that can be weaponized by future FCCs,” the letter said. The letter argued that “Democrats and leftwing activist groups have repeatedly worked to weaponize” the government against free speech and that the FCC should “help guard against future abuses by Democrats and leftwing organizations by streamlining license renewals and merger reviews and eliminating the news distortion and news hoax rules.”

“The flimsiest of complaints”

Andrew Jay Schwartzman, an expert on media law and senior counselor for the Benton Institute for Broadband & Society, told Ars that “the CBS complaint is utterly lacking in merit. What is alleged doesn’t come within light-years of a violation of any FCC policy.”

The Foundation for Individual Rights and Expression (FIRE), an advocacy group, called Carr’s investigation of CBS “a political stunt,” an “illegitimate show trial,” and an “unconstitutional abuse of regulatory authority.” Democratic lawmakers are demanding answers from Carr about what they call “bogus investigations” designed to “target and intimidate news organizations and broadcasters in violation of the First Amendment.”

The CBS investigation was also lambasted in comments submitted by Christopher Terry, a professor of media law and ethics at the University of Minnesota, and J. Israel Balderas, a journalism professor at Elon University who is also a First Amendment attorney and a former FCC media advisor.

“The agency under Brendan Carr appears to be, based on the flimsiest of complaints, pursuing media outlets critical of Donald Trump during the 2024 campaign, while ignoring similar complaints from the public about Trump-friendly media outlets,” Terry and Balderas wrote. “Being the speech police is not the FCC’s job, but enforcing any restrictions in a selective, much less a partisan, way is problematic, and likely to lead to extensive legal actions challenging FCC authority.”

FCC’s long shift away from news regulation

The FCC has historically regulated broadcast news with the Fairness Doctrine, which no longer exists, and the news distortion policy, which is still in place. The Fairness Doctrine was introduced in 1949 to guarantee “that the public has a reasonable opportunity to hear different opposing positions on the public issues of interest and importance in the community.” This requirement to air contrasting views remained in place until 1987.

After losing a court case brought by a TV station, the FCC was forced to reconsider its enforcement of the Fairness Doctrine and decided to repeal it. The Reagan-era FCC concluded that the Fairness Doctrine “violates the First Amendment” and works against the public interest. “Despite the physical differences between the electronic and print media, their roles in our society are identical, and we believe that the same First Amendment principles should be equally applicable to both,” the FCC said at the time.

US regulation of broadcast news continued to loosen through a series of commission decisions and court rulings. “Even the relaxation of non-content regulations, such as the extension of stations’ license terms from three to eight years, and adoption of rules that make challenges to license renewals by the public or potential competitors almost impossible, have bolstered broadcasters’ editorial rights against outside review,” said a 2001 article by Santa Clara University professor Chad Raphael in the journal Communication Law and Policy.

The FCC’s general shift away from regulating news content made it surprising that the news distortion policy survived, Raphael wrote. “Given this deregulatory trend, it is remarkable that the Commission has preserved its little-known rules against licensees’ deliberately distorting the news… The distortion rules have drawn scant commentary in the regulatory literature, especially in contrast to the outpouring of debate over their cousin, the Fairness Doctrine,” the article said.

But the FCC never issued many findings of news distortion, and such findings have been nearly nonexistent in recent decades. Raphael’s analysis found 120 decisions on news distortion between 1969 and 1999, and only 12 of them resulted in findings against broadcasters. Those 12 decisions were generated by eight cases, as several of the cases “generated multiple decisions as they went through the appeals process.”

“The number of reported decisions drops off dramatically after 1976, and there is only one finding of distortion after 1982, when the Reagan-era FCC began to remove content regulations on broadcast news,” Raphael wrote. The one post-1982 finding of distortion was issued in a letter of admonishment to NBC in 1993 “for staging a segment of a Dateline NBC report on unsafe gas tanks in General Motors trucks,” Raphael wrote.

GM investigated the incident and NBC “admitted to staging the explosion, made an on-air apology to GM, fired three producers who contributed to the segment, and eventually dismissed its news president,” he wrote. The FCC itself sent the letter quietly, with “the first mention of this action appearing in a 1999 decision rejecting a challenge to NBC’s license renewals.”

Investigations rare, penalties even rarer

The rare findings of news distortion were usually accompanied by other infractions. “Most penalties consisted of issuing letters of admonishment or censure that did not figure heavily in subsequent license renewals, all of which were successful,” Raphael wrote.

Despite Raphael’s paper being nearly a quarter-century old, it’s practically up to date. “Since the time of Raphael’s study, it appears that the Commission has only considered allegations of news distortion in a very small number of cases,” said a 2019 paper by Joel Timmer, a professor of film, television, and digital media at Texas Christian University.

Timmer found eight post-1999 cases in which news distortion allegations were considered. Most of the allegations didn’t get very far, and none of them resulted in a finding of news distortion.

The FCC technically has no rule or regulation against news distortion. “Instead, it has a news distortion policy, developed ‘through the adjudicatory process in decisions resolving challenges to broadcasters’ licenses,'” Timmer wrote.

The FCC dismissed an allegation of news distortion over broadcast networks incorrectly projecting that Al Gore would win Florida in the 2000 presidential election, he wrote. The FCC said the incorrect projections were “not a sufficient basis to initiate such an investigation.”

The FCC did investigate an allegation of news distortion in 2007. Two reporters at Florida station WTVT alleged a violation when their employer failed to air reports on the use of synthetic bovine growth hormone by dairy farmers. “The reporters alleged that station management and ownership demanded changes in their report as a result of pressure from Monsanto, the company that produces BGH,” but the FCC decided it was “a legitimate editorial dispute” and not “a deliberate effort to coerce [the reporters] into distorting the news,” Timmer wrote.

There was also a 2007 case involving a Detroit TV station’s report “that a local official and several prominent local business people consorted with prostitutes during a fishing trip to Costa Rica,” Timmer wrote. “It was alleged that a reporter from WXYZ-TV actually paid prostitutes to stay at the hotel at which the trip’s participants were staying, then falsely reported that the participants consorted with them. While the FCC acknowledged that, if true, this could constitute staging of the news, there was a lack of extrinsic evidence to establish that the licensee, its top management, or its news management were involved in an attempt to deliberately distort or falsify the news, causing the news distortion claim to fail.”

Timmer’s paper summarized the FCC’s post-1999 news distortion enforcement as follows:

In addition to the post-1999 cases already discussed—those involving reporting on bovine growth hormone, erroneous projections that Al Gore would win Florida in the 2000 presidential election, and reporting regarding prostitutes in Costa Rica with a public official and business people—charges of news distortion were raised and discussed in only a handful of instances. In addition to these three cases, there were five other cases since 1999 in which the Commission considered allegations of news distortion. In only two of the eight cases was there any detailed discussion of news distortion claims: the BGH story and the story involving prostitutes in Costa Rica. Significantly, in none of the cases was news distortion found to have occurred.

Terry told Ars that he’s not aware of any news distortion findings since the 2019 paper.

The FCC has a separate broadcast hoax rule enacted in 1992. As of 2000, “no broadcaster had ever been fined pursuant to the rule, nor had any stations lost their licenses for violating the rule,” and “it appears that the FCC has considered allegations of broadcast hoaxes only three times since 2000, with none of those cases resulting in the FCC finding a violation of the rule,” Timmer wrote.

The 60 Minutes investigation

In one of her last official acts before Trump’s inauguration and her departure from the FCC, Rosenworcel dismissed complaints of bias against Trump related to ABC’s fact-checking during a presidential debate, the editing of a CBS 60 Minutes interview with Harris, and NBC putting Harris on a Saturday Night Live episode. Rosenworcel also dismissed a challenge to a Fox station license alleging that Fox willfully distorted news with false reports of fraud in the 2020 election that Trump lost.

Carr quickly revived the three complaints alleging bias against Trump, which were filed by a nonprofit law firm called the Center for American Rights. Of these, the ABC and CBS complaints allege news distortion. The NBC complaint alleges a violation of the separate Equal Time rule. The complaints were filed against individual broadcast stations because the FCC licenses stations rather than the networks that own them or are affiliated with them.

Carr has repeatedly expressed interest in the complaint over 60 Minutes, which alleged that CBS misled viewers by airing two different responses to the same question about Israeli Prime Minister Benjamin Netanyahu, one on 60 Minutes and the other on Face the Nation. CBS’s defense—which is supported by the unedited transcript and video of the interview—is that the two clips show different parts of the same answer given by Harris.

On February 5, the Carr-led FCC issued a public notice seeking comment on the CBS investigation. The FCC’s public notices aren’t generally seen by many people, but the FCC tried to encourage participation in this proceeding. The agency temporarily added a banner message to the top of the consumer complaints page to urge the public to submit comments about the 60 Minutes interview.

“Interested in adding your comments to the proceeding investigating news distortion in the airing of a ’60 Minutes’ interview with then Vice President Kamala Harris?” the banner message said, linking to a page that explained how to submit comments on the proceeding.

Former chairs blast Carr

One filing was submitted by the former chairs Sikes and Wheeler, plus three other former FCC commissioners: Republican Rachelle Chong, Democrat Ervin Duggan, and Democrat Gloria Tristani. “These comments are submitted to emphasize the unprecedented nature of this news distortion proceeding, and to express our strong concern that the Federal Communications Commission may be seeking to censor the news media in a manner antithetical to the First Amendment,” the bipartisan group of former FCC chairs and commissioners wrote.

The FCC has historically “enforced the [news distortion] policy very rarely, and it has adopted guardrails requiring that complaints be summarily dismissed in all but the most exceptional circumstances,” they wrote, adding that there are no exceptional circumstances warranting an investigation into CBS.

“The Commission’s departures from its typical practice and precedent are especially troubling when viewed in context. This Administration has made no secret of its desire to revoke the licenses of broadcasters that cover it in ways the President considers unfavorable,” the filing said.

Pointing to the Raphael and Timmer analyses, the former FCC leaders wrote that the agency “issued findings of liability on news distortion in just eight cases between 1969 and 2019—and in fact in just one case between 1985 and 2019. None of the cases that found news distortion concerned the way a broadcaster had exercised its editorial discretion in presenting the news. Instead, each case involved egregious misconduct, including the wholesale fabrication of news stories.”

The FCC’s news distortion policy applies a multi-part test, the group noted. A finding of news distortion requires “deliberate distortion” rather than mere inaccuracy or differences of opinion; “extrinsic evidence (i.e., beyond the broadcast itself) demonstrating that the broadcaster deliberately distorted or staged the news”; and a showing that “the distortion must apply to a ‘significant event,’ rather than minor inaccuracies or incidental aspects of the report.” Finally, FCC policy is to “only consider taking action on the broadcaster’s license if the extrinsic evidence shows the distortion involved the ‘principals, top management, or news management’ of the licensee, as opposed to other employees.”

The FCC has historically punished licensees only after dramatic violations, like “elaborate hoaxes, internal conspiracies, and reports conjured from whole cloth,” they wrote. There is “no credible argument” that the allegations against CBS “belong in the same category.”

CBS transcript and video support network

Kamala Harris on 60 Minutes. Credit: CBS

The Center for American Rights complaint says that an FCC investigation of “extrinsic evidence” could include examining outtakes to determine whether “the licensee has deliberately suppressed or altered a news report.” The complaint criticized CBS for not providing the complete transcript of the interview.

In late January, the Carr-led FCC demanded that CBS provide an unedited transcript and camera feeds of the interview. CBS provided the requested materials and made them available publicly. The transcript supports CBS’s defense because it shows that what the Center for American Rights claimed were “two completely different answers” were just two different sentences from the same response.

“We broadcast a longer portion of the vice president’s answer on Face the Nation and broadcast a shorter excerpt from the same answer on 60 Minutes the next day. Each excerpt reflects the substance of the vice president’s answer,” CBS said.

The Center for American Rights complained that in one clip, Harris answered the question about Netanyahu by saying, “Well, Bill, the work that we have done has resulted in a number of movements in that region by Israel that were very much prompted by, or a result of many things, including our advocacy for what needs to happen in the region.”

In the second clip, Harris responded to the question by saying, “We are not going to stop pursuing what is necessary for the United States to be clear about where we stand on the need for this war to end.”

“Same interview, same question, two completely different answers,” the Center for American Rights’ complaint said.

But the CBS transcript and video show that Harris spoke these two sentences as part of one answer to the question. CBS aired the two sentences in different clips, but neither contradicts the other.

Center for American Rights stands by complaint

The Center for American Rights declined to comment on the transcript and video when contacted by Ars, but it pointed us to the final comments it submitted in the FCC proceeding. The filing argues for an expansive approach to regulating news distortion, saying that “slanting the news to benefit one political candidate violates the distortion doctrine.”

“The core of our concern is that 60 Minutes’ slice-and-dice journalism was an act of slanting the news to favor a preferred candidate and part of a pattern of CBS News consistently favoring a candidate and party… The Commission is uniquely positioned as the relevant authority with the power to investigate to determine whether CBS engaged in intentional news slanting,” the filing said.

The Center for American Rights filing also complained that “Fox and Sinclair [we]re subject to relentless regulatory pressure under the prior chair… but then everyone screams that the First Amendment is being eviscerated when CBS is subject to attention under the same policy from the new chair.”

“‘Selective enforcement’ is when Fox and Sinclair are constantly under regulatory pressure from Democrats at the FCC and in the Congress and from their outside allies, but then unchecked ‘press freedom’ is the sacrosanct principle when CBS allegedly transgresses the same lines when Republicans are in power,” the group said, responding to arguments that punishing CBS would be selective enforcement.

As previously mentioned in this article, Rosenworcel rejected a news distortion complaint and license challenge that targeted Fox’s WTXF-TV in Philadelphia. “Such content review in the context of a renewal application would run afoul of our obligations under the First Amendment and the statutory prohibition on censorship and interference with free speech rights,” Rosenworcel’s FCC said.

The conservative Sinclair Broadcast Group was fined $48 million for portraying sponsored TV segments as news coverage and other violations in the largest-ever civil penalty paid by a broadcaster in FCC history. But that happened under Republican Ajit Pai, the FCC chair during Trump’s first term. Pai’s FCC also blocked Sinclair’s attempt to buy Tribune Media Company.

Carr defended his investigation of CBS in a letter to Sen. Richard Blumenthal (D-Conn.). “During the Biden Administration, the FCC and Democrats across government repeatedly weaponized our country’s communications laws and processes. In contrast, I am restoring the FCC’s commitment to basic fairness and even-handed treatment for all,” Carr wrote.

Carr said he “put the CBS complaint on the same procedural footing that the Biden FCC determined it should apply to the Fox complaint.” By this, he means that the previous administration held a proceeding to consider the Fox complaint instead of dismissing it outright.

“The Biden FCC’s approach to the Fox petition stands in stark contrast to the approach the Biden FCC took to the CBS petition. Unlike the Fox petition, the Biden FCC just summarily dismissed the CBS one,” Carr wrote. Carr also said the Biden-era FCC “fail[ed] to process hundreds of routine Sinclair license renewals” and that the FCC is now “clearing and renewing those licenses again.”

The Fox case involved very different allegations than the CBS one. While CBS is facing investigation for airing two parts of an interviewee’s answer in two different broadcasts, a Delaware judge ruled in 2023 that Fox News made false and defamatory statements claiming that Dominion Voting Systems committed election fraud by manipulating vote counts through its software and algorithms. Fox subsequently agreed to pay Dominion $788 million in a settlement instead of facing trial.

Carr could test FCC authority in court

The Rosenworcel FCC said the CBS complaint was meritless in its dismissal. “Opening a news distortion enforcement action under Commission precedent—as rare as it is—turns on the important question of whether any information or extrinsic evidence was submitted to the Commission indicating an ‘intentional’ or ‘deliberate’ falsification of the news,” the decision said. “The Complaint submitted fails to do so. The Commission simply cannot wield its regulatory authority in a manner completely inconsistent with long-settled precedent that the Commission not ‘second guess’ broadcast decisions.”

The comments submitted by former chairs and commissioners said the “transcript confirms that the editing choices at issue lie well within the editorial judgment protected by the First Amendment.” TechFreedom, a libertarian-leaning think tank, told the FCC that “if the new standard for triggering a news distortion analysis is that any edits of raw interview video can be subject to challenge, then the FCC will spend the next four years, at least, fielding dozens, hundreds, thousands of news distortion complaints. Since every taped interview is edited, every taped interview that is aired will be ripe for an FCC complaint, which will have to be adjudicated. The news distortion complaint process will be weaponized by both political parties, and the business of the FCC will grind to a halt as it will have to assign more and more FTEs [full-time employees] to processing these complaints.”

Although CBS appears to have a strong defense, Carr can make life difficult for broadcasters simply by opening investigations. As experts have previously told Ars, the FCC can use its rules to harass licensees and hold up applications related to business deals. Carr said in November that the news distortion complaint over the 60 Minutes interview would factor into the FCC’s review of CBS owner Paramount’s transfer of TV broadcast station licenses to Skydance.

Jeffrey Westling, a lawyer who is the director of technology and innovation policy at the conservative American Action Forum, has written that the high legal bar for proving news distortion means that cases must involve something egregious—like a bribe or instructions from management to distort the news. But Westling has told Ars it’s possible that a “sympathetic” court could let the FCC use the rule to deny a transfer or renewal of a broadcast license.

“The actual bounds of the rule are not well-tested,” said Westling, who argues that the news distortion policy should be eliminated.

An FCC webpage that was last updated during Rosenworcel’s term says the FCC’s authority to enforce its news distortion policy is narrow. “The agency is prohibited by law from engaging in censorship or infringing on First Amendment rights of the press,” the FCC said, noting that “opinion or errors stemming from mistakes are not actionable.”

1960s FCC: “No government agency can authenticate the news”

The high bar set by the news distortion policy isn’t just about issuing findings of distortion—it is supposed to prevent many investigations in the first place, the Rosenworcel FCC said in its dismissal of the CBS complaint:

Indeed, the Commission has established a high threshold to commencing any investigation into allegations of news distortion. It is not sufficient for the Complainant to show that the material in question is false or even that the Licensee might have known or should have known about the falsity of the material. A news distortion complaint must include extrinsic evidence that the Licensee took actions to engage in a deliberate and intentional falsification of the news.

The comments submitted by Terry and Balderas said that “case law is clear: news distortion complaints must meet an extraordinary burden of proof.”

“The current complaint against CBS fails to meet this standard,” Terry and Balderas wrote. “Editing for clarity, brevity, or production value is a standard journalistic practice, and absent clear evidence of deliberate fabrication, government intervention is unwarranted. The current complaint against CBS presents no extrinsic evidence whatsoever—no internal memos, no whistleblower testimony, no evidence of financial incentives—making it facially deficient under the extrinsic evidence standard consistently applied since Hunger in America.”

Hunger in America was a 1968 CBS documentary that the FCC investigated. The FCC’s decision against issuing a finding of news distortion became an important precedent that was cited in a 1985 court case that upheld another FCC decision to reject an allegation of news distortion.

“The FCC’s policy on rigging, staging, or distorting the news was developed in a series of cases beginning in 1969,” said the 1985 ruling from the US Court of Appeals for the District of Columbia Circuit. “In the first of these, Hunger In America, CBS had shown an infant it said was suffering from malnutrition, but who was actually suffering from another ailment.”

The 1960s FCC found that “[r]igging or slanting the news is a most heinous act against the public interest” but also that “in this democracy, no government agency can authenticate the news, or should try to do so.” As the DC Circuit Court noted, in Hunger in America and “in all the subsequent cases, the FCC made a crucial distinction between deliberate distortion and mere inaccuracy or difference of opinion.”

Carr: FCC “not close” to dismissing complaint

Despite this history of non-enforcement except in the most egregious cases, Carr doesn’t seem inclined to end the investigation into what seems to be a routine editing decision. “Carr believes CBS has done nothing to bring the commission’s investigation to an end, including a fix for the alleged pervasive bias in its programming, according to people with knowledge of the matter,” said a New York Post report on March 28.

The report said the Paramount/Skydance merger “remains in FCC purgatory” and that the news distortion investigation is “a key element” holding up FCC approval of the transaction. An anonymous FCC official was quoted as saying that “the case isn’t close to being settled right now.”

We contacted Carr and will update this article if we get a response. But Carr confirmed to another news organization recently that he doesn’t expect a quick resolution. He told Reuters on March 25 that “we’re not close in my view to the position of dismissing that complaint at this point.”


The speech police: Chairman Brendan Carr and the FCC’s news distortion policy Read More »