Mastalir said China is “copying the US playbook” with the way it integrates satellites into more conventional military operations on land, in the air, and at sea. “Their specific goals are to be able to track and target US high-value assets at the time and place of their choosing,” Mastalir said.
China’s strategy, known as Anti-Access/Area Denial, or A2AD, is centered on preventing US forces from accessing international waters extending hundreds or thousands of miles from mainland China. Some of the islands occupied by China within the last 15 years are closer to the Philippines, another treaty ally, than to China itself.
The A2AD strategy first “extended to the first island chain (bounded by the Philippines), and now the second island chain (extending to the US territory of Guam), and eventually all the way to the West Coast of California,” Mastalir said.
US officials say China has based anti-ship, anti-air, and anti-ballistic weapons in the region, and many of these systems rely on satellite tracking and targeting. Mastalir said his priority at Indo-Pacific Command, headquartered in Hawaii, is to defend US and allied satellites, or “blue assets,” and challenge “red assets” to break the Chinese military’s “long-range kill chains and protect the joint force from space-enabled attack.”
What this means is the Space Force wants to have the ability to disable or destroy the satellites China would use to provide communication, command, tracking, navigation, or surveillance support during an attack against the US or its allies.
Buildings and structures are seen on October 25, 2022, on an artificial island built by China on Subi Reef in the Spratly Islands of the South China Sea. China has progressively asserted its claim of ownership over disputed islands in the region. Credit: Ezra Acayan/Getty Images
Mastalir said he believes China’s space-based capabilities are “sufficient” to achieve the country’s military ambitions, whatever they are. “The sophistication of their sensors is certainly continuing to increase—the interconnectedness, the interoperability. They’re a pacing challenge for a reason,” he said.
“We’re seeing all signs point to being able to target US aircraft carriers… high-value assets in the air like tankers, AWACS (Airborne Warning And Control System),” Mastalir said. “This is a strategy to keep the US from intervening, and that’s what their space architecture is.”
That’s not acceptable to Pentagon officials, so Space Force personnel are now training for orbital warfare. Just don’t expect to know the specifics of any of these weapons systems any time soon.
“The details of that? No, you’re not going to get that from any war-fighting organization—’let me tell you precisely how I intend to attack an adversary so that they can respond and counter that’—those aren’t discussions we’re going to have,” Saltzman said. “We’re still going to protect some of those (details), but broadly, from an operational concept, we are going to be ready to contest space.”
A new administration
The Space Force will likely receive new policy directives after President-elect Donald Trump takes office in January. The Trump transition team hasn’t identified any changes coming for the Space Force, but a list of policy proposals known as Project 2025 may offer some clues.
Published by the Heritage Foundation, a conservative think tank, Project 2025 calls for the Pentagon to pivot the Space Force from a mostly defensive posture toward offensive weapons systems. Christopher Miller, who served as acting secretary of defense in the first Trump administration, authored the military section of Project 2025.
Miller wrote that the Space Force should “reestablish offensive capabilities to guarantee a favorable balance of forces, efficiently manage the full deterrence spectrum, and seriously complicate enemy calculations of a successful first strike against US space assets.”
Saltzman met with Trump last month while attending a launch of SpaceX’s Starship rocket in Texas, but he said the encounter was incidental. Saltzman was already there for discussions with SpaceX officials, and Trump’s travel plans only became known the day before the launch.
The conversation with Trump at the Starship launch didn’t touch on any policy details, according to Saltzman. He added that the Space Force hasn’t yet had any formal discussions with the Trump transition team.
Regardless of the direction Trump takes with the Space Force, Saltzman said the service is already thinking about what to do to maintain what the Pentagon now calls “space superiority”—a twist on the term air superiority, which might have seemed just as fanciful at the dawn of military aviation more than a century ago.
“That’s the reason we’re the Space Force,” Saltzman said. “So administration to administration, that’s still going to be true. Now, it’s just about resourcing and the discussions about what we want to do and when we want to do it, and we’re ready to have those discussions.”
Adding electric power and a battery turns the Urus from hit-or-miss to just right.
The original Urus was an SUV that nobody particularly wanted, even if the market was demanding it. With luxury manufacturers tripping over themselves to capitalize on a seemingly limitless demand for taller all-around machines, Lamborghini was a little late to the party.
The resulting SUV has done its job, boosting Lamborghini’s sales and making up more than half of the company’s volume last year. Even so, the first attempt was just a bit tame. That most aggressive of supercar manufacturers produced an SUV with the air of the company’s lower, more outrageous performance machines, but it didn’t quite deliver the level of prestige that its price demanded.
The Urus Performante changed that, adding enough visual and driving personality to make itself a legitimately exciting machine to drive or to look at. Along the way, though, it lost a bit of the most crucial aspect of an SUV: everyday livability. On paper, the Urus SE is just a plug-in version of the Urus, with a big battery adding some emissions-free range. In reality, it’s an SUV with more performance and more flexibility, too. This is the Urus’ Goldilocks moment.
If you’re looking for something subtle, you shouldn’t be looking at an Urus. Credit: Tim Stevens
The what
The Urus SE starts with the same basic platform as the other models in the line, including a 4.0 L turbocharged V8 that sends power to all four wheels through an eight-speed automatic and an all-wheel-drive system.
All that has received a strong dose of electrification, starting with a 25.9 kWh battery pack sitting far out back that helps to offset the otherwise nose-heavy SUV while also adding a playful bit of inertia to its tail. More on that in a moment.
That battery powers a 189 hp (141 kW) permanent-magnet synchronous electric motor fitted between the V8 and its transmission. The positioning means it has full access to all eight speeds and can drive the car at up to 81 mph (130 km/h). That, plus a Lamborghini-estimated 37 miles (60 km) of range, means this is a large SUV that could feasibly cover a lot of people’s commutes emissions-free.
The V8 lives here. Credit: Tim Stevens
But when that electric motor’s power is paired with the 4.0 V8, the result is 789 hp (588 kW) of total system power delivered to all four wheels. And with the electric torque coming on strong and early, it adds not only shove but throttle response, too.
Other updates
At a glance, the Urus SE looks more or less the same as the earlier renditions of the same SUV. Look closer, though, and you’ll spot several subtle changes, including a hood that eases more gently into the front fenders and a new spoiler out back that Lamborghini says boosts rear downforce by 35 percent over the Urus S.
Far and away the most striking parts of the car, though, are the 22-inch wheels wrapped around carbon-ceramic brakes. They give this thing the look of a rolling caricature of a sport SUV in the best way possible. On the body of the machine itself, you’ll want to choose a properly eye-catching color, like the Arancio Egon you see here. I’ve been lucky to drive some pretty special SUVs over the years, and none have turned heads like this one did when cruising silently through a series of small Italian towns.
Things are far more same-y on the inside. At first blush, nothing has changed inside the Urus SE, and that’s OK. You have a few new hues of Technicolor hides to choose from—the car you see here is outfitted in a similarly pungent orange to its exterior color, making it a citrus dream through and through. The sports seats aren’t overly aggressive, offering more comfort than squeeze, but I’d say that’s just perfect.
Buttons and touchscreens vie with less conventional controls inside the Urus. Credit: Tim Stevens
But that’s all much the same as prior Urus versions. The central infotainment screen is slightly larger at 12.3 inches, and the software is lightly refreshed, but it’s the same Audi-based system as before. A light skinning full of hexagons makes it look and feel a little more at home in a car with a golden bull on the nose.
Unfortunately, while the car is quicker than the original model, the software isn’t. The overall experience is somewhat sluggish, especially when moving through the navigation system. Even the regen meter on the digital gauge cluster doesn’t change until a good half-second after you’ve pressed the brake pedal, an unfortunate place for lag.
The Urus SE offers six drive modes: Strada (street), Sport, Corsa (track), Sabbia (sand), Terra (dirt), and Neve (snow). There’s also a seventh, customizable Ego mode. As on earlier Urus models, these modes must be selected in that sequence. So if you want to go from Sport back to Strada, you need to cycle the mode selector knob five times—or go digging two submenus deep on the touchscreen.
Those can be further customized via a few buttons added beneath the secondary drive mode lever on the right. The top button enables standard Hybrid mode, where the gasoline and electric powertrains work together as harmoniously as possible for normal driving. The second button enters Recharge mode, which instructs the car to prioritize battery charge. The third and lowest button enters Performance mode, which gives you maximum performance from the hybrid system at the expense of charge.
Finally, a quick tug on the mode selector on the right drops the Urus into EV Drive.
Silent running
I started my time in the Urus SE driving into the middle of town, which was full of narrow streets, pedestrian-friendly speed limits, and aggressively piloted Fiats. Slow and steady is the safest way in these situations, so I was happy to sample the Urus’ all-electric mode.
To put it simply, it delivers. There’s virtually no noise from the drivetrain, a near-silent experience at lower speeds that helps assuage the stress such situations can cause. The experience was somewhat spoiled by some tire noise, but I’ll blame that on the Pirelli Scorpion Winter 2 tires fitted here. I can’t, however, blame the tires for a few annoying creaks and rattles, which aren’t exactly what I’d expect from an SUV at this price point.
Though there isn’t much power at your disposal in this mode, the Urus can still scoot away from lights and stop signs quickly and easily, even ducking through small gaps in tiny roundabouts.
It might not be subtle, but it can be practical. Credit: Tim Stevens
Dip more than three-quarters of the way into the throttle, though, and that V8 fires up and quickly joins the fun. The hand-off here can be a little less than subtle as power output surges quickly, but in a moment, the car goes from a wheezy EV to a roaring Lamborghini. And unlike a lot of plug-ins that stubbornly refuse to shut their engines off again when this happens, another quick pull of the EV lever silences the thing.
When I finally got out of town, I shifted over to Strada mode, the default mode for the Urus. I found this mode a little too lazy for my tastes, as it was reluctant to shift down unless I dipped far into the throttle, resulting in a bucking bull of acceleration when the eight-speed automatic finally complied.
The car only really came alive when I put it into Sport mode and above.
Shifting to Sport
Any hesitation or reluctance to shift is quickly obliterated as soon as you tug the drive mode lever into Sport. The SUV immediately forgets all about trying to be efficient, dropping a gear or two and making sure you’re never far from the power band, keeping the turbo lag from the V8 to a minimum.
The tachometer gets some red highlights in this mode, but you won’t need to look at it. There’s plenty of sound from the exhaust, augmented by some digital engine notes that I found distracting and unnecessary. Most importantly, the overall feel of the car changes dramatically. It leaps forward with the slightest provocation of the right pedal, really challenging the grip of the tires.
In my first proper sampling of the full travel of that throttle pedal, I was surprised at how quickly this latest Urus got frisky, kicking its tail out with an eager wag on a slight bend to the right. It wasn’t scary, but it was just lively enough to make me smile and feel like I was something more than a passenger in a hyper-advanced, half-electric SUV.
Credit: Tim Stevens
In other words, it felt like a Lamborghini, an impression only reinforced as I dropped the SUV down to Corsa mode and really let it fly. The transmission is incredibly eager to drop gears on the slightest bit of deceleration, enough so that I rarely felt the need to reach for the column-mounted shift paddles.
But despite the eagerness, the suspension remained compliant and everyday-livable in every mode. I could certainly feel the (many) imperfections in the rural Italian roads more when the standard air suspension was dialed over to its stiffest, but even then, it was never punishing. And in the softest setting, the SUV was perfectly comfortable despite those 22-inch wheels and tires.
I didn’t get a chance to sample the SUV’s off-road prowess, but the SE carries a torque-vectoring rear differential like the Performante, which should mean it will be as eager to turn and drift on loose surfaces as that other, racier Urus.
Both the Urus Performante and the SE start at a bit over $260,000, which means choosing between the two isn’t a decision to be made on price alone. Personally, I’d much prefer the SE. It offers plenty of the charm and excitement of the Performante mixed with even better everyday capability than the Urus S. This one’s just right.
Review: Amazing open-world environs round out a tight, fun-filled adventure story.
No need to put Harrison Ford through the de-aging filter here! Credit: Bethesda / MachineGames
Historically, games based on popular film or TV franchises have generally been seen as cheap cash-ins, slapping familiar characters and settings on a shovelware clone of a popular genre and counting on the license to sell enough copies to devoted fans. Indiana Jones and the Great Circle clearly has grander ambitions than that, putting a AAA budget behind a unique open-world exploration game built around stealth, melee combat, and puzzle solving.
Building such a game on top of such well-loved source material comes with plenty of challenges. The developers at MachineGames need to pay homage to the source material without resorting to the kind of slavish devotion that amounts to a mere retread of a familiar story. At the same time, any new Indy adventure carries with it the weight not just of the character’s many film and TV appearances but also well-remembered games like Indiana Jones and the Fate of Atlantis. Then there are game franchises like Tomb Raider and Uncharted, which have already put their own significant stamps on the Indiana Jones formula of action-packed, devil-may-care treasure-hunting.
No, this is not a scene from a new Uncharted game. Credit: Bethesda / MachineGames
Surprisingly, Indiana Jones and the Great Circle bears all this pressure pretty well. While the stealth-exploration gameplay and simplistic puzzles can feel a bit trite at points, the game’s excellent presentation, top-notch world-building, and fun-filled, campy storyline drive one of Indy’s most memorable adventures since the original movie trilogy.
A fun-filled adventure
The year is 1937, and Indiana Jones has already Raided a Lost Ark but has yet to investigate the Last Crusade. After a short introductory flashback that retells an interactive version of Raiders of the Lost Ark‘s famous golden idol extraction, Professor Jones gets unexpectedly drawn away from preparations for midterms when a giant of a man breaks into Marshall College’s antiquities wing and steals a lone mummified cat.
Investigating that theft takes Jones on a globetrotting tour of locations along “The Great Circle,” a ring of archaeologically significant sites around the world that house ancient artifacts rumored to hold great and mysterious power. Those rumors have attracted the attention of the Nazis (who else would you expect?), dragging Indy into a race to secure the artifacts before they threaten to alter the course of an impending world war.
You see a whip, I see a grappling hook. Credit: Bethesda / MachineGames
The game’s overarching narrative—told mainly through lengthy cut scenes that serve as the most captivating reward for in-game achievements—does a pitch-perfect job of replicating the campy, madcap, fun-filled, adventurous tone Indy is known for. The writing is full of all the pithy one-liners and cheesy puns you could hope for, as well as countless overt and subtle references to Indy movie moments that will be familiar to even casual fans.
Indy here is his usual mix of archaeological superhero and bumbling everyman. One moment, he’s using his whip and some hard-to-believe upper body strength to jump around some quickly crumbling ruins. The next, he’s avoiding death during a madcap fight scene through a combination of sheer dumb luck and overconfident opposition. The next, he’s solving ancient riddles with reams of historical techno-babble and showing a downright supernatural ability to decipher long-dead languages in an instant when the plot demands it.
You have to admit it, this circle is pretty great! Credit: Bethesda / MachineGames
It all works in large part thanks to Troy Baker’s excellent vocal performance as Jones, which he somehow pulls off as a compelling cross between Harrison Ford and Jeff Goldblum. The music does some heavy lifting in setting the tone, too; it’s full of downright cinematic stirring horns and tension-packed strings that fade in and out perfectly in sync with the on-screen action. The game even shows some great restraint in its sparing use of the famous Indiana Jones theme, which I ended up humming to myself as I played more often than I actually heard it referenced in the game’s score.
Indy quips well off of Gina, a roving reporter searching for her missing sister who serves as the obligatory love interest/globetrotting exploration partner. But the game’s best scenes all involve Emmerich Voss, the Nazi archaeologist antagonist who makes an absolute meal out of his scenery chewing. From his obsession with cranial shapes to his preening diatribes about the inferiority of American culture, Voss makes the perfect foil for Indy’s no-nonsense, homespun apple pie forthrightness.
Voss steals literally every scene he’s in. Credit: Bethesda / MachineGames
By the time the plot descends into an inevitable mess of pseudo-religious magical mysticism, it’s clear that this is a story that doesn’t take itself too seriously. You may cringe a bit at how over the top it all gets, but you’ll probably be having too much fun to care.
Take a look around
In between the cut scenes—which together could form the basis for a strong Indiana Jones-themed episodic streaming miniseries—there’s an actual interactive game to play here as well. That game primarily plays out across three decently sized maps—one urban, one desert, and one water-logged marsh—that you can explore relatively freely, broken up by shorter, more linear interludes in between.
Following the main story quests in each of these locales generally has you zigzagging across the map through a series of glorified fetch quests. Go to location A to collect some mystical doodad, then return it to unlock some fun exposition and a reason to go to location B. Repeat as necessary.
I say “location A” there, but it’s usually more accurate to say the game points you toward “circle A” on the map. Once you get there, you often have to do a bit of unguided exploring to find the hidden trinket or secret entry point you need.
Am I going in the right direction? Credit: Bethesda / MachineGames
At their best, these exploration bits made me feel more like an archaeological detective than the usual in-game tourist blindly following a waypoint from location to location. At their worst, I spent 15 minutes searching through one of these map circles before finding my in-game partner Gina standing right next to the target I was probably intended to find immediately. So it goes.
Traipsing across the map in this way slowly reveals the sizable scale of the game’s environments, which often extend beyond what’s first apparent on the map to multi-floor buildings and gigantic subterranean caverns. Unlocking and/or figuring out all of the best paths through these labyrinthine locales—which can involve climbing across rooftops or crawling through enemy barracks—is often half the fun.
As you crisscross the map, you also invariably stumble on a seemingly endless array of optional sidequests, mysteries, and “fieldwork,” which you keep track of in a dynamically updated journal. While there’s an attempt at a plot justification for each of these optional fetch quests, the ones I tried ended up being much less compelling than the main plot, which seems to have taken most of the writers’ attention.
Indiana Jones, famous Vatican tourist. Credit: Bethesda / MachineGames
As you explore, a tiny icon in the corner of the screen will also alert you to photo opportunities, which can unlock important bits of lore or context for puzzles. I thoroughly enjoyed these quick excuses to appreciate the game’s well-designed architecture and environments, even as it made Indy feel a bit more like a random tourist than a badass archaeologist hero.
Quick, hide!
Unfortunately, your ability to freely explore The Great Circle‘s environments is often hampered by large groups of roaming Nazi and/or fascist soldiers. Sometimes, you can put on a disguise to walk among them unseen, but even then, certain enemies can pick you out of the crowd, something that was not clear to me until I had already been plucked out of obscurity more than a few times.
When undisguised, you’ll spend a lot of time kneeling and sneaking silently just outside the soldiers’ vision cones or patiently waiting for them to move so you can slip through a newly safe path. Remaining unseen also lets you silently take out enemies from behind, which includes pushing unsuspecting enemy sentries off of ledges in a hilarious move that never, ever gets old.
They’ll never find me up here. Credit: Bethesda / MachineGames
When your sneaking skills fail you amid a large group of enemies, the best and easiest thing to do is immediately run and hide. For the most part, the enemies are incredibly inept in their inevitable pursuit; dodge around a couple of corners and hide in a dark alley and they’ll usually quickly lose track of you. While I appreciated that being spotted wasn’t an instant death sentence, the ease with which I could outsmart these soldiers made the sneaking a lot less tense.
If you get spotted by just one or two enemy soldiers, though, it’s time for some first-person melee combat, which draws heavy inspiration from the developers’ previous work on the early ’00s Chronicles of Riddick games. These fights usually play out like the world’s most overdesigned game of Punch-Out!!—you stand there waiting for a heavily telegraphed punch to come in, at which point you throw up a quick block or dodge and then counter with a series of rapid, crunchy punches of your own. Repeat until the enemy goes down.
You can spice things up a bit here by disarming and/or unbalancing your foes with your whip or by grabbing a wide variety of nearby objects to use as improvised melee weapons. After a while, though, all the fistfights start to feel pretty rote and unmemorable. The first time you hit a Nazi upside the head with a plunger is hilarious. The fifth time is a bit tiresome.
It’s always a good time to punch a Nazi. Credit: Bethesda / MachineGames
While you can also pull out a trusty revolver to simply shoot your foes, the racket the shots make usually leads to so much unwelcome enemy attention that it’s rarely worth the trouble. Aside from a handful of obligatory sections where the game practically forces you into a shooting gallery situation, I found little need to engage in the serviceable but unexciting gun combat.
And while The Great Circle is far from a horror game, there are a few combat moments of genuine terror with foes more formidable than the average grunt. I don’t want to give away too much, but those with fear of underwater creatures, the dark, or confined spaces will find some parts of the game incredibly tense.
Not so puzzling
My favorite gameplay moments in The Great Circle were the extended sections where I didn’t have to worry about stealth or combat and could just focus on exploring massive underground ruins. These feature some of the game’s most interesting traversal challenges, where looking around and figuring out just how to make it to the next objective is engaging on its own terms. There’s little of the Uncharted-style gameplay of practically highlighting every handhold and jump with a flashing red sign.
When giant mechanical gears need placing, you know who to call! Credit: Bethesda / MachineGames
These exploratory bits are broken up by some obligatory puzzles, usually involving Indiana Jones’ trademark of unbelievably intricate ancient stone machinery. Arrange the giant stone gears so the door opens, put the right relic in the right spot, shine a light on some emblems with a few mirrors, and so on. You know the drill if you’ve played any number of similar action-adventure games, and you probably won’t be all that challenged if you can manage some basic logic and exploration (though snapping pictures with the in-game camera offers hints for those who get unexpectedly stuck).
But even during the least engaging puzzles or humdrum fights in The Great Circle, I was compelled forward by the promise of some intricate ruin or pithy cut scene quip to come. Like the best Indiana Jones movies, there’s a propulsive force to the game’s most exciting scenes that helps you push past any brief feelings of tedium in between. Here’s hoping we see a lot more of this version of Indiana Jones in the future.
A note on performance
Indiana Jones and the Great Circle has received some recent negative attention for having relatively beefy system requirements, including calling for GPUs that have some form of real-time ray-tracing acceleration. We tested the game on a system with an Nvidia RTX 2080 Ti and an Intel i7-8700K CPU with 32 GB of RAM, which puts it roughly between the “minimum” and “recommended” specs suggested by the publisher.
Trace those rays. Credit: Bethesda / MachineGames
Despite this, we were able to run the game at 1440p resolution and “High” graphical settings at a steady 60 fps throughout. The game did occasionally suffer some heavy frame stuttering when loading new scenes, and far-off background elements had a tendency to noticeably “pop in” when running, but otherwise, we had few complaints about the graphical performance.
Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
Expect significant changes for America’s space agency.
Jared Isaacman at SpaceX Headquarters in Hawthorne, California. Credit: SpaceX
President-elect Donald Trump announced Wednesday his intent to nominate entrepreneur and commercial astronaut Jared Isaacman as the next administrator of NASA.
For those unfamiliar with Isaacman, who at just 16 years old founded a payment processing company in his parents’ basement that ultimately became a major player in online payments, the nomination may seem an odd choice. However, those inside the space community welcomed the news, with figures across the political spectrum hailing Isaacman’s nomination variously as “terrific,” “ideal,” and “inspiring.”
This statement from Isaac Arthur, president of the National Space Society, is characteristic of the response: “Jared is a remarkable individual and a perfect pick for NASA Administrator. He brings a wealth of experience in entrepreneurial enterprise as well as unique knowledge in working with both NASA and SpaceX, a perfect combination as we enter a new era of increased cooperation between NASA and commercial spaceflight.”
So who is Jared Isaacman? Why is his nomination being welcomed in most quarters of the spaceflight community? And how might he shake up NASA? Read on.
Meet Jared
Isaacman is now 41 years old, about half the age of current NASA Administrator Bill Nelson. He has founded a couple of companies, including the publicly traded Shift4 (look at the number 4 on a keyboard to understand the meaning of the name), as well as Draken International, a company that trained pilots for the US Air Force.
Throughout his career, Isaacman has shown a passion for flying and adventure. About five years ago, he decided he wanted to fly into space and bought the first commercial mission on a SpaceX Dragon spacecraft. But this was no joy ride. Some of his friends assumed Isaacman would invite them along. Instead, he brought a cancer survivor, a science educator, and a raffle winner. As part of the flight, this Inspiration4 mission raised hundreds of millions of dollars for research into childhood cancer.
After this mission, Isaacman set about a more ambitious project he named Polaris. The nominal plan was to fly two additional missions on Dragon and then become the first person to fly on SpaceX’s Starship. He flew the first of these missions, Polaris Dawn, in September. He brought along a pilot, Scott “Kidd” Poteet, and two SpaceX engineers, Anna Menon and Sarah Gillis. They were the first SpaceX employees to ever fly into orbit.
The mission was characteristic of Isaacman’s goal to expand the horizon of what is possible for humans in space. Polaris Dawn flew to an altitude of 1,408.1 km on the first day, the highest Earth-orbit mission ever flown and the farthest humans have traveled from our planet since Apollo. On the third day of the flight, the four crew members donned spacesuits designed and developed by SpaceX within the last two years. After venting the cabin’s atmosphere into space, first Isaacman and then Gillis spent several minutes extending their bodies out of the Dragon spacecraft.
This was the first private spacewalk in history, and it underscored Isaacman’s commitment to accelerating spaceflight’s transition from something rare and government-driven to something more publicly accessible.
Why does the space community welcome him?
In the last five years, Isaacman has impressed most of those within the spaceflight community he has interacted with. He has taken his responsibilities seriously, training hard for his Dragon missions and using NASA facilities such as a pressure chamber at NASA’s Johnson Space Center when appropriate.
Through these interactions—based upon my interviews with many people—Isaacman has demonstrated that he is not a billionaire seeking a joyride but someone who wants to change spaceflight for the better. In his spaceflights, he has also demonstrated himself to be a thoughtful and careful leader.
Two examples illustrate this. The ride to space aboard a Crew Dragon vehicle is dynamic, with the passengers pulling in excess of 3 Gs during the initial ascent, the abrupt cutoff of the main Falcon 9 rocket’s engines, stage separation, and then the grinding thrust of the upper stage engines just behind the capsule. In interviews, each of the Polaris Dawn crew members remarked about how Isaacman calmly called out these milestones in advance, with a few words about what to expect. It had a calming, reassuring effect and demonstrated that his crew’s health and safety were foremost among his concerns.
Another way in which Isaacman shows care for his crew and families is through an annual event called “Fighter Jet Training.” Cognizant of the time crew members spend away from their families training, he invites them and SpaceX employees who have supported his flights to an airstrip in Montana. Over the course of two days, family members get to ride in jets, go on a zero-gravity flight, and participate in other fun activities to get a taste of what flying on the edge is like. Isaacman underwrites all of this as a way of thanking all who are helping him.
The bottom line is that Isaacman, through his actions and words, appears to be a caring person who wants the US spaceflight enterprise to advance to greater heights.
Why would Isaacman want the job?
So why would a billionaire who has been to space twice (and plans to go at least two more times) want to run a federal agency? I have not asked Isaacman this question directly, but in interviews over the years, he has made it clear that he is passionate about spaceflight and views his role as a facilitator desiring to move things forward.
Most likely, he has accepted the job because he wants to modernize NASA and put the space agency in the best position to succeed in the future. NASA is no longer the youthful agency that took the United States to the Moon during the Apollo program. That was more than half a century ago, and while NASA is still capable of great things, it is living with one foot in the past and beholden to large, traditional contractors.
The space agency has a budget of about $25 billion, and no one could credibly argue that all of those dollars are spent efficiently. Several major programs at NASA were created by Congress with the intent of ensuring maximum dollars flowed to certain states and districts. It seems likely that Isaacman and the Trump administration will take a whack at some of these sacred cows.
High on the list is the Space Launch System rocket, which Congress created more than a dozen years ago. The rocket, and its ground systems, have been a testament to the waste inherent in large government programs funded by cost-plus contracts. NASA’s current administrator, Nelson, had a hand in creating the SLS rocket. Even he has decried the effect of this type of contracting as a “plague” on the space agency.
Currently, NASA plans to use the SLS rocket as the means of launching four astronauts inside the Orion spacecraft to lunar orbit. There, they will rendezvous with SpaceX’s Starship vehicle, go down to the Moon for a few days, and then come back to Orion. The spacecraft will then return to Earth.
So long, SLS?
Multiple sources have told Ars that the SLS rocket—which has long had staunch backing from Congress—is now on the chopping block. No final decisions have been made, but a tentative deal is in place with lawmakers to end the rocket in exchange for moving US Space Command to Huntsville, Alabama.
So how would NASA astronauts get to the Moon without the SLS rocket? Nothing is final, and the trade space is open. One possible scenario being discussed for future Artemis missions is to launch the Orion spacecraft on a New Glenn rocket into low-Earth orbit. There, it could dock with a Centaur upper stage that would launch on a Vulcan rocket. This Centaur stage would then boost Orion toward lunar orbit.
NASA’s Space Launch System rocket is seen on the launch pad at Kennedy Space Center in April 2022. Credit: Trevor Mahlmann
Such a scenario is elegant because it uses rockets that would cost a fraction of the SLS and also includes all key contractors currently involved in the Artemis program, with the exception of Boeing, which would lose out financially. (Northrop Grumman will still make solid rocket boosters for Vulcan, and Aerojet Rocketdyne will make the RL-10 upper-stage engines for that rocket.)
As part of the Artemis program, NASA is competing with China to not only launch astronauts to the south pole of the Moon but also to develop a sustainable base of operations there. While there is considerable interest in Mars, sources told Ars that the focus of the space agency is likely to remain on a program that goes to the Moon first and then develops plans for Mars.
This competition is not one between Elon Musk, who founded SpaceX, and Jeff Bezos, who founded Blue Origin. Rather, they are both seen as players on the US team. The Trump administration seems to view entrepreneurial spirit as the key advantage the United States holds in its competition with China. This op-ed in Space News offers a good overview of this sentiment.
So whither NASA? Under the Trump administration, NASA’s role is likely to focus on stimulating the efforts by commercial space entrepreneurs. Isaacman’s marching orders for NASA will almost certainly be two words: results and speed. NASA, they believe, should transition to become more like its roots in the National Advisory Committee for Aeronautics, which undertook, promoted, and institutionalized aeronautical research—but now for space.
It is not easy to turn a big bureaucracy, and there will undoubtedly be friction and pain points. But the opportunity here is enticing: NASA should not be competing with things that private industry is already doing better, such as launching big rockets. Rather, it should find difficult research and development projects at the edge of the possible. This will certainly be Isaacman’s most challenging mission yet.
Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.
One year ago, I didn’t know how to bake bread. I just knew how to follow a recipe.
If everything went perfectly, I could turn out something plain but palatable. But should anything change—temperature, timing, flour, Mercury being in Scorpio—I’d turn out a partly poofy pancake. I presented my partly poofy pancakes to people, and they were polite, but those platters were not particularly palatable.
During a group vacation last year, a friend made fresh sourdough loaves every day, and we devoured them. He gladly shared his knowledge, his starter, and his go-to recipe. I took it home, tried it out, and made a naturally leavened, artisanal pancake.
I took my confusion to YouTube, where I found Hendrik Kleinwächter’s “The Bread Code” channel and his video promising a course on “Your First Sourdough Bread.” I watched and learned a lot, but I couldn’t quite translate 30 minutes of intensive couch time to hours of mixing, raising, slicing, and baking. Pancakes, part three.
It felt like there had to be more to this. And there was—a whole GitHub repository more.
The Bread Code gave Kleinwächter a gratifying second career, and it’s given me bread I’m eager to serve people. This week alone, I’m making sourdough Parker House rolls, a rosemary olive loaf for Friendsgiving, and then a za’atar flatbread and standard wheat loaf for actual Thanksgiving. And each of us has learned more about perhaps the most important aspect of coding, bread, teaching, and lots of other things: patience.
Hendrik Kleinwächter on his Bread Code channel, explaining his book.
Resources, not recipes
The Bread Code is centered around a book, The Sourdough Framework. It’s an open source project whose LaTeX source compiles into new book editions, and it’s free to read online. It has one real bread loaf recipe, if you can call a 68-page middle-section journey a recipe. It has 17 flowcharts, 15 tables, and dozens of timelines, process illustrations, and photos of sourdough going both well and terribly. Like any cookbook, there’s a bit about Kleinwächter’s history with this food, and some sourdough bread history. Then the reader is dropped straight into “How Sourdough Works,” which is in no way a summary.
“To understand the many enzymatic reactions that take place when flour and water are mixed, we must first understand seeds and their role in the lifecycle of wheat and other grains,” Kleinwächter writes. From there, we follow a seed through hibernation, germination, photosynthesis, and, through humans’ grinding of these seeds, exposure to amylase and protease enzymes.
I had arrived at this book with these specific loaf problems to address. But first, it asks me to consider, “What is wheat?” This sparked vivid memories of Computer Science 114, in which a professor, asked to troubleshoot misbehaving code, would instead tell students to “Think like a compiler,” or “Consider the recursive way to do it.”
And yet, “What is wheat” did help. Having a sense of what was happening inside my starter, and my dough (which is really just a big, slow starter), helped me diagnose what was going right or wrong with my breads. Extra-sticky dough and tightly arrayed holes in the bread meant I had let the bacteria win out over the yeast. I learned when to be rough with the dough to form gluten and when to gently guide it into shape to preserve its gas-filled form.
I could eat a slice of each loaf and get a sense of how things had gone. The inputs, outputs, and errors could be ascertained and analyzed more easily than in my prior stance, which was, roughly, “This starter is cursed and so am I.” Using hydration percentages, measurements relative to protein content, a few tests, and troubleshooting steps, I could move closer to fresh, delicious bread. Framework: accomplished.
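That framing invites a bit of literal code. Here is a minimal sketch of the baker's-math idea the book leans on, expressing each ingredient as a percentage of total flour weight; the numbers are illustrative ballparks, not Kleinwächter's recipe:

# Baker's math: every ingredient as a percentage of total flour weight.
# (This simplification ignores the flour and water already inside the starter.)
def bakers_percentages(flour_g, water_g, salt_g, starter_g):
    return {
        "hydration": 100 * water_g / flour_g,      # water relative to flour
        "salt": 100 * salt_g / flour_g,            # commonly around 2 percent
        "inoculation": 100 * starter_g / flour_g,  # lower means a slower, more forgiving ferment
    }

# A common country-loaf ballpark:
print(bakers_percentages(flour_g=500, water_g=375, salt_g=10, starter_g=100))
# {'hydration': 75.0, 'salt': 2.0, 'inoculation': 20.0}

Expressed this way, a recipe scales to any batch size, and you can change one percentage at a time and compare results, much like debugging.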
I have found myself very grateful lately that Kleinwächter did not find success with 30-minute YouTube tutorials. Strangely, so has he.
Sometimes weird scoring looks pretty neat. Credit: Kevin Purdy
The slow bread of childhood dreams
“I have had some successful startups; I have also had disastrous startups,” Kleinwächter said in an interview. “I have made some money, then I’ve been poor again. I’ve done so many things.”
Most of those things involve software. Kleinwächter is a German full-stack engineer, and he has founded firms and worked at companies related to blogging, e-commerce, food ordering, travel, and health. He tried to escape the boom-bust startup cycle by starting his own digital agency before one of his products was acquired by hotel booking firm Trivago. After that, he needed a break—and he could afford to take one.
“I went to Naples, worked there in a pizzeria for a week, and just figured out, ‘What do I want to do with my life?’ And I found my passion. My passion is to teach people how to make amazing bread and pizza at home,” Kleinwächter said.
Kleinwächter’s formative bread experiences—weekend loaves baked by his mother, awe-inspiring pizza from Italian ski towns, discovering all the extra ingredients in a supermarket’s version of the dark Schwarzbrot—made him want to bake his own. Like me, he started with recipes, and he wasted a lot of time and flour turning out stuff that produced both failures and a drive for knowledge. He dug in, learned as much as he could, and once he had his head around the how and why, he worked on a way to guide others along the path.
Bugs and syntax errors in baking
When using recipes, there’s a strong, societally reinforced idea that there is one best, tested, and timed way to arrive at a finished food. That’s why we have America’s Test Kitchen, The Food Lab, and all manner of blogs and videos promoting food “hacks.” I should know; I wrote up a whole bunch of them as a young Lifehacker writer. I’m still a fan of such things, from the standpoint of simply getting food done.
As such, the ultimate “hack” for making bread is to use commercial yeast, i.e., dried “active” or “instant” yeast. A manufacturer has done the work of selecting and isolating yeast at its prime state and preserving it for you. Get your liquids and dough to a yeast-friendly temperature and you’ve removed most of the variables; your success should be repeatable. If you just want bread, you can make the iconic no-knead bread with prepared yeast and very little intervention, and you’ll probably get bread that’s better than you can get at the grocery store.
Baking sourdough—or “naturally leavened,” or with “levain”—means a lot of intervention. You are cultivating and maintaining a small ecosystem of yeast and bacteria, unleashing them onto flour, water, and salt, and stepping in after they’ve produced enough flavor and lift—but before they eat all the stretchy gluten bonds. What that looks like depends on many things: your water, your flours, what you fed your starter, how active it was when you added it, the air in your home, and other variables. Most important is your ability to notice things over long periods of time.
When things go wrong, debugging can be tricky. I was able to personally ask Kleinwächter what was up with my bread, because I was interviewing him for this article. There were many potential answers, including:
I should recognize, first off, that I was trying to bake the hardest kind of bread: freestanding wheat-based sourdough
You have to watch—and smell—your starter to make sure it has the right mix of yeast to bacteria before you use it
Using less starter (lower “inoculation”) would make it easier not to over-ferment
Eyeballing my dough rise in a bowl was hard; try measuring a sample in something like an aliquot tube
Winter and summer are very different dough timings, even with modern indoor climate control
But I kept at it. I was particularly susceptible to wanting things to go quicker and demanding to see a huge rise in my dough before baking. This ironically leads to the flattest results, as the bacteria eat all the gluten bonds. When I slowed down, changed just one thing at a time, and looked deeper into my results, I got better.
The Bread Code YouTube page and the ways in which one must cater to algorithms. Credit: The Bread Code
YouTube faces and TikTok sausage
Emailing and trading video responses with Kleinwächter, I got the sense that he, too, has learned to go the slow, steady route with his Bread Code project.
For a while, he was turning out YouTube videos, and he wanted them to work. “I’m very data-driven and very analytical. I always read the video metrics, and I try to optimize my videos,” Kleinwächter said. “Which means I have to use a clickbait title, and I have to use a clickbait-y thumbnail, plus I need to make sure that I catch people in the first 30 seconds of the video.” This, however, is “not good for us as humans because it leads to more and more extreme content.”
Kleinwächter also dabbled in TikTok, making videos in which, leaning into his German heritage, “the idea was to turn everything into a sausage.” The metrics and imperatives on TikTok were similar to those on YouTube but hyperscaled. He could put hours or days into a video, only for 1 percent of his 200,000 YouTube subscribers to see it unless he caught the algorithm wind.
The frustrations inspired him to slow down and focus on his site and his book. With his community’s help, The Bread Code has just finished its second Kickstarter-backed printing run of 2,000 copies. There’s a Discord full of bread heads eager to diagnose and correct each other’s loaves and occasional pull requests from inspired readers. Kleinwächter has seen people go from buying what he calls “Turbo bread” at the store to making their own, and that’s what keeps him going. He’s not gambling on an attention-getting hit, but he’s in better control of how his knowledge and message get out.
“I think homemade bread is something that’s super, super undervalued, and I see a lot of benefits to making it yourself,” Kleinwächter said. “Good bread just contains flour, water, and salt—nothing else.”
A test loaf of rosemary olive sourdough bread. An uneven amount of olive bits ended up on the top and bottom, because there is always more to learn. Credit: Kevin Purdy
You gotta keep doing it—that’s the hard part
I can’t say it has been entirely smooth sailing ever since I self-certified with The Bread Code framework. I know what level of fermentation I’m aiming for, but I sometimes get home from an outing later than planned, arriving at dough that’s trying to escape its bucket. My starter can be very temperamental when my house gets dry and chilly in the winter. And my dough slicing (scoring), being the very last step before baking, can be rushed, resulting in some loaves with weird “ears,” not quite ready for the bakery window.
But that’s all part of it. Your sourdough starter is a collection of organisms that are best suited to what you’ve fed them, developed over time, shaped by their environment. There are some modern hacks that can help make good bread, like using a pH meter. But the big hack is just doing it, learning from it, and getting better at figuring out what’s going on. I’m thankful that folks like Kleinwächter are out there encouraging folks like me to slow down, hack less, and learn more.
When MagSafe was introduced, it promised an accessories revolution. Meh.
Apple’s current lineup of MagSafe accessories. Credit: Samuel Axon
When Apple introduced what it currently calls MagSafe in 2020, its marketing messaging suggested that the magnetic attachment standard for the iPhone would produce a boom in innovation in accessories, making things possible that simply weren’t before.
Four years later, that hasn’t really happened—either from third-party accessory makers or Apple’s own lineup of branded MagSafe products.
Instead, we have a lineup of accessories that matches pretty much what was available at launch in 2020: chargers, cases, and just a couple more unusual applications.
With the launch of the iPhone 16 just behind us and the holidays just in front of us, a bunch of people are moving to phones that support MagSafe for the first time. Apple loves an upsell, so it offers some first-party MagSafe accessories—some useful, some not worth the cash, given the premiums it sometimes charges.
Given all that, it’s a good time to check in and quickly point out which (if any) of these first-party MagSafe accessories might be worth grabbing alongside that new iPhone and which ones you should skip in favor of third-party offerings.
Cases with MagSafe
Look, we could write thousands of words about the variety of iPhone cases available, or even just about those that support MagSafe to some degree or another—and we still wouldn’t really scratch the surface. (Unless that surface was made with Apple’s leather-replacement FineWoven material—hey-o!)
It’s safe to say there’s a third-party case for every need and every type of person out there. If you want one that meets your exact needs, you’ll be able to find it. Just know that cases labeled as MagSafe-ready will allow charging through the case and will let the magnets align correctly between a MagSafe charger and an iPhone—that’s really the whole point of the “MagSafe” name.
But if you prefer to stick with Apple’s own cases, there are currently two options: the clear cases and the silicone cases.
The clear case is definitely the superior of Apple’s two first-party MagSafe cases. Credit: Samuel Axon
The clear cases actually have a circle where the edges of the MagSafe magnets are, which is pretty nice for getting the magnets to snap without any futzing—though it’s really not necessary, since, well, magnets attract. They have a firm plastic shell that is likely to do a good job of protecting your phone when you drop it.
The silicone case is… fine. Frankly, it’s ludicrously priced for what it is. It offers no advantages over a plethora of third-party cases that cost exactly half as much.
Recommendation: The clear case has its advantages, but the silicone case is awfully expensive for what it is. Generally, third party is the way to go. There are lots of third-party cases from manufacturers who got licensed by Apple, and you can generally trust those will work with wireless charging just fine. That was the whole point of the MagSafe branding, after all.
The MagSafe charger
At $39 or $49 (depending on length, one meter or two), these charging cables are pretty pricey. But they’re also highly durable, relatively efficient, and super easy to use. Still, in most cases, you might as well just use any old USB-C cable.
There are some situations where you might prefer this option, though—for example, if you prop your iPhone up against your bedside lamp like a nightstand clock, or if you (like me) fall asleep listening to audiobooks on wired earbuds plugged into the USB-C port but want to make sure the phone is still charging.
The MagSafe charger for the iPhone. Credit: Samuel Axon
So the answer on Apple’s MagSafe charger is that it’s pretty specialized, but it’s arguably the best option for those who have some specific reason not to just use USB-C.
Recommendation: Just use a USB-C cable, unless you have a specific reason to go this route—shoutout to my fellow individuals who listen to audiobooks while falling asleep but need headphones so as not to keep their spouse awake but prefer wired earbuds that use the USB-C port over AirPods to avoid losing AirPods in the bed covers. I’m sure there are dozens of us! If you do go this route, Apple’s own cable is the safest pick.
Apple’s FineWoven Wallet with MagSafe
While I’d long known people with dense wallet cases for their iPhones, I was excited about Apple’s leather (and later FineWoven) wallet with MagSafe when it was announced. I felt the wallet cases I’d seen were way too bulky, making the phone less pleasant to use.
The problem is that the “durable microtwill” material that Apple went with instead of leather is prone to scratching, as many owners have complained. That’s a bit frustrating for something that costs nearly $60.
The MagSafe wallet has too many limitations to be worthwhile for most people. Credit: Samuel Axon
The wallet also only holds a few cards, and putting cards here means you probably can’t or at least shouldn’t try to use wireless charging, because the cards would be between the charger and the phone. Apple itself warns against doing this.
For those reasons, skip the FineWoven Wallet. There are lots of better-designed iPhone wallet cases out there, even though they might not be so minimalistic.
Recommendation: Skip this one. It’s a great idea in theory, but in practice and execution, it just doesn’t deliver. There are zillions of great wallet cases out there if you don’t mind a bit of bulk—just know you’ll have some wireless charging issues with many cases.
Other categories offered by third parties
Frankly, a lot of the more interesting applications of MagSafe for the iPhone are only available through third parties.
There are monitor mounts for using the iPhone as a webcam with Macs; bedside table stands for charging the phone while it acts as a smart display; magnetic phone stands for car dashboards that let you use GPS while you drive; magnetic power banks and portable batteries that snap onto the back of the phone; and of course, multi-device chargers similar to the infamously canceled AirPower charging pad Apple had planned to release at one point. (I have the Belkin Boost Charge Pro 3-in-1 on my desk, and it works great.)
It’s not the revolution of new applications that some imagined when MagSafe was launched, but that’s not really a surprise. Still, there are some quality products out there. It’s both strange and a pity that Apple hasn’t made most of them itself.
No revolution here
Truthfully, MagSafe never seemed like it would be a huge smash. iPhones already supported Qi wireless charging before it came along, so the idea of magnets keeping the device aligned with the charger was always the main appeal—its existence potentially saved some users from ending up with chargers that didn’t quite work right with their phones, provided those users bought officially licensed MagSafe accessories.
Apple’s MagSafe accessories are often overpriced compared to alternatives from Belkin and other frequent partners. MagSafe seemed to do a better job bringing some standards to certain third-party products than it did bringing life to Apple’s offerings, and it certainly did not bring about a revolution of new accessory categories to the iPhone.
Still, it’s hard to blame anyone for choosing to go with Apple’s versions; the world of third-party accessories can be messy, and going the first-party route is generally a surefire way to know you’re not going to have many problems, even if the sticker’s a bit steep.
You could shop for third-party options, but sometimes you want a sure thing. With the possible exception of the FineWoven Wallet, all of these Apple-made MagSafe products are sure things.
Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.
Instead of using antennas, could we wire up trees in a forest to detect neutrinos? Credit: Claire Gillo/PhotoPlus Magazine/Future via Getty Images
Neutrinos are some of nature’s most elusive particles. One hundred trillion fly through your body every second, but each one has only a tiny chance of jostling one of your atoms, a consequence of the incredible weakness of the weak nuclear force that governs neutrino interactions. That tiny chance means that reliably detecting neutrinos takes many more atoms than are in your body. To spot neutrinos colliding with atoms in the atmosphere, experiments have buried 1,000 tons of heavy water, woven cameras through a cubic kilometer of Antarctic ice, and planned to deploy 200,000 antennas.
In a field full of ambitious plans, a recent proposal by Steven Prohira, an assistant professor at the University of Kansas, is especially strange. Prohira suggests that instead of using antennas, we could detect the tell-tale signs of atmospheric neutrinos by wiring up a forest of trees. His suggestion may turn out to be impossible, but it could also be an important breakthrough. To find out which it is, he’ll need to walk a long path, refining prototypes and demonstrating his idea’s merits.
Prohira’s goal is to detect so-called ultra-high-energy neutrinos. Each one of these tiny particles carries more than fifty million times the energy released by uranium during nuclear fission. Their origins are not fully understood, but they are expected to be produced by some of the most powerful events in the Universe, from collapsing stars and pulsars to the volatile environments around the massive black holes at the centers of galaxies. If we could detect these particles more reliably, we could learn more about these extreme astronomical events.
Other experiments, like a project called GRAND, plan to build antennas to detect these neutrinos, watching for radio signals that come from their reactions with our atmosphere. Finding sites for all of those antennas can be a challenge, though. Motivated by that problem, Prohira dug up old studies by the US Army that suggested an alternative: instead of antennas, use trees. By wrapping a wire around each tree, Army researchers found that trees were sensitive to radio waves, which they hoped to use to receive radio signals in the jungle. Prohira argues that the same trick could be useful for neutrino detection.
Crackpot or legit science?
People suggest wacky ideas every day. Should we trust this one?
At first, you might be a bit suspicious. Prohira’s paper is cautious on the science but extremely optimistic in other ways. He describes the proposal as a way to help conserve the Earth’s forests and even suggests that “a forest detector could also motivate the large-scale reforesting of land, to grow a neutrino detector for future generations.”
Prohira is not a crackpot, though. He has a track record of research in detecting neutrinos via radio waves in more conventional experiments, and he even received an $800,000 MacArthur genius grant a few years ago to support his work.
More generally, studying particles from outer space often demands audacious proposals, especially ones that make use of the natural world. Professor Albrecht Karle works on the IceCube experiment, an array of cameras that detect neutrinos whizzing through a cubic kilometer of Antarctic ice.
“In astroparticle physics, where we often cannot build the entire experiment in a laboratory, we have to resort to nature to help us, to provide an environment that can be used to build a detector. For example, in many parts of astroparticle physics, we are using the atmosphere as a medium, or the ocean, or the ice, or we go deep underground because we need a shield because we cannot construct an artificial shield. There are even ideas to go into space for extremely energetic neutrinos, to build detectors on Jupiter’s moon Europa.”
Such uses of nature are common in the field. India’s GRAPES experiments were designed to measure muons, but they have to filter out anything that’s not a muon to do so. As Professor Sunil Gupta of the Tata Institute explained, the best way to do that was with dirt from a nearby hill.
“The only way we know you can make a muon detector work is by filtering out other radiation […] so what we decided is that we’ll make a civil structure, and we’ll dump three meters of soil on top of that, so those three meters of soil could act as a filter,” he said.
The long road to an experiment
While Prohira’s idea isn’t ridiculous, it’s still just an idea (and one among many). Prohira’s paper describing it was uploaded to arXiv.org, a pre-print server, in January. Physicists use pre-print servers to share their work before it’s submitted to a scientific journal, which gives other physicists time to comment on the work and suggest revisions. In the meantime, the journal sends the work out to a few selected reviewers, who are asked to judge both whether the paper is likely to be correct and whether it is of sufficient interest to the community.
At this stage, reviewers may find problems with Prohira’s idea. These may take the form of actual mistakes, such as if he made an error in his estimates of the sensitivity of the detector. But reviewers can also ask for more detail. For example, they could request a more extensive analysis of possible errors in measurements caused by the different shapes and sizes of the trees.
If Prohira’s idea makes it through to publication, the next step toward building an actual forest detector would be convincing the larger community. This kind of legwork often takes place at conferences. The International Cosmic Ray Conference is the biggest stage for the astroparticle community, with conferences every two years—the next is scheduled for 2025 in Geneva. Other more specialized conferences, like ARENA, focus specifically on attempts to detect radio waves from high-energy neutrinos. These conferences can offer an opportunity to get other scientists on board and start building a team.
That team will be crucial for the next step: testing prototypes. No matter how good an idea sounds in theory, some problems only arise during a real experiment.
An early version of the GRAPES experiment detected muons by the light they emit while passing through tanks of water. To find out how much water was needed, the researchers ran tests: they placed one detector on top of a tank and another at the bottom, then tracked how often both triggered together as muons arrived randomly from the atmosphere, repeating the measurement for different heights of water. When they found that the tanks would have to be too tall to fit in their underground facility, they had to identify wavelength-shifting chemicals that would let them use shorter tanks, along with novel ways of dissolving those chemicals without eroding the tanks’ aluminum walls.
“When you try to do something, you run into all kinds of funny challenges,” said Gupta.
The IceCube experiment has a long history of prototypes going back to early concepts that were only distantly related to the final project. The earliest, like the proposed DUMAND project in Hawaii, planned to put detectors in the ocean rather than ice. BDUNT was an intermediate stage, a project that used the depths of Lake Baikal to detect atmospheric neutrinos. While the detectors were still in liquid water, the ability to drive on the lake’s frozen surface made BDUNT’s construction easier.
At a 1988 conference, Robert March, Francis Halzen, and John G. Learned envisioned a kind of “solid state DUMAND” that would use ice instead of water to detect neutrinos. While the idea was attractive, the researchers cautioned that it would require a fair bit of luck. “In summary, this is a detector that requires a number of happy accidents to make it feasible. But if these should come to pass, it may provide the least expensive route to a truly large neutrino telescope,” they said.
In the case of the AMANDA experiment, early tests in Greenland and later tests at the South Pole began to provide these happy accidents. “It was discovered that the ice was even more exceptionally clear and has no radioactivities—absolutely quiet, so it is the darkest and quietest and purest place on Earth,” said Karle.
AMANDA was much smaller than the IceCube experiment, and theorists had already argued that to see cosmic neutrinos, the experiment would need to cover a cubic kilometer of ice. Still, the original AMANDA experiment wasn’t just a prototype; if neutrinos arrived at a sufficient rate, it would spot some. In this sense, it was like the original LIGO experiment, which ran for many years in the early 2000s with only a minimal chance of detecting gravitational waves, but it provided the information needed to perform an upgrade in the 2010s that led to repeated detections. Similarly, the hope of pioneers like Halzen was that AMANDA would be able to detect cosmic neutrinos despite its prototype status.
“There was the chance that, with the knowledge at the time, one might get lucky. He certainly tried,” said Karle.
Prototype experiments often follow this pattern. They’re set up in the hope that they could discover something new about the Universe, but they’re built to at least discover any unexpected challenges that would stop a larger experiment.
Major facilities and the National Science Foundation
For experiments that don’t need huge amounts of funding, these prototypes can lead to the real thing, with scientists ratcheting up their ambition at each stage. But for the biggest experiments, the governments that provide the funding tend to want a clearer plan.
Since Prohira is based in the US, let’s consider the US government. The National Science Foundation has a procedure for its biggest projects, called the Major Research Equipment and Facilities Construction program. Since 2009, it has had a “no cost overrun” policy. In the past, if a project ended up costing more than expected, the NSF could try to find additional funding. Now, projects are supposed to estimate beforehand how much the cost could increase and budget extra for that risk. If the budget grows too much anyway, projects are expected to compensate by reducing their scope, shrinking the experiment until it fits within the budget again.
To make sure they can actually do this, the NSF has a thorough review process.
First, the NSF expects that the scientists proposing a project have done their homework and have already put time and money into prototyping the experiment. The general expectation is that about 20 percent of the experiment’s total budget should have been spent testing out the idea before the NSF even starts reviewing it.
With the prototypes tested and a team assembled, the scientists will get together to agree on a plan. This often means writing a report to hash out what they have in mind. The IceCube team is in the process of proposing a second generation of their experiment, an expansion that would cover more ice with detectors and achieve further scientific goals. The team recently finished the third part of a Technical Design Report, which details the technical case for the experiment.
After that, experiments go into the NSF’s official design process, which has three phases: conceptual design, preliminary design, and final design. Each phase ends with a review document summarizing the current state of the plans as they firm up, going from a general scientific case to a specific plan to put a specific experiment in a specific place. Risks are cataloged in detail, with estimates of how likely each one is and how much it would cost, a process that sometimes involves computer simulations. By the end, the project has a fully detailed plan, and construction can begin.
Over the next few years, Prohira will test out his proposal. He may get lucky, like the researchers who dug into Antarctic ice, and find a surprisingly clear signal. He may be unlucky instead and find that the complexities of trees, with their different spacings and scatterings of leaves, make the signals they generate unfit for neutrino science. He, and we, cannot know in advance which it will be.
SpaceX wasn’t able to catch the Super Heavy booster, but Starship is on the cusp of orbital flight.
The sixth flight of Starship lifts off from SpaceX’s Starbase launch site at Boca Chica Beach, Texas. Credit: SpaceX
SpaceX launched its sixth Starship rocket Tuesday, proving for the first time that the stainless steel ship can maneuver in space and paving the way for an even larger, upgraded vehicle slated to debut on the next test flight.
The only hiccup was an abortive attempt to catch the rocket’s Super Heavy booster back at the launch site in South Texas, something SpaceX achieved on the previous flight on October 13. The Starship upper stage flew halfway around the world, reaching an altitude of 118 miles (190 kilometers) before plunging through the atmosphere for a pinpoint slow-speed splashdown in the Indian Ocean.
The sixth flight of the world’s largest launcher—standing 398 feet (121.3 meters) tall—began with a lumbering liftoff from SpaceX’s Starbase facility near the US-Mexico border at 4 pm CST (22:00 UTC) Tuesday. The rocket headed east over the Gulf of Mexico, propelled by 33 Raptor engines clustered on the bottom of its Super Heavy first stage.
A few miles away, President-elect Donald Trump joined SpaceX founder Elon Musk to witness the launch. The SpaceX boss became one of Trump’s closest allies in this year’s presidential election, giving the world’s richest man extraordinary influence in US space policy. Sen. Ted Cruz (R-Texas) was there, too, among other lawmakers. Gen. Chance Saltzman, the top commander in the US Space Force, stood nearby, chatting with Trump and other VIPs.
Elon Musk, SpaceX’s CEO, President-elect Donald Trump, and Gen. Chance Saltzman of the US Space Force watch the sixth launch of Starship Tuesday. Credit: Brandon Bell/Getty Images
From their viewing platform, they watched Starship climb into a clear autumn sky. At full power, the 33 Raptors chugged more than 40,000 pounds of super-cold liquid methane and liquid oxygen per second. The engines generated 16.7 million pounds of thrust, 60 percent more than the Soviet N1, the second-largest rocket in history.
Eight minutes later, the rocket’s upper stage, itself also known as Starship, was in space, completing the program’s fourth straight near-flawless launch. The first two test flights faltered before reaching their planned trajectory.
A brief but crucial demo
As exciting as it was, we’ve seen all that before. One of the most important new things engineers wanted to test on this flight occurred about 38 minutes after liftoff.
That’s when Starship reignited one of its six Raptor engines for a brief burn to make a slight adjustment to its flight path. The burn lasted only a few seconds, and the impulse was small—just a 48 mph (77 km/hour) change in velocity, or delta-V—but it demonstrated that the ship can safely deorbit itself on future missions.
With this achievement, Starship will likely soon be cleared to travel into orbit around Earth and deploy Starlink Internet satellites or conduct in-space refueling experiments, two of the near-term objectives on SpaceX’s Starship development roadmap.
Launching Starlinks aboard Starship will allow SpaceX to expand the capacity and reach of its commercial consumer broadband network, which, in turn, provides revenue for Musk to reinvest into Starship. Orbital refueling enables Starship voyages beyond low-Earth orbit, fulfilling SpaceX’s multibillion-dollar contract with NASA to provide a human-rated Moon lander for the agency’s Artemis program. Likewise, transferring cryogenic propellants in orbit is a prerequisite for sending Starships to Mars, making real Musk’s dream of creating a settlement on the red planet.
Artist’s illustration of Starship on the surface of the Moon. Credit: SpaceX
Until now, SpaceX has intentionally launched Starships to speeds just shy of the blistering velocities needed to maintain orbit. Engineers wanted to test the Raptor’s ability to reignite in space on the third Starship test flight in March, but the ship lost control of its orientation, and SpaceX canceled the engine firing.
Before going for a full orbital flight, officials needed to confirm that Starship could steer itself back into the atmosphere for reentry, ensuring it wouldn’t present any risk to the public with an unguided descent over a populated area. After Tuesday, SpaceX can check this off its to-do list.
“Congrats to SpaceX on Starship’s sixth test flight,” NASA Administrator Bill Nelson posted on X. “Exciting to see the Raptor engine restart in space—major progress towards orbital flight. Starship’s success is Artemis’ success. Together, we will return humanity to the Moon & set our sights on Mars.”
While it lacks the pizzazz of a fiery launch or landing, the engine relight unlocks a new phase of Starship development. SpaceX has now proven that the rocket is capable of reaching space with a fair measure of reliability. Next, engineers will fine-tune how to reliably recover the booster and the ship and learn how to use them.
Acid test
SpaceX appears well on its way to doing this. While SpaceX didn’t catch the Super Heavy booster with the launch tower’s mechanical arms Tuesday, engineers have shown they can do it. The challenge of catching Starship itself back at the launch pad is more daunting. The ship starts its reentry thousands of miles from Starbase, traveling approximately 17,000 mph (27,000 km/hour), and must thread the gap between the tower’s catch arms within a matter of inches.
The good news is that SpaceX has now twice proven it can bring Starship back to a precision splashdown in the Indian Ocean. In October, the ship settled into the sea in darkness. SpaceX moved the launch time for Tuesday’s flight to the late afternoon, setting up for splashdown shortly after sunrise northwest of Australia.
The shift in time paid off with some stunning new visuals. Cameras mounted on the outside of Starship beamed dazzling live views back to SpaceX through the Starlink network, showing a now-familiar glow of plasma encasing the spacecraft as it plowed deeper into the atmosphere. But this time, daylight revealed the ship’s flaps moving to control its belly-first descent toward the ocean. After passing through a deck of low clouds, Starship reignited its Raptor engines and tilted from horizontal to vertical, making contact with the water tail-first within view of a floating buoy and a nearby aircraft in position to observe the moment.
Here’s a replay of the spacecraft’s splashdown around 65 minutes after launch.
“Splashdown confirmed! Congratulations to the entire SpaceX team on an exciting sixth flight test of Starship!”
The ship made it through reentry despite flying with a substandard heat shield. Starship’s thermal protection system is made up of thousands of ceramic tiles to protect the ship from temperatures as high as 2,600° Fahrenheit (1,430° Celsius).
Kate Tice, a SpaceX engineer hosting the company’s live broadcast of the mission, said teams at Starbase removed 2,100 heat shield tiles from Starship ahead of Tuesday’s launch. Their removal exposed wider swaths of the ship’s stainless steel skin to super-heated plasma, and SpaceX teams were eager to see how well the spacecraft held up during reentry. In the language of flight testing, this approach is called exploring the corners of the envelope, where engineers evaluate how a new airplane or rocket performs in extreme conditions.
“Don’t be surprised if we see some wackadoodle stuff happen here,” Tice said. There was nothing of the sort. One of the ship’s flaps appeared to suffer some heating damage, but it remained intact and functional, and the harm looked to be less substantial than damage seen on previous flights.
Many of the removed tiles came from the sides of Starship where SpaceX plans to place catch fittings on future vehicles. These are the hardware protuberances that will catch on the top side of the launch tower’s mechanical arms, similar to fittings used on the Super Heavy booster.
“The next flight, we want to better understand where we can install catch hardware, not necessarily to actually do the catch but to see how that hardware holds up in those spots,” Tice said. “Today’s flight will help inform ‘does the stainless steel hold up like we think it may, based on experiments that we conducted on Flight 5?'”
Musk wrote on his social media platform X that SpaceX could try to bring Starship back to Starbase for a catch on the eighth test flight, which is likely to occur in the first half of 2025.
“We will do one more ocean landing of the ship,” Musk said. “If that goes well, then SpaceX will attempt to catch the ship with the tower.”
The heat shield, Musk added, is a focal point of SpaceX’s attention. The delicate heat-absorbing tiles used on the belly of the space shuttle proved vexing to NASA technicians. Early in the shuttle’s development, NASA had trouble keeping tiles adhered to the shuttle’s aluminum skin. Each of the shuttle tiles was custom-machined to fit on a specific location on the orbiter, complicating refurbishment between flights. Starship’s tiles are all hexagonal in shape and agnostic to where technicians place them on the vehicle.
“The biggest technology challenge remaining for Starship is a fully & immediately reusable heat shield,” Musk wrote on X. “Being able to land the ship, refill propellant & launch right away with no refurbishment or laborious inspection. That is the acid test.”
This photo of the Starship vehicle for Flight 6, numbered Ship 31, shows exposed portions of the vehicle’s stainless steel skin after tile removal. Credit: SpaceX
There were no details available Tuesday night on what caused the Super Heavy booster to divert from its planned catch on the launch tower. After detaching from the Starship upper stage less than three minutes into the flight, the booster reversed course to begin the journey back to Starbase.
Then SpaceX’s flight director announced the rocket would fly itself into the Gulf rather than back to the launch site: “Booster offshore divert.”
The booster finished its descent with a seemingly perfect landing burn using a subset of its Raptor engines. As expected after the water landing, the booster—itself 233 feet (71 meters) tall—toppled and broke apart in a dramatic fireball visible to onshore spectators.
In an update posted to its website after the launch, SpaceX said automated health checks of hardware on the launch and catch tower triggered the aborted catch attempt. The company did not say what system failed the health check. As a safety measure, SpaceX must send a manual command for the booster to come back to land in order to prevent a malfunction from endangering people or property.
Turning it up to 11
There will be plenty more opportunities for more booster catches in the coming months as SpaceX ramps up its launch cadence at Starbase. Gwynne Shotwell, SpaceX’s president and chief operating officer, hinted at the scale of the company’s ambitions last week.
“We just passed 400 launches on Falcon, and I would not be surprised if we fly 400 Starship launches in the next four years,” she said at the Baron Investment Conference.
The next batch of test flights will use an improved version of Starship designated Block 2, or V2. Starship Block 2 comes with larger propellant tanks, redesigned forward flaps, and a better heat shield.
The new-generation Starship will hold more than 11 million pounds of fuel and oxidizer, about a million pounds more than the capacity of Starship Block 1. The booster and ship will produce more thrust, and Block 2 will measure 408 feet (124.4 meters) tall, stretching the height of the full stack by a little more than 10 feet.
Put together, these modifications should give Starship the ability to heave a payload of up to 220,000 pounds (100 metric tons) into low-Earth orbit, about twice the carrying capacity of the first-generation ship. Further down the line, SpaceX plans to introduce Starship Block 3 to again double the ship’s payload capacity.
Just as importantly, these changes are designed to make it easier for SpaceX to recover and reuse the Super Heavy booster and Starship upper stage. The goal of a fully reusable launcher builds on the partial reuse the company pioneered with its Falcon 9 rocket, and in SpaceX’s vision, it should dramatically bring down launch costs.
With Tuesday’s flight, it’s clear Starship works. Now it’s time to see what it can do.
Updated with additional details, quotes, and images.
Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
OpenGarage restored my home automations and gave me a whole bunch of new ideas.
Hark! The top portion of a garage door has entered my view, and I shall alert my owner to it. Credit: Kevin Purdy
Like Ars Senior Technology Editor Lee Hutchinson, I have a garage. The door on that garage is opened and closed by a device made by a company that, as with Lee’s, offers you a way to open and close it with a smartphone app. But that app doesn’t work with my preferred home automation system, Home Assistant, and also looks and works like an app made by a garage door company.
I had looked into the ratgdo Lee installed and raved about, but hooking it up to my particular Genie/Aladdin system would have required installing limit switches. So I instead installed an OpenGarage unit ($50 plus shipping). My garage opener now works with Home Assistant (and thereby pretty much anything else), it’s not subject to the whims of API access, and I’ve got a few ideas about how to make it even better. Allow me to walk you through what I did, why I did it, and what I might do next.
Thanks, I’ll take it from here, Genie
Genie, maker of my Wi-Fi-capable garage door opener (sold as an “Aladdin Connect” system), is not in the same boat as the Chamberlain/myQ setup that inspired Lee’s project. There was a working Aladdin Connect integration in Home Assistant until the company changed its API in January 2024. Genie said it would release its own official Home Assistant integration in June, and it did, but the integration was quickly pulled, seemingly over licensing issues. Since then, there have been no updates on the matter. (I have emailed Genie for comment and will update this post if I receive a reply.)
This is not egregious behavior, at least on the scale of garage door opener firms. And Aladdin’s app works with Google Home and Amazon Alexa, but not with Home Assistant or my secondary/lazy option, HomeKit/Apple Home. It also logs me out “for security” more often than I’d like and tells me this only after an iPhone shortcut refuses to fire. It has some decent features, but without deeper integrations, I can’t do things like have the brighter ceiling lights turn on when the door opens or flash indoor lights if the garage door stays open too long. At least not without Google or Amazon.
I’ve seen OpenGarage passed around the Home Assistant forums and subreddits over the years. It is, as the name implies, fully open source: hardware design, firmware, app code, API, everything. It’s a tiny ESP board with an ultrasonic distance sensor and a circuit relay attached. You can control and monitor it from a web browser (mobile or desktop), through IFTTT or MQTT, and, with the latest firmware, via email alerts. I decided to pull out the 6-foot ladder and give it a go.
Prototypes of the OpenGarage unit. To me, they look like little USB-powered owls, just with very stubby wings. Credit: OpenGarage
Installing the little watching owl
You generally mount the OpenGarage unit to the roof of your garage so the distance sensor can detect whether your garage door has rolled up in front of it. There are also options for mounting it with magnetic contact sensors or with a side view of a roll-up door, or you can devise some other arrangement in which two distinct sensor distances indicate an open or closed door. If you’ve got a Security+ 2.0 door (the kind with the yellow antenna, generally), you’ll need an adapter, too.
The toughest part of an overhead install is finding a spot that gives the unit a view of your garage door, not too close to rails or other obstructing objects but close enough for the contact wires and micro-USB power cable to reach. Ideally, it also has a view of your car when the door is closed and the car is inside, so it can report the car’s presence. I’ve yet to find the right thing to do with the “car is inside or not” data, but the seed is planted.
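The trick behind all of this is simple: a ceiling-mounted unit decides what it’s looking at by comparing the ultrasonic distance reading against thresholds you configure. Here’s a minimal Python sketch of that logic; the distances and names are hypothetical stand-ins for whatever values fit your garage, not the firmware’s actual defaults.

```python
# Hypothetical thresholds, in centimeters, for a ceiling-mounted unit.
DOOR_THRESHOLD_CM = 50   # a reading closer than this is the rolled-up door
CAR_THRESHOLD_CM = 150   # between the two thresholds: probably a car roof

def interpret_distance(distance_cm: float) -> str:
    """Map one ultrasonic distance reading to a garage state.

    When the door rolls up, it sits just below the ceiling-mounted
    sensor, so a short reading means "open." A longer reading means
    the sensor is seeing past where the door would be, down to a car
    roof or the bare floor.
    """
    if distance_cm < DOOR_THRESHOLD_CM:
        return "door open"
    if distance_cm < CAR_THRESHOLD_CM:
        return "door closed, car present"
    return "door closed, no car"

print(interpret_distance(30))   # door open
print(interpret_distance(120))  # door closed, car present
print(interpret_distance(230))  # door closed, no car
```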
OpenGarage’s introduction and explanation video.
My garage setup, like most of them, is pretty simple. There’s a big red glowing button on the wall near the door, and there are two very thin wires running from it to the opener. On the opener, there are four ports that you can open up with a screwdriver press. Most of the wires are headed to the safety sensor at the door bottom, while two come in from the opener button. After stripping a bit of wire to expose more cable, I pressed the contact wires from the OpenGarage into those same opener ports.
The wire terminal on my Genie garage opener. The green and pink wires lead to the OpenGarage unit. Credit: Kevin Purdy
After that, I connected the wires to the OpenGarage unit’s screw terminals, then did some pencil work on the garage ceiling to figure out how far I could run the contact and micro-USB power cable, getting the proper door view while maintaining some right-angle sense of order up there. When I had reached a decent compromise between cable tension and placement, I screwed the sensor into an overhead stud and used a staple gun to secure the wires. It doesn’t look like a pro installed it, but it’s not half bad.
Where I ended up installing my OpenGarage unit. Key points: Above the garage door when open, view of the car below, not too close to rails, able to reach power and opener contact. Credit: Kevin Purdy
A very versatile board
If you’ve got everything placed and wired up correctly, opening the OpenGarage access point or IP address should give you an interface that shows you the status of your garage, your car (optional), and its Wi-Fi and external connections.
The landing screen for the OpenGarage. You can only open the door or change settings if you know the device key (which you should change immediately). Credit: Kevin Purdy
It’s a handy webpage and a basic opener (provided you know the secret device key you set), but OpenGarage is more powerful in how it uses that data. OpenGarage’s device can keep a cloud connection open to Blynk or the maker’s own OpenThings.io cloud server. You can hook it up to MQTT or an IFTTT channel. It can send you alerts when your garage has been open a certain amount of time or if it’s open after a certain time of day.
You’re telling me you can just… see the state of these things, at all times, on your own network? Credit: Kevin Purdy
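To give a flavor of the MQTT route, here’s a minimal listener using the paho-mqtt Python library (version 2.x). The broker address and topic below are placeholders, not OpenGarage defaults; the actual topic is whatever you set in the firmware’s MQTT settings.

```python
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"    # placeholder: your MQTT broker's address
TOPIC = "opengarage/#"     # placeholder: match your firmware's MQTT config

def on_connect(client, userdata, flags, reason_code, properties):
    # Subscribe once the connection is up (or re-established).
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Payload formats vary by firmware version; just print what arrives.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```

From there, anything that can speak MQTT, Home Assistant included, can react to the door’s state.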
You really don’t need a corporate garage coder
For me, the greatest benefit is in hooking OpenGarage up to Home Assistant. I’ve added an opener button to my standard dashboard (one that requires a long-press or two actions to open). I’ve restored the automation that turns on the overhead bulbs for five minutes when the garage door opens. And I can dig in further if I want: an alert that it’s Monday night at 10 pm and I’ve yet to open the garage door, indicating I forgot to put the trash out, or maybe some kind of NFC tag to allow for easy opening while on a bike, if that’s not a security nightmare (it might be).
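In practice, you’d build those as native Home Assistant automations, but the moving parts are easier to see as a script. Here’s a rough sketch of the “lights on when the door opens” idea against Home Assistant’s REST API; the address, token, and entity IDs are placeholders for whatever your own setup uses.

```python
import time
import requests

HA_URL = "http://homeassistant.local:8123"  # placeholder address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created under your HA user profile
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

DOOR_ENTITY = "cover.garage_door"     # placeholder entity ID
LIGHT_ENTITY = "light.garage_bulbs"   # placeholder entity ID

def door_is_open() -> bool:
    # Read the current state of the garage door entity.
    resp = requests.get(f"{HA_URL}/api/states/{DOOR_ENTITY}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["state"] == "open"

was_open = False
while True:
    is_open = door_is_open()
    if is_open and not was_open:
        # The door just opened: turn on the overhead bulbs.
        requests.post(
            f"{HA_URL}/api/services/light/turn_on",
            headers=HEADERS,
            json={"entity_id": LIGHT_ENTITY},
        )
    was_open = is_open
    time.sleep(5)
```

A native automation does the same thing without the polling; the script just makes the pieces visible.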
Not for nothing, but OpenGarage is also a deeply likable bit of indie kit. It’s a two-person operation: Ray Wang built on his work with the open and handy OpenSprinkler project, trading Arduino for ESP8266 and doing some 3D printing to fit the sensors and switches, while Samer Albahra provides the mobile app, documentation, and other help. Their enthusiasm for DIY home control has likely brought out the same in others, and it certainly has in me.
Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.
“Events are weaving together quickly. The fate of the world shall be decided.”
Dragon Age: The Veilguard is as much about the world, story, and characters as the gameplay. Credit: EA
BioWare’s reputation as a AAA game development studio is built on three pillars: world-building, storytelling, and character development. In-game codices offer textual support for fan theories, replays are kept fresh by systems that encourage experimenting with alternative quest resolutions, and players get so attached to their characters that an entire fan-built ecosystem of player-generated fiction and artwork has sprung up over the years.
After two very publicly disappointing releases with Mass Effect: Andromeda and Anthem, BioWare pivoted back to the formula that brought it success, but I’m wrapping up the first third of The Veilguard, and it feels like there’s an ingredient missing from the special sauce. Where are the quests that really let me agonize over the potential repercussions of my choices?
I love Thedas, and I love the ragtag group of friends my hero has to assemble anew in each game, but what really gets me going as a roleplayer are the morally ambiguous questions that make me squirm: the dreadful and delicious BioWare decisions.
Should I listen to the tormented templar and assume every mage I meet is so dangerous that I need to adopt a “strike first, ask questions later” policy, or can I assume at least some magic users are probably not going to murder me on sight? When I find out my best friend’s kleptomania is the reason my city has been under armed occupation for the past 10 years, do I turn her in, or do I swear to defend her to the end?
Questions like these keep me coming back to replay BioWare games over and over. I’ve been eagerly awaiting the release of the fourth game in the Dragon Age franchise so I can find out what fresh dilemmas I’ll have to wrangle, but at about 70 hours in, they seem to be in short supply.
The allure of interactive media, and the limitations
Before we get into some actual BioWare choices, I think it’s important to acknowledge the realities of the medium. These games are brilliant interactive stories. They reach me like my favorite novels do, but they offer a flexibility not available in printed media (well, outside of the old Choose Your Own Adventure novels, anyway). I’m not just reading about the main character’s decisions; I’m making the main character’s decisions, and that can be some heady stuff.
There’s a limit to how much of the plot can be put into a player’s hands, though. A roleplaying game developer wants to give as much player agency as possible, but that has to happen through the illusion of choice. You must arrive at one specific location for the sake of the plot, but the game can let you choose from several open pathways to get there. It’s a railroad—hopefully a well-hidden railroad—but at the end of the day, no matter how great the storytelling is, these are still video games. There’s only so much they can do.
So if you have to maintain an illusion of choice but also want to invite your players to thoughtfully engage with your decision nodes, what do you do? You reward them for playing along and suspending their disbelief by giving their choices meaningful weight inside your shared fantasy world.
If the win condition of a basic quest is a simple “perform action X at location Y,” you have to spice that up with some complexity or the game gets very old very quickly. That complexity can be programmatic, or it can be narrative. With your game development tools, you can give the player more than one route to navigate to location Y through good map design, or you can make action X easier or harder to accomplish by setting preconditions like puzzles to solve or other nodes that need interaction. With the narrative, you’re not limited to what can be accomplished in your game engine. The question becomes, “How much can I give the player to emotionally react to?”
In a field packed with quality roleplaying game developers, this is where BioWare has historically shined: making me have big feelings about my companions and the world they live in. This is what I crave.
Who is (my) Rook, anyway?
The Veilguard sets up your protagonist, Rook, with a lightly sketched backstory tied to your chosen faction. You pick a first name, you are assigned a last name, and you read a brief summary of an important event in Rook’s recent history. The rest is on you, and you reveal Rook’s essential nature through the dialog wheel and the major plot choices you make. Those plot choices are necessarily mechanically limited in scope and in rewards/consequences, but narratively, there’s a lot of ground you can cover.
For the record, I picked “Oof.” That’s just how my Rook rolls. Credit: Marisol Cuervo
During the game’s tutorial, you’re given information about a town that has mysteriously fallen out of communication with the group you’re assisting. You and your companions set out to discover what happened. You investigate the town, find the person responsible, and decide what happens to him next. Mechanically, it’s pretty straightforward.
The real action is happening inside your head. As Rook, I’ve just walked through a real horror show in this small village, put together some really disturbing clues about what’s happening, and I’m now staring down the person responsible while he’s trapped inside an uncomfortably slimy-looking cyst of material the game calls the Blight. Here is the choice: What does my Rook decide to do with him, and what does that choice say about her character? I can’t answer that question without looking through the lens of my personal morality, even if I intend for Rook to act counter to my own nature.
My first emotional, knee-jerk reaction is to say screw this guy. Leave him to the consequences of his own making. He’s given me an offensively venal justification for how he got here, so let him sit there and stare at his material reward for all the good it will do him while he’s being swallowed by the Blight.
The alternative is saving him. You get to give him a scathing lecture, but he goes free, and it’s because you made that choice. You walked through the center of what used to be a vibrant settlement and saw this guy, you know he’s the one who allowed this mess to happen, and you stayed true to your moral center anyway. Don’t you feel good? Look at you, big hero! All those other people will die from the Blight, but you held the line and said, “Well, not this one.”
Being vindictive might feel good, but I feel leaving him is a profoundly evil choice. Credit: Marisol Cuervo
There’s no objectively right answer about what to do with the mayor, and I’m here for it. Leaving him or saving him: Neither option is without ethical hazards. I can use this medium to dig deep into who I am and how I see myself before building up my idea of who my Rook is going to be.
Make your decision, and Rook lives with the consequences. Some are significant, and some… not so much.
Your choices are world-changing—but also can’t be
Longtime BioWare fans have historically been given the luxury of having their choices—large and small—acknowledged by the next game in the franchise. In past games, this happened largely through small dialog mentions or NPC reappearances, but as satisfying as this is for me as a player, it creates a big problem for BioWare.
Here’s an example: depending on the actions of the player, ginger-haired bard and possible romantic companion Leliana can be missed entirely as a recruitable companion in Dragon Age: Origins, the first game in the franchise. If she is recruited, she can potentially die in a later quest. It’s not guaranteed that she survives the first game. That’s a bit of a problem in Dragon Age II, where Leliana shows up in one of the downloadable content packs. It’s a bigger problem in the third game, where Leliana is the official spymaster for the titular Inquisition. BioWare calls these NPCs who can exist in a superposition of states “quantum characters.”
One of the game’s creative leaders talking about “quantum characters.” Credit: Marisol Cuervo
If you follow this thought to its logical end, you can understand where BioWare is coming from: After a critical mass of quantum characters is reached, the effects are impossible to manage. BioWare sidesteps the Leliana problem entirely in The Veilguard by just not talking about her.
BioWare has staunchly maintained that, as a studio, it does not have a set canon for the history of its games; there’s only the personal canon each player develops as a result of their gameplay. As I’ve been playing, I can tell there’s been a lot of thought put into ensuring none of The Veilguard’s in-game references to areas covered in the previous three games would invalidate a player’s personal canon, and I appreciate that. That’s not an easy needle to thread. I can also see that the same care was put into ensuring that this game’s decisions would not create future quantum characters, and that means the choices we’re given are very carefully constrained to this story and only this story.
But it still feels like we’re missing an opportunity to make these moral decisions on a smaller scale. Dragon Age: Inquisition introduced a collectible and cleverly hidden item for players to track down while they worked on saving the world. Collect enough trinkets and you eventually open up an entirely optional area to explore. Because this is BioWare, though, there was a catch: To find the trinkets, you had to stare through the crystal eyes of a skull sourced from the body of a mage who has been forcibly cut off from the source of all magic in the world. Is your Inquisitor on board with that, even if it comes with a payoff? Personally, I don’t like the idea. My Inquisitor? She thoroughly looted the joint. It’s a small choice, and it doesn’t really impact the long-term state of the world, but I still really enjoyed working through it.
Later in the first act of The Veilguard, Rook finally gets an opportunity to make one of the big, ethically difficult decisions. I’ll avoid spoilers, but I don’t mind sharing that it was a satisfyingly difficult choice to make, and I wasn’t sure I felt good about my decision. I spent a lot of time staring at the screen before clicking on my answer. Yeah, that’s the good stuff right there.
In keeping with the studio’s effort to avoid creating quantum worldstates, The Veilguard treads lightly with the mechanical consequences of this specific choice and the player is asked to take up the narrative repercussions. How hard the consequences hit, or if they miss, comes down to your individual approach to roleplaying games. Are you a player who inhabits the character and lives in the world? Or is it more like you’re riding along, only watching a story unfold? Your answer will greatly influence how connected you feel to the choices BioWare asks you to make.
Is this better or worse?
Much online discussion around The Veilguard has centered on BioWare’s decision to incorporate only three choices from the previous game in the series, Inquisition, rather than using the existing Dragon Age Keep to import an entire worldstate. I’m a little disappointed by this, but I’m also not sure anything in Thedas is significantly changed because my Hero of Ferelden was a softie who convinced the guard in the Ostagar camp to give his lunch to the prisoner who was in the cage for attempted desertion.
At the same time, as I wrap up the first act, I’m missing the mild tension I should be feeling when the dialog wheel comes up, and not just because many of the dialog choices seem to be three flavors of “yes, and…” One of my companions was deeply unhappy with me for a period of time after I made the big first-act decision and sharply rebuffed my attempts at justification, snapping at me that I should go. Previous games allowed companions to leave your party forever if they disagreed enough with your main character; this doesn’t seem to be a mechanic you need to worry about in The Veilguard.
Rook’s friends might be divided on how they view her choice of verbal persuasion versus percussive diplomacy, but none of them had anything to say about it while she was very earnestly attempting to convince a significant NPC they were making a pretty big mistake. One of Rook’s companions later asked about her intentions during that interaction but otherwise had no reaction.
BioWare, are you OK? Why do you keep punching people who don’t agree with you? Credit: Marisol Cuervo
Seventy hours into the game, I’m looking for places where I have to navigate my own ethical landscape before I can choose to have Rook conform to, or flout, the social mores of northern Thedas. I’m still helping people, being the hero, and having a lot of fun doing so, but the problems I’m solving aren’t sticky, and they lack the nuance I enjoyed in previous games. I want to really wrestle with the potential consequences before I decide to do something. Maybe this is something I’ll see more of in the second act.
If the banal, puppy-kicking kind of evil has been minimized in favor of larger stakes—something I applaud—it has left a sort of vacuum on the roleplaying spectrum. BioWare has big opinions about how heroes should act and how they should handle interpersonal conflict. I wish I felt more like I was having that struggle rather than being told that’s how Rook is feeling.
I’m hopeful that my Rook isn’t just going to save the world, but that in the next act of the game, I’ll see more opportunities from BioWare to let her do it her way.
“You’ve taken this idea way too far,” a mentor told Prof. Fei-Fei Li.
Credit: Aurich Lawson | Getty Images
During my first semester as a computer science graduate student at Princeton, I took COS 402: Artificial Intelligence. Toward the end of the semester, there was a lecture about neural networks. This was in the fall of 2008, and I got the distinct impression—both from that lecture and the textbook—that neural networks had become a backwater.
Neural networks had delivered some impressive results in the late 1980s and early 1990s. But then progress stalled. By 2008, many researchers had moved on to mathematically elegant approaches such as support vector machines.
I didn’t know it at the time, but a team at Princeton—in the same computer science building where I was attending lectures—was working on a project that would upend the conventional wisdom and demonstrate the power of neural networks. That team, led by Prof. Fei-Fei Li, wasn’t working on a better version of neural networks. They were hardly thinking about neural networks at all.
Rather, they were creating a new image dataset that would be far larger than any that had come before: 14 million images, each labeled with one of nearly 22,000 categories.
Li tells the story of ImageNet in her recent memoir, The Worlds I See. As she worked on the project, she faced plenty of skepticism from friends and colleagues.
“I think you’ve taken this idea way too far,” a mentor told her a few months into the project in 2007. “The trick is to grow with your field. Not to leap so far ahead of it.”
It wasn’t just that building such a large dataset was a massive logistical challenge. People doubted that the machine learning algorithms of the day would benefit from such a vast collection of images.
“Pre-ImageNet, people did not believe in data,” Li said in a September interview at the Computer History Museum. “Everyone was working on completely different paradigms in AI with a tiny bit of data.”
Ignoring negative feedback, Li pursued the project for more than two years. It strained her research budget and the patience of her graduate students. When she took a new job at Stanford in 2009, she took several of those students—and the ImageNet project—with her to California.
ImageNet received little attention for the first couple of years after its release in 2009. But in 2012, a team from the University of Toronto trained a neural network on the ImageNet dataset, achieving unprecedented performance in image recognition. That groundbreaking AI model, dubbed AlexNet after lead author Alex Krizhevsky, kicked off the deep learning boom that has continued to the present day.
AlexNet would not have succeeded without the ImageNet dataset. AlexNet also would not have been possible without a platform called CUDA, which allowed Nvidia’s graphics processing units (GPUs) to be used in non-graphics applications. Many people were skeptical when Nvidia announced CUDA in 2006.
So the AI boom of the last 12 years was made possible by three visionaries who pursued unorthodox ideas in the face of widespread criticism. One was Geoffrey Hinton, a University of Toronto computer scientist who spent decades promoting neural networks despite near-universal skepticism. The second was Jensen Huang, the CEO of Nvidia, who recognized early that GPUs could be useful for more than just graphics.
The third was Fei-Fei Li. She created an image dataset that seemed ludicrously large to most of her colleagues. But it turned out to be essential for demonstrating the potential of neural networks trained on GPUs.
Geoffrey Hinton
A neural network is a network of thousands, millions, or even billions of neurons. Each neuron is a mathematical function that produces an output based on a weighted average of its inputs.
Suppose you want to create a network that can identify handwritten decimal digits, like a handwritten number two. Such a network would take in an intensity value for each pixel in an image and output a probability distribution over the ten possible digits—0, 1, 2, and so forth.
To train such a network, you first initialize it with random weights. You then run it on a sequence of example images. For each image, you train the network by strengthening the connections that push the network toward the right answer (in this case, a high-probability value for the “2” output) and weakening connections that push toward a wrong answer (a low probability for “2” and high probabilities for other digits). If trained on enough example images, the model should start to predict a high probability for “2” when shown a two—and not otherwise.
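To make that concrete, here’s a minimal Python sketch of the forward pass for a single-layer version of such a network: 784 pixel intensities go in, and a probability distribution over the ten digits comes out. The weights are random here, standing in for whatever training would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 28x28 grayscale image, flattened to 784 pixel intensities in [0, 1].
image = rng.random(784)

# Ten output neurons, one per digit; each computes a weighted sum of pixels.
weights = rng.normal(scale=0.01, size=(10, 784))
biases = np.zeros(10)

logits = weights @ image + biases

# Softmax turns the ten raw scores into probabilities that sum to 1.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(probs)           # the network's probability for each digit, 0 through 9
print(probs.argmax())  # its best guess, meaningless until trained
```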
In the late 1950s, scientists started to experiment with basic networks that had a single layer of neurons. However, their initial enthusiasm cooled as they realized that such simple networks lacked the expressive power required for complex computations.
Deeper networks—those with multiple layers—had the potential to be more versatile. But in the 1960s, no one knew how to train them efficiently. This was because changing a parameter somewhere in the middle of a multi-layer network could have complex and unpredictable effects on the output.
So by the time Hinton began his career in the 1970s, neural networks had fallen out of favor. Hinton wanted to study them, but he struggled to find an academic home in which to do so. Between 1976 and 1986, Hinton spent time at four different research institutions: Sussex University, the University of California San Diego (UCSD), a branch of the UK Medical Research Council, and finally Carnegie Mellon, where he became a professor in 1982.
Geoffrey Hinton speaking in Toronto in June. Credit: Photo by Mert Alper Dervis/Anadolu via Getty Images
In a landmark 1986 paper, Hinton teamed up with two of his former colleagues at UCSD, David Rumelhart and Ronald Williams, to describe a technique called backpropagation for efficiently training deep neural networks.
Their idea was to start with the final layer of the network and work backward. For each connection in the final layer, the algorithm computes a gradient—a mathematical estimate of whether increasing the strength of that connection would push the network toward the right answer. Based on these gradients, the algorithm adjusts each parameter in the model’s final layer.
The algorithm then propagates these gradients backward to the second-to-last layer. A key innovation here is a formula—based on the chain rule from high school calculus—for computing the gradients in one layer based on gradients in the following layer. Using these new gradients, the algorithm updates each parameter in the second-to-last layer of the model. The gradients then get propagated backward to the third-to-last layer, and the whole process repeats once again.
The algorithm only makes small changes to the model in each round of training. But as the process is repeated over thousands, millions, billions, or even trillions of training examples, the model gradually becomes more accurate.
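The shape of the algorithm is easier to see in code than in prose. Below is a toy two-layer network trained by backpropagation on fake digit data, written as a Python sketch; it illustrates the gradients flowing backward layer by layer, not any particular historical implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake training data: 100 "images" of 784 pixels with random digit labels.
X = rng.random((100, 784))
y = rng.integers(0, 10, size=100)

# Two layers of weights: 784 inputs -> 64 hidden neurons -> 10 outputs.
W1 = rng.normal(scale=0.01, size=(784, 64))
W2 = rng.normal(scale=0.01, size=(64, 10))
learning_rate = 0.1

for step in range(1000):
    # Forward pass.
    hidden = np.maximum(0, X @ W1)  # ReLU activation
    logits = hidden @ W2
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # Gradient at the final layer: nudge the right digit's score up
    # and the wrong digits' scores down.
    grad_logits = probs.copy()
    grad_logits[np.arange(len(y)), y] -= 1
    grad_logits /= len(y)

    # Backward pass: the chain rule carries gradients from the final
    # layer back to the one before it.
    grad_W2 = hidden.T @ grad_logits
    grad_hidden = (grad_logits @ W2.T) * (hidden > 0)  # back through ReLU
    grad_W1 = X.T @ grad_hidden

    # Each round makes only a small adjustment to the weights.
    W2 -= learning_rate * grad_W2
    W1 -= learning_rate * grad_W1
```

Swap in real labeled images for the random arrays, and this same loop gradually turns into an actual digit classifier.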
Hinton and his colleagues weren’t the first to discover the basic idea of backpropagation. But their paper popularized the method. As people realized it was now possible to train deeper networks, it triggered a new wave of enthusiasm for neural networks.
Hinton moved to the University of Toronto in 1987 and began attracting young researchers who wanted to study neural networks. One of the first was the French computer scientist Yann LeCun, who did a year-long postdoc with Hinton before moving to Bell Labs in 1988.
Hinton’s backpropagation algorithm allowed LeCun to train models deep enough to perform well on real-world tasks like handwriting recognition. By the mid-1990s, LeCun’s technology was working so well that banks started to use it for processing checks.
“At one point, LeCun’s creation read more than 10 percent of all checks deposited in the United States,” wrote Cade Metz in his 2022 book Genius Makers.
But when LeCun and other researchers tried to apply neural networks to larger and more complex images, it didn’t go well. Neural networks once again fell out of fashion, and some researchers who had focused on neural networks moved on to other projects.
Hinton never stopped believing that neural networks could outperform other machine learning methods. But it would be many years before he’d have access to enough data and computing power to prove his case.
Jensen Huang
Jensen Huang speaking in Denmark in October. Credit: Photo by MADS CLAUS RASMUSSEN/Ritzau Scanpix/AFP via Getty Images
The brain of every personal computer is a central processing unit (CPU). These chips are designed to perform calculations in order, one step at a time. This works fine for conventional software like Windows and Office. But some video games require so many calculations that they strain the capabilities of CPUs. This is especially true of games like Quake, Call of Duty, and Grand Theft Auto, which render three-dimensional worlds many times per second.
So gamers rely on GPUs to accelerate performance. Inside a GPU are many execution units—essentially tiny CPUs—packaged together on a single chip. During gameplay, different execution units draw different areas of the screen. This parallelism enables better image quality and higher frame rates than would be possible with a CPU alone.
Nvidia invented the GPU in 1999 and has dominated the market ever since. By the mid-2000s, Nvidia CEO Jensen Huang suspected that the massive computing power inside a GPU would be useful for applications beyond gaming. He hoped scientists could use it for compute-intensive tasks like weather simulation or oil exploration.
So in 2006, Nvidia announced the CUDA platform. CUDA allows programmers to write “kernels,” short programs designed to run on a single execution unit. Kernels allow a big computing task to be split up into bite-sized chunks that can be processed in parallel. This allows certain kinds of calculations to be completed far faster than with a CPU alone.
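As a rough illustration of the kernel idea, here is a sketch in Python using the Numba library’s CUDA bindings rather than Nvidia’s own C toolkit. The kernel name, array sizes, and thread counts are arbitrary choices of mine, and running it assumes an Nvidia GPU and the numba package are available.

```python
import numpy as np
from numba import cuda

# A kernel: a short program that each GPU execution unit runs on its
# own slice of the data. Adding two large arrays splits naturally
# into bite-sized, independent chunks.
@cuda.jit
def add_kernel(a, b, out):
    i = cuda.grid(1)           # this thread's position in the grid
    if i < out.size:           # guard threads that fall past the end
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.arange(n, dtype=np.float32)
b = np.arange(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

# Launch enough threads to cover all n elements, 256 per block.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](a, b, out)
```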
But there was little interest in CUDA when it was first introduced, wrote Steven Witt in The New Yorker last year:
When CUDA was released, in late 2006, Wall Street reacted with dismay. Huang was bringing supercomputing to the masses, but the masses had shown no indication that they wanted such a thing.
“They were spending a fortune on this new chip architecture,” Ben Gilbert, the co-host of “Acquired,” a popular Silicon Valley podcast, said. “They were spending many billions targeting an obscure corner of academic and scientific computing, which was not a large market at the time—certainly less than the billions they were pouring in.”
Huang argued that the simple existence of CUDA would enlarge the supercomputing sector. This view was not widely held, and by the end of 2008, Nvidia’s stock price had declined by seventy percent…
Downloads of CUDA hit a peak in 2009, then declined for three years. Board members worried that Nvidia’s depressed stock price would make it a target for corporate raiders.
Huang wasn’t specifically thinking about AI or neural networks when he created the CUDA platform. But it turned out that Hinton’s backpropagation algorithm could easily be split up into bite-sized chunks. So training neural networks turned out to be a killer app for CUDA.
According to Witt, Hinton was quick to recognize the potential of CUDA:
In 2009, Hinton’s research group used Nvidia’s CUDA platform to train a neural network to recognize human speech. He was surprised by the quality of the results, which he presented at a conference later that year. He then reached out to Nvidia. “I sent an e-mail saying, ‘Look, I just told a thousand machine-learning researchers they should go and buy Nvidia cards. Can you send me a free one?’ ” Hinton told me. “They said no.”
Despite the snub, Hinton and his graduate students, Alex Krizhevsky and Ilya Sutskever, obtained a pair of Nvidia GTX 580 GPUs for the AlexNet project. Each GPU had 512 execution units, allowing Krizhevsky and Sutskever to train a neural network hundreds of times faster than would be possible with a CPU. This speed allowed them to train a larger model—and to train it on many more training images. And they would need all that extra computing power to tackle the massive ImageNet dataset.
Fei-Fei Li
Fei-Fei Li at the SXSW conference in 2018. Credit: Photo by Hubert Vestil/Getty Images for SXSW
Fei-Fei Li wasn’t thinking about either neural networks or GPUs as she began a new job as a computer science professor at Princeton in January of 2007. While earning her PhD at Caltech, she had built a dataset called Caltech 101 that had 9,000 images across 101 categories.
That experience had taught her that computer vision algorithms tended to perform better with larger and more diverse training datasets. Not only had Li found that her own algorithms performed better when trained on Caltech 101, but other researchers had also begun training their models on her dataset and comparing their performance with one another. This turned Caltech 101 into a benchmark for the field of computer vision.
So when she got to Princeton, Li decided to go much bigger. She became obsessed with an estimate by vision scientist Irving Biederman that the average person recognizes roughly 30,000 different kinds of objects. Li started to wonder if it would be possible to build a truly comprehensive image dataset—one that included every kind of object people commonly encounter in the physical world.
A Princeton colleague told Li about WordNet, a massive database that attempted to catalog and organize 140,000 words. Li called her new dataset ImageNet, and she used WordNet as a starting point for choosing categories. She eliminated verbs and adjectives, as well as intangible nouns like “truth.” That left a list of 22,000 countable objects ranging from “ambulance” to “zucchini.”
She planned to take the same approach she’d taken with the Caltech 101 dataset: use Google’s image search to find candidate images, then have a human being verify them. For the Caltech 101 dataset, Li had done this herself over the course of a few months. This time she would need more help. She planned to hire dozens of Princeton undergraduates to help her choose and label images.
But even after heavily optimizing the labeling process (for example, pre-downloading candidate images so they would be instantly available for students to review), Li and her graduate student Jia Deng calculated that it would take more than 18 years to select and label millions of images.
The project was saved when Li learned about Amazon Mechanical Turk, a crowdsourcing platform Amazon had launched a couple of years earlier. Not only was AMT’s international workforce more affordable than Princeton undergraduates, but the platform was also far more flexible and scalable. Li’s team could hire as many people as they needed, on demand, and pay them only as long as they had work available.
AMT cut the time needed to complete ImageNet from 18 years down to two. Li writes that her lab spent those two years “on the knife-edge of our finances” as the team struggled to complete the ImageNet project. But they had enough funds to pay three people to look at each of the 14 million images in the final dataset.
ImageNet was ready for publication in 2009, and Li submitted it to the Conference on Computer Vision and Pattern Recognition, which was held in Miami that year. The paper was accepted, but it didn’t get the kind of recognition Li hoped for.
“ImageNet was relegated to a poster session,” Li writes. “This meant that we wouldn’t be presenting our work in a lecture hall to an audience at a predetermined time but would instead be given space on the conference floor to prop up a large-format print summarizing the project in hopes that passersby might stop and ask questions… After so many years of effort, this just felt anticlimactic.”
To generate public interest, Li turned ImageNet into a competition. Realizing that the full dataset might be too unwieldy to distribute to dozens of contestants, she created a much smaller (but still massive) dataset with 1,000 categories and 1.4 million images.
The first year’s competition in 2010 generated a healthy amount of interest, with 11 teams participating. The winning entry was based on support vector machines. Unfortunately, Li writes, it was “only a slight improvement over cutting-edge work found elsewhere in our field.”
The second year of the ImageNet competition attracted fewer entries than the first. The winning entry in 2011 was another support vector machine, and it just barely improved on the performance of the 2010 winner. Li started to wonder if the critics had been right. Maybe “ImageNet was too much for most algorithms to handle.”
“For two years running, well-worn algorithms had exhibited only incremental gains in capabilities, while true progress seemed all but absent,” Li writes. “If ImageNet was a bet, it was time to start wondering if we’d lost.”
But when Li reluctantly staged the competition a third time in 2012, the results were totally different. Geoff Hinton’s team was the first to submit a model based on a deep neural network. And its top-5 accuracy was 85 percent—10 percentage points better than the 2011 winner.
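A note on the metric: “top-5 accuracy” counts a prediction as correct if the true label appears anywhere among the model’s five highest-scoring categories, a forgiving standard suited to ImageNet’s 1,000 fine-grained classes. Here is a small sketch of the computation in Python, with random numbers standing in for a real model’s scores.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 1000))    # 4 images, a score per category
labels = np.array([3, 141, 592, 653])  # made-up true categories

top5 = np.argsort(scores, axis=1)[:, -5:]      # five best guesses each
hits = (top5 == labels[:, None]).any(axis=1)   # true label among them?
print(f"top-5 accuracy: {hits.mean():.0%}")
```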
Li’s initial reaction was incredulity: “Most of us saw the neural network as a dusty artifact encased in glass and protected by velvet ropes.”
“This is proof”
Yann LeCun testifies before the US Senate in September. Credit: Photo by Kevin Dietsch/Getty Images
The ImageNet winners were scheduled to be announced at the European Conference on Computer Vision in Florence, Italy. Li, who had a baby at home in California, was planning to skip the event. But when she saw how well AlexNet had done on her dataset, she realized this moment would be too important to miss: “I settled reluctantly on a twenty-hour slog of sleep deprivation and cramped elbow room.”
On an October day in Florence, Alex Krizhevsky presented his results to a standing-room-only crowd of computer vision researchers. Fei-Fei Li was in the audience. So was Yann LeCun.
Cade Metz reports that after the presentation, LeCun stood up and called AlexNet “an unequivocal turning point in the history of computer vision. This is proof.”
The success of AlexNet vindicated Hinton’s faith in neural networks, but it was arguably an even bigger vindication for LeCun.
AlexNet was a convolutional neural network, a type of neural network that LeCun had developed 20 years earlier to recognize handwritten digits on checks. (For more details on how CNNs work, see the in-depth explainer I wrote for Ars in 2018.) Indeed, there were few architectural differences between AlexNet and LeCun’s image recognition networks from the 1990s.
AlexNet was simply far larger. In a 1998 paper, LeCun described a document-recognition network with seven layers and 60,000 trainable parameters. AlexNet had eight layers, but these layers had 60 million trainable parameters.
LeCun could not have trained a model that large in the early 1990s because there were no computer chips with as much processing power as a 2012-era GPU. Even if LeCun had managed to build a big enough supercomputer, he would not have had enough images to train it properly. Collecting those images would have been hugely expensive in the years before Google and Amazon Mechanical Turk.
And this is why Fei-Fei Li’s work on ImageNet was so consequential. She didn’t invent convolutional networks or figure out how to make them run efficiently on GPUs. But she provided the training data that large neural networks needed to reach their full potential.
The technology world immediately recognized the importance of AlexNet. Hinton and his students formed a shell company with the goal of being “acquihired” by a big tech company. Within months, Google purchased the company for $44 million. Hinton worked at Google for the next decade while retaining his academic post in Toronto. Ilya Sutskever spent a few years at Google before becoming a cofounder of OpenAI.
AlexNet also made Nvidia GPUs the industry standard for training neural networks. In 2012, the market valued Nvidia at less than $10 billion. Today, Nvidia is one of the most valuable companies in the world, with a market capitalization north of $3 trillion. That high valuation is driven mainly by overwhelming demand for GPUs like the H100 that are optimized for training neural networks.
Sometimes the conventional wisdom is wrong
“That moment was pretty symbolic to the world of AI because three fundamental elements of modern AI converged for the first time,” Li said in a September interview at the Computer History Museum. “The first element was neural networks. The second element was big data, using ImageNet. And the third element was GPU computing.”
Today, leading AI labs believe the key to progress in AI is to train huge models on vast datasets. Big technology companies are in such a hurry to build the data centers required to train larger models that they’ve started to lease entire nuclear power plants to provide the necessary power.
You can view this as a straightforward application of the lessons of AlexNet. But I wonder if we ought to draw the opposite lesson from AlexNet: that it’s a mistake to become too wedded to conventional wisdom.
“Scaling laws” have had a remarkable run in the 12 years since AlexNet, and perhaps we’ll see another generation or two of impressive results as the leading labs scale up their foundation models even more.
But we should be careful not to let the lessons of AlexNet harden into dogma. I think there’s at least a chance that scaling laws will run out of steam in the next few years. And if that happens, we’ll need a new generation of stubborn nonconformists to notice that the old approach isn’t working and try something different.
Tim Lee was on staff at Ars from 2017 to 2021. Last year, he launched a newsletter, Understanding AI, that explores how AI works and how it’s changing our world. You can subscribe here.
Saddle up, space cowboys. It may get bumpy for a while.
President Donald Trump steps on the stage at Kennedy Space Center after the successful launch of the Demo-2 crew mission in May 2020. Credit: NASA/Bill Ingalls
The global space community awoke to a new reality on Wednesday morning.
The founder of this century’s most innovative space company, Elon Musk, successfully used his fortune, time, and energy to help elect Donald Trump president of the United States. Musk was already the dominant Western player in space. SpaceX launches national security satellites and NASA astronauts and operates a megaconstellation. He controls the machines that provide essential space services to NASA and the US military. And now, thanks to his gamble on backing Trump, Musk has strong-armed his way into Trump’s inner circle.
Although he may not hold a formal Cabinet position, Musk will have a broad portfolio in the new administration for as long as his relationship with Trump remains positive. This gives Musk extraordinary power over a number of areas, including spaceflight. Already this week, he has been soliciting ideas and input from colleagues. The New York Times reported that Musk has advised Trump to hire key employees from SpaceX into his administration, including at the Department of Defense. This reflects the huge conflict of interest Musk will face when it comes to space policy: his actions could significantly benefit SpaceX, of which he is the majority owner and in which he has the final say on major decisions.
It will be a hugely weird dynamic. Musk is unquestionably in a position for self-dealing. Normally, such conflicts of interest would be frowned on within a government, but Trump has already shown a brazen disregard for norms, and there’s no reason to believe that will change during his second go at the presidency. One way around this could be to give Musk a “special adviser” tag, which means he would not have to comply with federal conflict-of-interest laws.
So it’s entirely possible that the sitting chief executive of SpaceX could become the nation’s most important adviser on space policy, conflicts be damned. Musk has his flaws as a leader, but it is difficult to argue with his results. His intuitions for the industry, such as pushing hard for reusable launch and broadband Internet from space, have largely been correct. In a vacuum, it is not necessarily bad to have someone like Musk providing a vision for US spaceflight in the 21st century. But while space may be a vacuum, there is plenty of oxygen in Washington, DC.
Being a space journalist got a lot more interesting this week—and a lot more difficult. As I waded through this reality on Wednesday, I began to reach out to sources about what is likely to happen. It’s way too early to have much certainty, but we can begin to draw some broad outlines for what may happen to space policy during a second Trump presidency. Buckle up—it could be a wild ride.
Bringing efficiency to NASA?
Let’s start with NASA and firmly establish what we mean. The US space agency does some pretty great things, but it’s also a bloated bureaucracy. That’s by design. Members of Congress write budgets and inevitably seek to steer more federal dollars to NASA activities in the areas they represent. Two decades ago, an engineer named Mike Griffin—someone Musk sought to hire as SpaceX’s first chief engineer in 2002—became NASA administrator under President George W. Bush.
Griffin recognized NASA’s bloat. For starters, it had too many field centers. NASA simply doesn’t need 10 major outposts across the country, as they end up fighting one another for projects and funding. However, Griffin knew he would face a titanic political struggle to close field centers, on par with federal efforts to close duplicative military bases during the “Base Realignment and Closure” process after the Cold War. So Griffin instead sought to make the best of the situation with his “Ten Healthy Centers” initiative. Work together, he told his teams across the country.
Essentially, then, for the last two decades, NASA programs have sought to leverage expertise across the agency. Consider the development of the Orion spacecraft, which began nearly 20 years ago. The following comment comes from a 2016 oral history interview with Julie Kramer-White, a long-time NASA engineer who was chief engineer of Orion at the time.
“I’ll tell you the truth, ten healthy centers is a pain in the butt,” she said. “The engineering team is a big engineering team, and they are spread across 9 of the 10 Centers… Our guys don’t think anything about a phone call that’s got people from six different centers. You’re trying to balance the time zone differences, and of course that’s got its own challenge with Europe as well but even within the United States with the different centers managing the time zone issue. I would say as a net technically, it’s a good thing. From a management perspective, boy, it’s a hassle.”
Space does not get done fast or efficiently by committee. But that’s how NASA operates—committees within committees, reviewed by committees.
Musk has repeatedly said he wants to bring efficiency to the US government and vowed to identify $2 trillion in savings. Well, NASA would certainly be more efficient with fewer centers—each of which has its own management layers, human resources setups, and other extensive overhead. But will the Trump administration really have the stomach to close centers? Certainly the congressional leadership from a state like Ohio would fight tooth and nail for Glenn Research Center. This offers an example of how bringing sweeping change to the US government in general, and NASA in particular, will run into the power of the purse held by Congress.
One tool NASA has used in recent years to increase efficiency is buying commercial services rather than leading the development of systems such as the Orion spacecraft. The most prominent example is cargo and crew transportation to the International Space Station, but NASA has extended this approach to all manner of areas, from space communications to lunar landers to privately operated space stations. Congress has not always been happy with this transition because it has lessened its influence over steering funding directly to centers. NASA has nonetheless continued to push for this change because it has lowered agency costs, allowing it to do more.
Yet here again we run into conflicts of interest with Musk. The primary reason that NASA’s transition toward buying services has been a success is due to SpaceX. Private companies not named SpaceX have struggled to compete as NASA awards more fixed-price contracts for space services. Given Congress’ love for directing space funds to local centers, it’s unlikely to let Musk overhaul the agency in ways that send huge amounts of new business to SpaceX.
Where art thou, Artemis?
The biggest question is what to do with the Artemis program to return humans to the Moon. Ars wrote extensively about some of the challenges with this program a little more than a month ago, and Michael Bloomberg, founder of Bloomberg News, wrote a scathing assessment of Artemis recently under the headline “NASA’s $100 billion Moon mission is going nowhere.”
It is unlikely that outright cancellation of Artemis is on the table—after all, the first Trump administration created Artemis six years ago. However, Musk is clearly focused on sending humans to Mars, and the Moon-first approach of Artemis was championed by former Vice President Mike Pence, who is long gone. Trump loves grand gestures, and Musk has told Trump it will be possible to send humans to Mars before the end of his term. (That would be 2028, and it’s almost impossible to see this happening for a lot of reasons.) The Artemis architecture was developed around a “Moon-then-Mars” philosophy—as in, NASA will send humans to the Moon now, with Mars missions pushed into a nebulous future. Whatever Artemis becomes, it is likely to at least put Mars on equal footing to the Moon.
Notably, Musk despises NASA’s Space Launch System rocket, a central element of Artemis. He sees the rocket as the epitome of government bloat. And it’s not hard to understand why. The Space Launch System is completely expendable and costs about 10 to 100 times as much to launch as his own massive Starship rocket.
The key function the SLS rocket and the Orion spacecraft currently provide in Artemis is transporting astronauts from Earth to lunar orbit and back. There are ways to address this. Trump could refocus Artemis on using Starship to get humans to Mars. Alternatively, he could direct NASA to kludge together some combination of Orion, Dragon, and Falcon rockets to get astronauts to the Moon. He might also direct NASA to keep flying the SLS for now but cancel further upgrades to it, along with the lunar space station known as Gateway.
“The real question is how far a NASA landing team and beachhead team are willing to go in destabilizing the program of record,” one policy source told Ars. “I can’t see Trump and Vance being less willing to shake up NASA than they are other public policy zones.”
What does seem clear is that, for the first time in 15 years, canceling the Space Launch System rocket or dramatically reducing its influence is on the table. This will be an acid test for Musk and Trump’s rhetoric on government efficiency, since the base of support for Artemis is in the deep-red South: states like Alabama, Mississippi, Louisiana, and Florida.
Will they really cut jobs there in the name of efficiency?
Regulatory reform
Reducing government regulations is one area in which the pathway for Musk and Trump is clear. The first Trump administration pushed to reduce regulations on US businesses almost from day one. In spaceflight, this produced Space Policy Directive-2 in 2018. Some progress was made, but it was far from total.
For spaceflight, Musk’s goal is to get faster approval for Starship test flights and licensing for the (literally) hundreds of launches SpaceX is already conducting annually. This will be broadly supported by the second Trump administration. During Trump’s first term, some of the initiatives in Space Policy Directive-2 were slowed or blocked by the Federal Aviation Administration and NASA, but the White House will push even harder this time.
A looser regulatory environment should theoretically lead to more, and faster, progress in commercial space capabilities.
It’s worth noting here that if you spend any time talking to space startup executives, they all have horror stories about interacting with the FAA or other agencies. Pretty much everyone agrees that regulators could be more efficient but also that they need more resources to process rules in a timely manner. The FAA and Federal Communications Commission have important jobs when it comes to keeping people on the ground safe and keeping orbits sustainable in terms of traffic and space junk.
The second Trump administration will have some important allies on this issue in Congress. Ted Cruz, the US Senator from Texas, will likely chair the Senate Committee on Commerce, Science, and Transportation, which oversees legislation for space activities. He is one of the senators who has shown the most interest in commercial space, and he will support pro-business legislation—that is, laws that allow companies freer rein and regulatory agencies fewer teeth. How far this gets will depend on whether Republicans keep the House or Democrats take control.
Other areas of change
Over the course of the last seven decades, space has largely been a non-partisan topic.
But Musk’s deepening involvement in US space policy could pose a serious problem for that bipartisan consensus, as he’s now viewed extremely negatively by many Democrats. It seems probable that many people in Congress will oppose any significant shift of NASA’s focus from the Moon to Mars, particularly because such a shift aligns with Musk’s long-stated goal of making humans a multiplanetary species.
There are likely to be battles in space science, as well. Traditionally, Republican presidents have cut funding for Earth science missions, and Democrats have increased funding to better study and understand climate change. Generally, given the administration’s likely focus on human spaceflight, space science will probably take a back seat and may lose funding.
Another looming issue is Mars Sample Return, which NASA is reconsidering due to budget and schedule issues. Presently, the agency intends to announce a new plan for retrieving rock and soil samples from Mars and returning them to Earth in December.
But if Musk and Trump are bent on sending humans to Mars as soon as possible, there is little sense in the space agency spending billions of dollars on a robotic sample return mission. Astronauts can just bring them back inside Starship.
Finally, at present, NASA has rich partnerships with space agencies around the world. In fact, it was the first Trump administration that created the Artemis Accords a little more than four years ago to develop an international coalition to return to the Moon. Since then, the United States and China have both been signing up partners in their competition to establish a presence at the South Pole of the Moon.
One huge uncertainty is how some of NASA’s long-established partners, especially in Europe, will react to the US space agency’s exploration plans, given the tension that is bound to arise with the Trump administration over Ukraine and other issues. Europeans are already wary of SpaceX’s prowess in global spaceflight and likely will not want to be on board with any space activities that further Musk’s ambitions.
These are just some of the high-level questions facing NASA and US spaceflight. There are many others. For example, how will Trump’s proposed tariffs on key components impact the national security and civil space supply chain? And then there’s the Department of Defense, where the military already has multibillion-dollar contracts with SpaceX and where similar conflicts and ethical concerns are bound to arise.
No one can hear you scream in space, but there will be plenty of screaming about space in the coming months.
Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.