Features

SpaceX just stomped the competition for a new contract—that’s not great

A rocket sits on a launch pad during a purple- and gold-streaked dawn.

With Dragon and Falcon, SpaceX has become an essential contractor for NASA.

SpaceX

There is an emerging truth about NASA’s push toward commercial contracts that is increasingly difficult to escape: Companies not named SpaceX are struggling with NASA’s approach of awarding firm, fixed-price contracts for space services.

This belief is underscored by the recent award of an $843 million contract to SpaceX for a heavily modified Dragon spacecraft that will be used to deorbit the International Space Station by 2030.

The recently released source selection statement for the “US Deorbit Vehicle” contract, a process led by NASA head of space operations Ken Bowersox, reveals that the competition was a total stomp. SpaceX faced just a single serious competitor in this process, Northrop Grumman. And in all three categories—price, mission suitability, and past performance—SpaceX significantly outclassed Northrop.

Although it’s wonderful that NASA has an excellent contractor in SpaceX, it’s not healthy in the long term that there are so few credible competitors. Moreover, a careful reading of the source selection statement reveals that NASA had to really work to get a competition at all.

“I was really happy that we got proposals from the companies that we did,” Bowersox said during a media teleconference last week. “The companies that sent us proposals are both great companies, and it was awesome to see that interest. I would have expected a few more [proposals], honestly, but I was very happy to get the ones that we got.”

Commercial initiatives struggling

NASA’s push into “commercial” space began nearly two decades ago with a program to deliver cargo to the International Space Station. The space agency initially selected SpaceX and Rocketplane Kistler to develop rockets and spacecraft to accomplish this, but after Kistler missed milestones, it was replaced by Orbital Sciences Corporation. The cargo delivery program was largely successful, resulting in the Cargo Dragon (SpaceX) and Cygnus (Orbital Sciences) spacecraft, and it continues to this day.

A commercial approach generally means that NASA pays a “fixed” price for a service rather than paying a contractor’s costs plus a fee. It also means that NASA hopes to become one of many customers. The idea is that, as the first mover, NASA is helping to stimulate a market by which its fixed-priced contractors can also sell their services to other entities—both private companies and other space agencies.

NASA has since extended this commercial approach to crew, with SpaceX and Boeing winning large contracts in 2014. However, only SpaceX has flown operational astronaut missions, while Boeing remains in the development and test phase, with its ongoing Crew Flight Test. Whereas SpaceX has sold half a dozen private crewed missions on Dragon, Boeing has yet to announce any.

Such a commercial approach has also been tried with lunar cargo delivery through the “Commercial Lunar Payload Services” program, as well as larger lunar landers (Human Landing System), next-generation spacesuits, and commercial space stations. Each of these programs has a mixed record at best. For example, NASA’s inspector general was highly critical of the lunar cargo program in a recent report, and one of the two spacesuit contractors, Collins Aerospace, recently dropped out because it could not execute on its fixed-price contract.

Some of NASA’s most important traditional space contractors, including Lockheed Martin, Boeing, and Northrop Grumman, have all said they are reconsidering whether to participate in fixed-price contract competitions in the future. For example, Northrop CEO Kathy Warden said last August, “We are being even more disciplined moving forward in ensuring that we work with the government to have the appropriate use of fixed-price contracts.”

So the large traditional space contractors don’t like fixed-price contracts, and many new space companies are struggling to survive in this environment.

We’re building nuclear spaceships again—this time for real 

Artist concept of the Demonstration for Rocket to Agile Cislunar Operations (DRACO) spacecraft.

DARPA

Phoebus 2A, the most powerful space nuclear reactor ever made, was fired up at the Nevada Test Site on June 26, 1968. The test lasted 750 seconds and confirmed that such an engine could carry the first humans to Mars. But Phoebus 2A did not take anyone to Mars. It was too large, it cost too much, and it didn’t mesh with Nixon’s idea that we had no business going anywhere beyond low-Earth orbit.

But it wasn’t NASA that first called for rockets with nuclear engines. It was the military that wanted to use them for intercontinental ballistic missiles. And now, the military wants them again.

Nuclear-powered ICBMs

The work on nuclear thermal rockets (NTRs) started with the Rover program initiated by the US Air Force in the mid-1950s. The concept was simple on paper. Take tanks of liquid hydrogen and use turbopumps to feed this hydrogen through a nuclear reactor core to heat it up to very high temperatures and expel it through the nozzle to generate thrust. Instead of causing the gas to heat and expand by burning it in a combustion chamber, the gas was heated by coming into contact with a nuclear reactor.

The key advantage was fuel efficiency. “Specific impulse,” a measurement that’s something like the gas mileage of a rocket, scales with the square root of the exhaust gas temperature divided by the molecular weight of the propellant. This meant the most efficient propellant for rockets was hydrogen, because it had the lowest molecular weight.

In chemical rockets, hydrogen had to be mixed with an oxidizer, which increased the total molecular weight of the propellant but was necessary for combustion to happen. Nuclear rockets didn’t need combustion and could work with pure hydrogen, which made them at least twice as efficient. The Air Force wanted to efficiently deliver nuclear warheads to targets around the world.
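
To make that scaling concrete, here is the idealized relationship the two paragraphs above describe, written out as a rough LaTeX sketch. The proportionality hides nozzle geometry and gas-property constants, and the side-by-side comparison assumes equal chamber temperatures, so treat it as a back-of-envelope illustration rather than an engineering formula.

```latex
% Idealized thermal-rocket scaling: exhaust velocity (and thus specific
% impulse) grows with chamber temperature T and shrinks with the
% propellant's molecular weight M.
\[
  I_{\mathrm{sp}} \;\propto\; v_e \;\sim\; \sqrt{\frac{T}{M}}
\]

% Rough comparison at an assumed equal chamber temperature: pure hydrogen
% exhaust (M ~ 2 g/mol) versus the mostly-steam exhaust of a hydrogen/oxygen
% chemical engine (M ~ 18 g/mol).
\[
  \frac{I_{\mathrm{sp,\,NTR}}}{I_{\mathrm{sp,\,chemical}}}
  \;\sim\; \sqrt{\frac{18}{2}} \;=\; 3
\]
```

In practice, chemical engines burn hotter than a nuclear core can safely run, and real nozzles are imperfect, which pulls that ideal factor of three down toward the "at least twice as efficient" figure cited above.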

The problem was that running stationary reactors on Earth was one thing; making them fly was quite another.

Space reactor challenge

Fuel rods made with uranium 235 oxide distributed in a metal or ceramic matrix comprise the core of a standard fission reactor. Fission happens when a slow-moving neutron is absorbed by a uranium 235 nucleus and splits it into two lighter nuclei, releasing huge amounts of energy and excess, very fast neutrons. These excess neutrons normally don’t trigger further fissions, as they move too fast to get absorbed by other uranium nuclei.

Starting a chain reaction that keeps the reactor going depends on slowing them down with a moderator, like water, which “moderates” their speed. The reaction is then kept in check using control rods made of neutron-absorbing materials, usually boron or cadmium, that limit the number of neutrons able to trigger fission. Reactors are dialed up or down by moving the control rods in and out of the core.
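
For readers who want to pin the idea down a bit more, the standard textbook way to express what the moderator and control rods are doing is the effective multiplication factor; this is general reactor physics rather than anything specific to the article's sources.

```latex
% k_eff: average number of fission-inducing neutrons in one generation
% per fission-inducing neutron in the generation before it.
\[
  k_{\mathrm{eff}} \;=\;
  \frac{\text{neutrons causing fission in generation } n+1}
       {\text{neutrons causing fission in generation } n}
\]
% k_eff < 1  -- chain reaction dies out (subcritical)
% k_eff = 1  -- steady power output (critical), normal operation
% k_eff > 1  -- power climbs (supercritical), used briefly to raise power
```

The moderator nudges this factor upward by slowing neutrons until uranium-235 can absorb them; pushing the control rods in pulls it back down by soaking up neutrons before they can cause fission.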

Translating any of this to a flying reactor is a challenge. The first problem is the fuel. The hotter you make the exhaust gas, the more you increase specific impulse, so NTRs needed the core to operate at temperatures reaching 3,000 K—nearly 1,800 K higher than ground-based reactors. Manufacturing fuel rods that could survive such temperatures proved extremely difficult.

Then there was the hydrogen itself, which is extremely corrosive at these temperatures, especially when interacting with those few materials that are stable at 3,000 K. Finally, standard control rods had to go, too, because on the ground, they were gravitationally dropped into the core, and that wouldn’t work in flight.

Los Alamos Scientific Laboratory proposed a few promising NTR designs that addressed all these issues in 1955 and 1956, but the program really picked up pace after it was transferred to NASA and the Atomic Energy Commission (AEC) in 1958. There, the idea was rebranded as NERVA, the Nuclear Engine for Rocket Vehicle Applications. NASA and the AEC, blessed with a nearly unlimited budget, got busy building space reactors—lots of them.

Gazelle Eclipse C380+ e-bike review: A smart, smooth ride at a halting price

Gazelle Eclipse C380+ HMB review —

It’s a powerful, comfortable, fun, and very smart ride. Is that enough?

Gazelle Eclipse C380+ in front of a railing, overlooking a river crosswalk in Navy Yard, Washington, D.C.

Kevin Purdy

Let me get three negative points about the Gazelle Eclipse out of the way first. First, it’s a 62-pound e-bike, so it’s tough to get moving without its battery. Second, its rack is a thick, non-standard size, so you might need new bags for it. Third—and this is the big one—with its $6,000 suggested retail price, it’s expensive, and you will probably feel nervous about locking it anywhere you don’t completely trust.

Apart from those issues, though, this e-bike is great fun. When I rode the Eclipse (the C380+ HMB version of it), I felt like Batman on a day off, or maybe Bruce Wayne doing reconnaissance as a bike enthusiast. The matte gray color, the black hardware, and the understated but impressively advanced tech certainly helped. But I felt prepared to handle anything that was thrown at me without having to think about it much. Brutally steep hills, poorly maintained gravel paths, curbs, stop lights, or friends trying to outrun me on their light road bikes—the Eclipse was ready.

It assists up to 28 miles per hour (i.e., Class 3) and provides up to 85 Nm of torque, and the front suspension absorbs shocks without shaking your grip confidence. It has integrated lights, the display can show you navigation while your phone is tucked away, and the automatic assist changing option balances your mechanical and battery levels, leaving you to just pedal and look.

  • The little shifter guy, who will take a few rides to get used to, is either really clever or overthinking it.

    Kevin Purdy

  • The Bosch Kiox 300 is the only screen I’ve had on an e-bike that I ever put time into customizing and optimizing.

    Kevin Purdy

  • The drivetrain on the C380+ is a remarkable thing, and it’s well-hidden inside matte aluminum.

    Kevin Purdy

  • The shocks on the Eclipse are well-tuned for rough roads, if not actual mountains. (The author is aware the headlamp was at an angle in this shot).

    Kevin Purdy

  • The electric assist changer on the left handlebar, and the little built-in bell that you always end up replacing on new e-bikes for something much louder.

    Kevin Purdy

What kind of bike is this? A fun one.

The Eclipse comes in two main variants: the 11-speed, chain-and-derailleur T11+ HMB and the C380+ HMB, which pairs a stepless Enviolo hub with a Gates Carbon belt drive. Both come in three sizes (45, 50, and 55 cm), in one of two colors (Anthracite Grey for either model, plus Thyme Green for the T11+ or Metallic Orange for the C380+), and with either a low-step or high-step version, the latter with a sloping top bar. Most e-bikes come in two sizes if you’re lucky, typically “Medium” and “Large,” and their suggested height spans are far too generous. The T11+ starts at $5,500, and the C380+ starts at $6,000.

The Eclipse’s posture is an “active” one, seemingly halfway between the upright Dutch style and a traditional road or flat-bar bike. It’s perfect for this kind of ride. The front shocks have a maximum of 75 mm of travel, which won’t impress your buddies riding real trails but will make gravel, dirt, wooden bridges, and woodland trails fair game. Everything about the Eclipse tells you to stop worrying about whether you have the right kind of bike for a ride and just start pedaling.

“But I’m really into exercise riding, and I need lots of metrics and data, during and after the ride,” I hear some of you straw people saying. That’s why the Eclipse has the Bosch Kiox 300, a center display that is, for an e-bike, remarkably readable, navigable, and informative. You can see your max and average speed, distance, which assist levels you spent time in, power output, cadence, and more. You can push navigation directions from Komoot or standard map apps from your phone to the display, using Bosch’s Flow app. And, of course, you can connect to Strava.

Halfway between maximum efficiency and careless joyriding, the Eclipse offers a feature that I can only hope makes it down to cheaper e-bikes over time: automatic assist changing. Bikes that have both gears and motor assist levels can sometimes leave you guessing as to which one you should change when approaching a hill or starting from a dead stop. Set the Eclipse to automatic assist and you only have to worry about the right-hand grip shifter. There are no gear numbers; there is a little guy on a bike, and as you raise or lower the gearing, the road he’s approaching gets steeper or flatter.

Peer review is essential for science. Unfortunately, it’s broken.

Aurich Lawson | Getty Images

Rescuing Science: Restoring Trust in an Age of Doubt was the most difficult book I’ve ever written. I’m a cosmologist—I study the origins, structure, and evolution of the Universe. I love science. I live and breathe science. If science were a breakfast cereal, I’d eat it every morning. And at the height of the COVID-19 pandemic, I watched in alarm as public trust in science disintegrated.

But I don’t know how to change people’s minds. I don’t know how to convince someone to trust science again. So as I started writing my book, I flipped the question around: is there anything we can do to make the institution of science more worthy of trust?

The short answer is yes. The long answer takes an entire book. In the book, I explore several different sources of mistrust—the disincentives scientists face when they try to communicate with the public, the lack of long-term careers, the complicity of scientists when their work is politicized, and much more—and offer proactive steps we can take to address these issues to rebuild trust.

The section below is taken from a chapter discussing the relentless pressure to publish that scientists face, and the corresponding explosion in fraud that this pressure creates. Fraud can take many forms, from the “hard fraud” of outright fabrication of data, to many kinds of “soft fraud” that include plagiarism, manipulation of data, and careful selection of methods to achieve a desired result. The more that fraud thrives, the more that the public loses trust in science. Addressing this requires a fundamental shift in the incentive and reward structures that scientists work in. A difficult task to be sure, but not an impossible one—and one that I firmly believe will be worth the effort.

Modern science is hard, complex, and built from many layers and many years of hard work. And modern science, almost everywhere, is based on computation. Save for a few (and I mean very few) die-hard theorists who insist on writing things down with pen and paper, it is all but guaranteed that a computer was involved in some step of the process behind any paper, in any field of science, that you could possibly read.

Whether it’s studying bird droppings or the collisions of galaxies, modern-day science owes its very existence—and continued persistence—to the computer. From the laptop sitting on an unkempt desk to a giant machine that fills up a room, “S. Transistor” should be the coauthor on basically all three million journal articles published every year.

The sheer complexity of modern science, and its reliance on customized software, renders one of the frontline defenses against soft and hard fraud useless. That defense is peer review.

The practice of peer review was developed in a different era, when the arguments and analysis that led to a paper’s conclusion could be succinctly summarized within the paper itself. Want to know how the author arrived at that conclusion? The derivation would be right there. It was relatively easy to judge the “wrongness” of an article because you could follow the document from beginning to end and have all the information you needed to evaluate it right there at your fingertips.

That’s now largely impossible with the modern scientific enterprise so reliant on computers.

To make matters worse, many of the software codes used in science are not publicly available. I’ll say this again because it’s kind of wild to even contemplate: there are millions of papers published every year that rely on computer software to make the results happen, and that software is not available for other scientists to scrutinize to see if it’s legit or not. We simply have to trust it, but the word “trust” is very near the bottom of the scientist’s priority list.

Why don’t scientists make their code available? It boils down to the same reason that scientists don’t do many things that would improve the process of science: there’s no incentive. In this case, you don’t get any h-index points for releasing your code on a website. You only get them for publishing papers.

This infinitely agitates me when I peer-review papers. How am I supposed to judge the correctness of an article if I can’t see the entire process? What’s the point of searching for fraud when the computer code that’s sitting behind the published result can be shaped and molded to give any result you want, and nobody will be the wiser?

I’m not even talking about intentional computer-based fraud here; this is even a problem for detecting basic mistakes. If you make a mistake in a paper, a referee or an editor can spot it. And science is better off for it. If you make a mistake in your code… who checks it? As long as the results look correct, you’ll go ahead and publish it and the peer reviewer will go ahead and accept it. And science is worse off for it.

Science is getting more complex over time and is becoming increasingly reliant on software code to keep the engine going. This makes fraud of both the hard and soft varieties easier to accomplish. From mistakes that you pass over because you’re going too fast, to using sophisticated tools that you barely understand but use to get the result that you wanted, to just totally faking it, science is becoming increasingly wrong.

The Yellowstone supervolcano destroyed an ecosystem but saved it for us

Set in stone —

50 years of excavation unveiled the story of a catastrophic event and its aftermath.

Interior view of the Rhino Barn. Exposed fossil skeletons left in-situ for research and public viewing.

Rick E. Otto, University of Nebraska State Museum

Death was everywhere. Animal corpses littered the landscape and were mired in the local waterhole as ash swept around everything in its path. For some, death happened quickly; for others, it was slow and painful.

This was the scene in the aftermath of a supervolcanic eruption in Idaho, approximately 1,600 kilometers (roughly 1,000 miles) away. It was an eruption so powerful that it obliterated the volcano itself, leaving a crater 80 kilometers (50 miles) wide and spewing clouds of ash that the wind carried over long distances, killing almost everything that inhaled it. This was particularly true here, in this location in Nebraska, where animals large and small succumbed to the eruption’s deadly emissions.

Eventually, all traces of this horrific event were buried; life continued, evolved, and changed. That’s why, millions of years later in the summer of 1971, Michael Voorhies was able to enjoy another delightful day of exploring.

Finding rhinos

He was, as he had been each summer between academic years, creating a geologic map of his hometown in Nebraska. This meant going from farm to farm and asking if he could walk through the property to survey the rocks and look for fossils. “I’m basically just a kid at heart, and being a paleontologist in the summer was my idea of heaven,” Voorhies, now retired from the University of Georgia, told Ars.

What caught his eye on one particular farm was a layer of volcanic ash—something treasured by geologists and paleontologists, who use it to get the age of deposits. But as he got closer, he also noticed exposed bone. “Finding what was obviously a lower jaw which was still attached to the skull, now that was really quite interesting!” he said. “Mostly what you find are isolated bones and teeth.”

That skull belonged to a juvenile rhino. Voorhies and some of his students returned to the site to dig further, uncovering the rest of the rhino’s completely articulated remains (meaning the bones of its skeleton were connected as they would be in life). More digging produced the intact skeletons of another five or six rhinos. That was enough to get National Geographic funding for a massive excavation that took place between 1978 and 1979. Crews amassed, among numerous other animals, the remarkable total of 70 complete rhino skeletons.

To put this into perspective, most fossil sites—even spectacular locations preserving multiple animals—are composed primarily of disarticulated skeletons, puzzle pieces that paleontologists painstakingly put back together. Here, however, was something no other site had ever before produced: vast numbers of complete skeletons preserved where they died.

Realizing there was still more yet to uncover, Voorhies and others appealed to the larger Nebraska community to help preserve the area. Thanks to hard work and substantial local donations, the Ashfall Fossil Beds park opened to the public in 1991, staffed by two full-time employees.

Fossils discovered are now left in situ, meaning they remain exposed exactly where they are found, protected by a massive structure called the Hubbard Rhino Barn. Excavations are conducted within the barn at a much slower and steadier pace than those in the ’70s due in large part to the small, rotating number of seasonal employees—mostly college students—who excavate further each summer.

The Rhino Barn protects the fossil bed from the elements.

Photos by Rick E. Otto, University of Nebraska State Museum

A full ecosystem

Almost 50 years of excavation and research have unveiled the story of a catastrophic event and its aftermath, which took place in a Nebraska that nobody would recognize—one where species like rhinos, camels, and saber-toothed deer were a common sight.

But to understand that story, we have to set the stage. The area we know today as Ashfall Fossil Beds was actually a waterhole during the Miocene, one frequented by a diversity of animals. We know this because there are fossils of those animals in a layer of sand at the very bottom of the waterhole, a layer that was not impacted by the supervolcanic eruption.

Rick Otto was one of the students who excavated fossils in 1978. He became Ashfall’s superintendent in 1991 and retired in late 2023. “There were animals dying a natural death around the Ashfall waterhole before the volcanic ash storm took place,” Otto told Ars, which explains the fossils found in that sand. After being scavenged, their bodies may have been trampled by some of the megafauna visiting the waterhole, which would have “worked those bones into the sand.”

Tool preventing AI mimicry cracked; artists wonder what’s next

Aurich Lawson | Getty Images

For many artists, it’s a precarious time to post art online. AI image generators keep getting better at cheaply replicating a wider range of unique styles, and basically every popular platform is rushing to update user terms to seize permissions to scrape as much data as possible for AI training.

Defenses against AI training exist—like Glaze, a tool that adds a small amount of imperceptible-to-humans noise to images to stop image generators from copying artists’ styles. But they don’t provide a permanent solution at a time when tech companies appear determined to chase profits by building ever-more-sophisticated AI models that increasingly threaten to dilute artists’ brands and replace them in the market.

In one high-profile example just last month, the estate of Ansel Adams condemned Adobe for selling AI-generated images that imitated the famous photographer’s style, Smithsonian reported. Adobe quickly responded and removed the AI copycats. But it’s not just famous artists who risk being ripped off, and lesser-known artists may struggle to prove AI models are referencing their works. In this largely lawless world, every image uploaded risks contributing to an artist’s downfall, potentially watering down demand for their own work each time they promote new pieces online.

Unsurprisingly, artists have increasingly sought protections to diminish or dodge these AI risks. As tech companies update their products’ terms—like when Meta suddenly announced that it was training AI on a billion Facebook and Instagram user photos last December—artists frantically survey the landscape for new defenses. That’s why The Glaze Project, one of the few sources of AI protections available to artists today, recently reported a dramatic surge in requests for its free tools.

Designed to help prevent style mimicry and even poison AI models to discourage data scraping without an artist’s consent or compensation, The Glaze Project’s tools are now in higher demand than ever. University of Chicago professor Ben Zhao, who created the tools, told Ars that the backlog for approving a “skyrocketing” number of requests for access is “bad.” And as he recently posted on X (formerly Twitter), an “explosion in demand” in June is only likely to be sustained as AI threats continue to evolve. For the foreseeable future, that means artists searching for protections against AI will have to wait.

Even if Zhao’s team did nothing but approve requests for WebGlaze, its invite-only web-based version of Glaze, “we probably still won’t keep up,” Zhao said. He’s warned artists on X to expect delays.

Compounding artists’ struggles, at the same time as demand for Glaze is spiking, the tool has come under attack by security researchers who claimed it was not only possible but easy to bypass Glaze’s protections. For security researchers and some artists, this attack calls into question whether Glaze can truly protect artists in these embattled times. But for thousands of artists joining the Glaze queue, the long-term future looks so bleak that any promise of protections against mimicry seems worth the wait.

Attack cracking Glaze sparks debate

Millions have downloaded Glaze already, and many artists are waiting weeks or even months for access to WebGlaze, mostly submitting requests for invites on social media. The Glaze Project vets every request to verify that each user is human and ensure bad actors don’t abuse the tools, so the process can take a while.

The team is currently struggling to approve hundreds of requests submitted daily through direct messages on Instagram and Twitter in the order they are received, and artists requesting access must be patient through prolonged delays. Because these platforms’ inboxes aren’t designed to sort messages easily, any artist who follows up on a request gets bumped to the back of the line—as their message bounces to the top of the inbox and Zhao’s team, largely volunteers, continues approving requests from the bottom up.

“This is obviously a problem,” Zhao wrote on X while discouraging artists from sending any follow-ups unless they’ve already gotten an invite. “We might have to change the way we do invites and rethink the future of WebGlaze to keep it sustainable enough to support a large and growing user base.”

Glaze interest is likely also spiking due to word of mouth. Reid Southen, a freelance concept artist for major movies, is advocating for all artists to use Glaze. Reid told Ars that WebGlaze is especially “nice” because it’s “available for free for people who don’t have the GPU power to run the program on their home machine.”

Surface Pro 11 and Laptop 7 review: An Apple Silicon moment for Windows

Microsoft's Surface Pro 11, the first flagship Surface to ship exclusively using Arm processors.

Andrew Cunningham

Microsoft has been trying to make Windows-on-Arm-processors a thing for so long that, at some point, I think I just started assuming it was never actually going to happen.

The first effort was Windows RT, which managed to run well enough on the piddly Arm hardware available at the time but came with a perplexing new interface and couldn’t run any apps designed for regular Intel- and AMD-based Windows PCs. Windows RT failed, partly because a version of Windows that couldn’t run Windows apps and didn’t use a familiar Windows interface was ignoring two big reasons why people keep using Windows.

Windows-on-Arm came back in the late 2010s, with better performance and a translation layer for 32-bit Intel apps in tow. This version of Windows, confined mostly to oddball Surface hardware and a handful of barely promoted models from the big PC OEMs, has quietly percolated for years. It has improved slowly and gradually, as have the Qualcomm processors that have powered these devices.

That brings us to this year’s flagship Microsoft Surface hardware: the 7th-edition Surface Laptop and the 11th-edition Surface Pro.

These devices are Microsoft’s first mainstream, flagship Surface devices to use Arm chips, whereas previous efforts have been side projects or non-default variants. Both hardware and software have improved enough that I finally feel I could recommend a Windows-on-Arm device to a lot of people without having to preface it with a bunch of exceptions.

Unfortunately, Microsoft has chosen to launch this impressive and capable Arm hardware and improved software alongside a bunch of generative AI features, including the Recall screen recorder, a feature that became so radioactively unpopular so quickly that Microsoft was forced to delay it to address major security problems (and perception problems stemming from the security problems).

The remaining AI features are so superfluous that I’ll ignore them in this review and cover them later on when we look closer at Windows 11’s 24H2 update. This is hardware that is good enough that it doesn’t need buzzy AI features to sell it. Windows on Arm continues to present difficulties, but the new Surface Pro and Surface Laptop—and many of the other Arm-based Copilot+ PCs that have launched in the last couple of weeks—are a whole lot better than Arm PCs were even a year or two ago.

Familiar on the outside

The Surface Laptop 7 (left) and Surface Pro 11 (right) are either similar or identical to their Intel-powered predecessors on the outside.

Andrew Cunningham

When Apple released the first couple of Apple Silicon Macs back in late 2020, the one thing the company pointedly did not change was the exterior design. Apple didn’t comment much on it at the time, but the subliminal message was that these were just Macs, they looked the same as other Macs, and there was nothing to worry about.

Microsoft’s new flagship Surface hardware, powered exclusively by Arm-based chips for the first time rather than a mix of Arm and Intel/AMD, takes a similar approach: inwardly overhauled, externally unremarkable. These are very similar to the last (and the current) Intel-powered Surface Pro and Surface Laptop designs, and in the case of the Surface Pro, they actually look identical.

Both PCs still include some of the defining elements of Surface hardware designs. Both have screens with 3:2 aspect ratios that make them taller than most typical laptop displays, which still use 16:10 or 16:9 aspect ratios. Those screens also support touch input via fingers or the Surface Pen, and they still use gently rounded corners (which Windows doesn’t formally recognize in-software, so the corners of your windows will get cut off, not that it has ever been a problem for me).

30 years later, FreeDOS is still keeping the dream of the command prompt alive

Preparing to install the floppy disk edition of FreeDOS 1.3 in a virtual machine.

Andrew Cunningham

Two big things happened in the world of text-based disk operating systems in June 1994.

The first is that Microsoft released MS-DOS version 6.22, the last version of its long-running operating system that would be sold to consumers as a standalone product. MS-DOS would continue to evolve for a few years after this, but only as an increasingly invisible loading mechanism for Windows.

The second was that a developer named Jim Hall wrote a post announcing something called “PD-DOS.” Unhappy with Windows 3.x and unexcited by the project we would come to know as Windows 95, Hall wanted to break ground on a new “public domain” version of DOS that could keep the traditional command-line interface alive as most of the world left it behind for more user-friendly but resource-intensive graphical user interfaces.

PD-DOS would soon be renamed FreeDOS, and 30 years and many contributions later, it stands as the last MS-DOS-compatible operating system still under active development.

While it’s not really usable as a standalone modern operating system in the Internet age—among other things, DOS is not really innately aware of “the Internet” as a concept—FreeDOS still has an important place in today’s computing firmament. It’s there for people who need to run legacy applications on modern systems, whether it’s running inside of a virtual machine or directly on the hardware; it’s also the best way to get an actively maintained DOS offshoot running on legacy hardware going as far back as the original IBM PC and its Intel 8088 CPU.

To mark FreeDOS’ 20th anniversary in 2014, we talked with Hall and other FreeDOS maintainers about its continued relevance, the legacy of DOS, and the developers’ since-abandoned plans to add ambitious modern features like multitasking and built-in networking support (we also tried, earnestly but with mixed success, to do a modern day’s work using only FreeDOS). The world of MS-DOS-compatible operating systems moves slowly enough that most of this information is still relevant; FreeDOS was at version 1.1 back in 2014, and it’s on version 1.3 now.

For the 30th anniversary, we’ve checked in with Hall again about how the last decade or so has treated the FreeDOS project, why it’s still important, and how it continues to draw new users into the fold. We also talked, strange as it might seem, about what the future might hold for this inherently backward-looking operating system.

FreeDOS is still kicking, even as hardware evolves beyond it

Running AsEasyAs, a Lotus 1-2-3-compatible spreadsheet program, in FreeDOS.

Jim Hall

If the last decade hasn’t ushered in The Year of FreeDOS On The Desktop, Hall says that interest in and usage of the operating system has stayed fairly level since 2014. The difference is that, as time has gone on, more users are encountering FreeDOS as their first DOS-compatible operating system, not as an updated take on Microsoft and IBM’s dusty old ’80s- and ’90s-era software.

“Compared to about 10 years ago, I’d say the interest level in FreeDOS is about the same,” Hall told Ars in an email interview. “Our developer community has remained about the same over that time, I think. And judging by the emails that people send me to ask questions, or the new folks I see asking questions on our freedos-user or freedos-devel email lists, or the people talking about FreeDOS on the Facebook group and other forums, I’d say there are still about the same number of people who are participating in the FreeDOS community in some way.”

“I get a lot of questions around September and October from people who ask, basically, ‘I installed FreeDOS, but I don’t know how to use it. What do I do?’ And I think these people learned about FreeDOS in a university computer science course and wanted to learn more about it—or maybe they are already working somewhere and they read an article about it, never heard of this ‘DOS’ thing before, and wanted to try it out. Either way, I think more folks in the user community are learning about ‘DOS’ at the same time they are learning about FreeDOS.”

The world’s toughest race starts Saturday, and it’s delightfully hard to call this year

Is it Saturday yet? —

Setting the stage for what could be a wild ride across France.

The peloton passing through a sunflower field during stage eight of the 110th Tour de France in 2023.

David Ramos/Getty Images

Most readers probably did not anticipate seeing a Tour de France preview on Ars Technica, but here we are. Cycling is a huge passion of mine and several other staffers, and this year, a ton of intrigue surrounds the race, which has a fantastic route. So we’re here to spread Tour fever.

The three-week race starts Saturday, paradoxically in the Italian region of Tuscany. Usually, there is a dominant rider, or at most two, and a clear sense of who is likely to win the demanding race. But this year, due to rider schedules, a terrible crash in early April, and new contenders, there is more uncertainty than usual. A solid case could be made for at least four riders to win this year’s Tour de France.

For people who aren’t fans of pro road cycling—which has to be at least 99 percent of the United States—there’s a great series on Netflix called Unchained to help get you up to speed. The second season, just released, covers last year’s Tour de France and introduces you to most of the protagonists in the forthcoming edition. If this article sparks your interest, I recommend checking it out.

Anyway, for those who are cycling curious, I want to set the stage for this year’s race by saying a little bit about the four main contenders, from most likely to least likely to win, and provide some of the backstory to what could very well be a dramatic race this year.

Tadej Pogačar

Tadej Pogacar of Slovenia and UAE Team Emirates won the Giro d'Italia in May.

Tim de Waele/Getty Images

  • Slovenia
  • 25 years old
  • UAE Team Emirates
  • Odds: -190

Pogačar burst onto the scene in 2019 at the very young age of 20 by finishing third in the Vuelta a España, one of the three grand tours of cycling. He then went on to win the 2020 and 2021 Tours de France, first by surprising fellow countryman Primož Roglič (more on him below) in 2020 and then utterly dominating in 2021. Given his youth, it seemed he would be the premier grand tour competitor for the next decade.

But then another slightly older rider, a teammate of Roglič’s named Jonas Vingegaard, emerged in 2022 and won the next two races. Last year, in fact, Vingegaard cracked Pogačar by 7 minutes and 29 seconds in the Tour, a huge winning margin, especially for two riders of relatively close talent. This established Vingegaard as the alpha male of grand tour cyclists, having proven himself a better climber and time trialist than Pogačar, especially in the highest and hardest stages.

So this year, Pogačar decided to change up his strategy. Instead of focusing on the Tour de France, Pogačar participated in the first grand tour of the season, the Giro d’Italia, which occurred in May. He likely did so for a couple of reasons. First of all, he almost certainly received a generous appearance fee from the Italian organizers. And secondly, riding the Giro would give him a ready excuse for not beating Vingegaard in France.

Why is this? Because there are just five weeks between the end of the Giro and the start of the Tour. So if a rider peaks for the Giro and exerts himself in winning the race, it is generally thought that he can’t arrive at the Tour in winning form. He will be a few percent off, not having ideal preparation.

Predictably, Pogačar smashed the lesser competition at the Giro and won the race by 9 minutes and 56 seconds. Because he was so far ahead, he was able to take the final week of the race a bit easier. The general thinking in the cycling community is that Pogačar is arriving at the Tour in excellent but not peak form. But given everything else that has happened so far this season, the bettors believe that will be enough for him to win. Maybe.

T-Mobile users enraged as “Un-carrier” breaks promise to never raise prices

Illustration of T-Mobile customers protesting price hikes

Aurich Lawson

In 2017, Kathleen Odean thought she had found the last cell phone plan she would ever need. T-Mobile was offering a mobile service for people age 55 and over, with an “Un-contract” guarantee that it would never raise prices.

“I thought, wow, I can live out my days with this fixed plan,” Odean, a Rhode Island resident who is now 70 years old, told Ars last week. Odean and her husband switched from Verizon to get the T-Mobile deal, which cost $60 a month for two lines.

Despite its Un-contract promise, T-Mobile in May 2024 announced a price hike for customers like Odean who thought they had a lifetime price guarantee on plans such as T-Mobile One, Magenta, and Simple Choice. The $5-per-line price hike will raise her and her husband’s monthly bill from $60 to $70, Odean said.

As we’ve reported, T-Mobile’s January 2017 announcement of its “Un-contract” for T-Mobile One plans said that “T-Mobile One customers keep their price until THEY decide to change it. T-Mobile will never change the price you pay for your T-Mobile One plan. When you sign up for T-Mobile One, only YOU have the power to change the price you pay.”

T-Mobile contradicted that clear promise on a separate FAQ page, which said the only real guarantee was that T-Mobile would pay your final month’s bill if the company raised the price and you decided to cancel. Customers like Odean bitterly point to the press release that made the price guarantee without including the major caveat that essentially nullifies the promise.

“I gotta tell you, it really annoys me”

T-Mobile’s 2017 press release even blasted other carriers for allegedly being dishonest, saying that “customers are subjected to a steady barrage of ads for wireless deals—only to face bill shock and wonder what the hell happened when their Verizon or AT&T bill arrives.”

T-Mobile made the promise under the brash leadership of CEO John Legere, who called the company the “Un-carrier” and frequently insulted its larger rivals while pledging that T-Mobile would treat customers more fairly. Legere left T-Mobile in 2020 after the company completed a merger with Sprint in a deal that made T-Mobile one of three major nationwide carriers alongside AT&T and Verizon.

Then-CEO of T-Mobile John Legere at the company's Un-Carrier X event in Los Angeles on Tuesday, Nov. 10, 2015.

Getty Images | Bloomberg

After being notified of the price hike, Odean filed complaints with the Federal Communications Commission and the Rhode Island attorney general’s office. “I can afford it, but I gotta tell you, it really annoys me because the promise was so absolutely clear… It’s right there in writing: ‘T-Mobile will never change the price you pay for your T-Mobile One plan.’ It couldn’t be more clear,” she said.

Now, T-Mobile is “acting like, oh, well, we gave ourselves a way out,” Odean said. But the caveat that lets T-Mobile raise prices whenever it wants, “as far as I can tell, was never mentioned to the customers… I don’t care what they say in the FAQ,” she said.

Taking a closer look at AI’s supposed energy apocalypse

Someone just asked what it would look like if their girlfriend was a Smurf. Better add another rack of servers!

Getty Images

Late last week, both Bloomberg and The Washington Post published stories focused on the ostensibly disastrous impact artificial intelligence is having on the power grid and on efforts to collectively reduce our use of fossil fuels. The high-profile pieces lean heavily on recent projections from Goldman Sachs and the International Energy Agency (IEA) to cast AI’s “insatiable” demand for energy as an almost apocalyptic threat to our power infrastructure. The Post piece even cites anonymous “some [people]” in reporting that “some worry whether there will be enough electricity to meet [the power demands] from any source.”

Digging into the best available numbers and projections, though, it’s hard to see AI’s current and near-future environmental impact in such a dire light. While generative AI models and tools can and will use a significant amount of energy, we shouldn’t conflate AI energy usage with the larger and largely pre-existing energy usage of “data centers” as a whole. And just like any technology, whether that AI energy use is worthwhile depends largely on your wider opinion of the value of generative AI in the first place.

Not all data centers

While the headline focus of both Bloomberg and The Washington Post’s recent pieces is on artificial intelligence, the actual numbers and projections cited in both pieces overwhelmingly focus on the energy used by Internet “data centers” as a whole. Long before generative AI became the current Silicon Valley buzzword, those data centers were already growing immensely in size and energy usage, powering everything from Amazon Web Services servers to online gaming services, Zoom video calls, and cloud storage and retrieval for billions of documents and photos, to name just a few of the more common uses.

The Post story acknowledges that these “nondescript warehouses packed with racks of servers that power the modern Internet have been around for decades.” But in the very next sentence, the Post asserts that, today, data center energy use “is soaring because of AI.” Bloomberg asks one source directly “why data centers were suddenly sucking up so much power” and gets back a blunt answer: “It’s AI… It’s 10 to 15 times the amount of electricity.”

The massive growth in data center power usage mostly predates the current mania for generative AI (red 2022 line added by Ars).

Unfortunately for Bloomberg, that quote is followed almost immediately by a chart that heavily undercuts the AI alarmism. That chart shows worldwide data center energy usage growing at a remarkably steady pace from about 100 TWh in 2012 to around 350 TWh in 2024. The vast majority of that energy usage growth came before 2022, when the launch of tools like Dall-E and ChatGPT largely set off the industry’s current mania for generative AI. If you squint at Bloomberg’s graph, you can almost see the growth in energy usage slowing down a bit since that momentous year for generative AI.

Determining precisely how much of that data center energy use is taken up specifically by generative AI is a difficult task, but Dutch researcher Alex de Vries found a clever way to get an estimate. In his study “The growing energy footprint of artificial intelligence,” de Vries starts with estimates that Nvidia’s specialized chips are responsible for about 95 percent of the market for generative AI calculations. He then uses Nvidia’s projected production of 1.5 million AI servers in 2027—and the projected power usage for those servers—to estimate that the AI sector as a whole could use anywhere from 85 to 134 TWh of electricity annually within just a few years.

To be sure, that is an immense amount of power, representing about 0.5 percent of projected electricity demand for the entire world (and an even greater ratio in the local energy mix for some common data center locations). But measured against other common worldwide uses of electricity, it’s not representative of a mind-boggling energy hog. A 2018 study estimated that PC gaming as a whole accounted for 75 TWh of electricity use per year, to pick just one common human activity that’s on the same general energy scale (and that’s without console or mobile gamers included).
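
As a quick sanity check on how those figures fit together, here is a back-of-envelope calculation in Python. The 85–134 TWh range, the 1.5 million servers, and the 75 TWh PC-gaming estimate come from the text above; the roughly 27,000 TWh figure for worldwide electricity demand is an assumed round number, chosen only because it is consistent with the article's "about 0.5 percent" framing rather than something stated in the piece.

```python
# Back-of-envelope check on the AI electricity projections discussed above.
# Assumption (not from the article): world electricity demand ~27,000 TWh/year.

HOURS_PER_YEAR = 24 * 365  # 8,760

ai_energy_range_twh = (85, 134)  # de Vries' projected annual AI usage by ~2027
nvidia_servers = 1.5e6           # projected AI server production in 2027
world_demand_twh = 27_000        # assumed round figure for global demand
pc_gaming_twh = 75               # 2018 estimate for PC gaming cited above

for twh in ai_energy_range_twh:
    # Implied continuous draw per server (1 TWh = 1e9 kWh)
    per_server_kw = twh * 1e9 / (nvidia_servers * HOURS_PER_YEAR)
    share_of_world = twh / world_demand_twh
    vs_gaming = twh / pc_gaming_twh
    print(
        f"{twh} TWh/yr -> ~{per_server_kw:.1f} kW per server running 24/7, "
        f"{share_of_world:.1%} of assumed world demand, "
        f"{vs_gaming:.1f}x the PC-gaming estimate"
    )
```

The implied 6.5 to 10 kW of continuous draw per server is plausible for a densely packed multi-GPU machine, which suggests the projection at least hangs together arithmetically.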

Worldwide projections for AI energy use in 2027 are on the same scale as the energy used by PC gamers.

More to the point, de Vries’ AI energy estimates are only a small fraction of the 620 to 1,050 TWh that data centers as a whole are projected to use by 2026, according to the IEA’s recent report. The vast majority of all that data center power will still be going to more mundane Internet infrastructure that we all take for granted (and which is not nearly as sexy of a headline bogeyman as “AI”).

Decades later, John Romero looks back at the birth of the first-person shooter

Daikatana didn’t come up —

Id Software co-founder talks to Ars about everything from Catacomb 3-D to “boomer shooters.”

John Romero remembers the moment he realized what the future of gaming would look like.

In late 1991, Romero and his colleagues at id Software had just released Catacomb 3-D, a crude-looking, EGA-colored first-person shooter that was nonetheless revolutionary compared to other first-person games of the time. “When we started making our 3D games, the only 3D games out there were nothing like ours,” Romero told Ars in a recent interview. “They were lockstep, going through a maze, do a 90-degree turn, that kind of thing.”

Despite Catacomb 3-D‘s technological advances in first-person perspective, though, Romero remembers the team at id followed its release by going to work on the next entry in the long-running Commander Keen series of 2D platform games. But as that process moved forward, Romero told Ars that something didn’t feel right.

Catacomb 3-D is less widely remembered than its successor, Wolfenstein 3D.

“Within two weeks, [I was up] at one in the morning and I’m just like, ‘Guys we need to not make this game [Keen],'” he said. “‘This is not the future. The future is getting better at what we just did with Catacomb.’ … And everyone immediately was like, ‘Yeah, you know, you’re right. That is the new thing, and we haven’t seen it, and we can do it, so why aren’t we doing it?'”

The team started working on Wolfenstein 3D that very night, Romero said. And the rest is history.

Going for speed

What set Catacomb 3-D and its successors apart from other first-person gaming experiments of the time, Romero said, “was our speed—the speed of the game was critical to us having that massive differentiation. Everyone else was trying to do a world that was proper 3D—six degrees of freedom or representation that was really detailed. And for us, the way that we were going to go was a simple rendering at a high speed with good gameplay. Those were our pillars, and we stuck with them, and that’s what really differentiated them from everyone else.”

That focus on speed extended to id’s development process, which Romero said was unrecognizable compared to even low-budget indie games of today. The team didn’t bother writing out design documents laying out crucial ideas beforehand, for instance, because Romero said “the design doc was next to us; it was the creative director… The games weren’t that big back then, so it was easy for us to say, ‘this is what we’re making’ and ‘things are going to be like this.’ And then we all just work on our own thing.”

John Carmack (left) and John Romero (second from right) pose with their id Software colleagues in the early '90s.

The early id designers didn’t even use basic development tools like version control systems, Romero said. Instead, development was highly compartmentalized between different developers; “the files that I’m going to work on, he doesn’t touch, and I don’t touch his files,” Romero remembered of programming games alongside John Carmack. “I only put the files on my transfer floppy disk that he needs, and it’s OK for him to copy everything off of there and overwrite what he has because it’s only my files, and vice versa. If for some reason the hard drive crashed, we could rebuild the source from anyone’s copies of what they’ve got.”
