Features


The world’s toughest race starts Saturday, and it’s delightfully hard to call this year

Is it Saturday yet? —

Setting the stage for what could be a wild ride across France.

The peloton passing through a field of sunflowers during stage eight of the 110th Tour de France in 2023.


David Ramos/Getty Images

Most readers probably did not anticipate seeing a Tour de France preview on Ars Technica, but here we are. Cycling is a huge passion of mine and several other staffers, and this year, a ton of intrigue surrounds the race, which has a fantastic route. So we’re here to spread Tour fever.

The three-week race starts Saturday, paradoxically in the Italian region of Tuscany. Usually, there is a dominant rider, or at most two, and a clear sense of who is likely to win the demanding race. But this year, due to rider schedules, a terrible crash in early April, and new contenders, there is more uncertainty than usual. A solid case could be made for at least four riders to win this year’s Tour de France.

For people who aren’t fans of pro road cycling—which has to be at least 99 percent of the United States—there’s a great series on Netflix called Unchained to help get you up to speed. The second season, just released, covers last year’s Tour de France and introduces you to most of the protagonists in the forthcoming edition. If this article sparks your interest, I recommend checking it out.

Anyway, for those who are cycling curious, I want to set the stage for this year’s race by saying a little bit about the four main contenders, from most likely to least likely to win, and provide some of the backstory to what could very well be a dramatic race this year.

Tadej Pogačar

Tadej Pogačar of Slovenia and UAE Team Emirates won the Giro d'Italia in May.


Tim de Waele/Getty Images

  • Slovenia
  • 25 years old
  • UAE Team Emirates
  • Odds: -190

Pogačar burst onto the scene in 2019 at the very young age of 20 by finishing third in the Vuelta a España, one of the three grand tours of cycling. He then went on to win the 2020 and 2021 Tours de France, first by surprising fellow countryman Primož Roglič (more on him below) in 2020 and then utterly dominating in 2021. Given his youth, it seemed he would be the premier grand tour competitor for the next decade.

But then another slightly older rider, a teammate of Roglič’s named Jonas Vingegaard, emerged in 2022 and won the next two races. Last year, in fact, Vingegaard cracked Pogačar by 7 minutes and 29 seconds in the Tour, a huge winning margin, especially for two riders of relatively close talent. This established Vingegaard as the alpha male of grand tour cyclists, having proven himself a better climber and time trialist than Pogačar, especially in the highest and hardest stages.

So this year, Pogačar decided to change up his strategy. Instead of focusing on the Tour de France, Pogačar participated in the first grand tour of the season, the Giro d’Italia, which occurred in May. He likely did so for a couple of reasons. First of all, he almost certainly received a generous appearance fee from the Italian organizers. And secondly, riding the Giro would give him a ready excuse for not beating Vingegaard in France.

Why is this? Because there are just five weeks between the end of the Giro and the start of the Tour. So if a rider peaks for the Giro and exerts himself in winning the race, it is generally thought that he can’t arrive at the Tour in winning form. He will be a few percent off, not having ideal preparation.

Predictably, Pogačar smashed the lesser competition at the Giro and won the race by 9 minutes and 56 seconds. Because he was so far ahead, he was able to take the final week of the race a bit easier. The general thinking in the cycling community is that Pogačar is arriving at the Tour in excellent but not peak form. But given everything else that has happened so far this season, the bettors believe that will be enough for him to win. Maybe.



T-Mobile users enraged as “Un-carrier” breaks promise to never raise prices

Illustration of T-Mobile customers protesting price hikes

Aurich Lawson

In 2017, Kathleen Odean thought she had found the last cell phone plan she would ever need. T-Mobile was offering a mobile service for people age 55 and over, with an “Un-contract” guarantee that it would never raise prices.

“I thought, wow, I can live out my days with this fixed plan,” Odean, a Rhode Island resident who is now 70 years old, told Ars last week. Odean and her husband switched from Verizon to get the T-Mobile deal, which cost $60 a month for two lines.

Despite its Un-contract promise, T-Mobile in May 2024 announced a price hike for customers like Odean who thought they had a lifetime price guarantee on plans such as T-Mobile One, Magenta, and Simple Choice. The $5-per-line price hike will raise her and her husband’s monthly bill from $60 to $70, Odean said.

As we’ve reported, T-Mobile’s January 2017 announcement of its “Un-contract” for T-Mobile One plans said that “T-Mobile One customers keep their price until THEY decide to change it. T-Mobile will never change the price you pay for your T-Mobile One plan. When you sign up for T-Mobile One, only YOU have the power to change the price you pay.”

T-Mobile contradicted that clear promise on a separate FAQ page, which said the only real guarantee was that T-Mobile would pay your final month’s bill if the company raised the price and you decided to cancel. Customers like Odean bitterly point to the press release that made the price guarantee without including the major caveat that essentially nullifies the promise.

“I gotta tell you, it really annoys me”

T-Mobile’s 2017 press release even blasted other carriers for allegedly being dishonest, saying that “customers are subjected to a steady barrage of ads for wireless deals—only to face bill shock and wonder what the hell happened when their Verizon or AT&T bill arrives.”

T-Mobile made the promise under the brash leadership of CEO John Legere, who called the company the “Un-carrier” and frequently insulted its larger rivals while pledging that T-Mobile would treat customers more fairly. Legere left T-Mobile in 2020 after the company completed a merger with Sprint in a deal that made T-Mobile one of three major nationwide carriers alongside AT&T and Verizon.

Then-CEO of T-Mobile John Legere at the company's Un-Carrier X event in Los Angeles on Tuesday, Nov. 10, 2015.


Getty Images | Bloomberg

After being notified of the price hike, Odean filed complaints with the Federal Communications Commission and the Rhode Island attorney general’s office. “I can afford it, but I gotta tell you, it really annoys me because the promise was so absolutely clear… It’s right there in writing: ‘T-Mobile will never change the price you pay for your T-Mobile One plan.’ It couldn’t be more clear,” she said.

Now, T-Mobile is “acting like, oh, well, we gave ourselves a way out,” Odean said. But the caveat that lets T-Mobile raise prices whenever it wants, “as far as I can tell, was never mentioned to the customers… I don’t care what they say in the FAQ,” she said.



Taking a closer look at AI’s supposed energy apocalypse

Someone just asked what it would look like if their girlfriend was a Smurf. Better add another rack of servers!


Getty Images

Late last week, both Bloomberg and The Washington Post published stories focused on the ostensibly disastrous impact artificial intelligence is having on the power grid and on efforts to collectively reduce our use of fossil fuels. The high-profile pieces lean heavily on recent projections from Goldman Sachs and the International Energy Agency (IEA) to cast AI’s “insatiable” demand for energy as an almost apocalyptic threat to our power infrastructure. The Post piece even cites anonymous “some [people]” in reporting that “some worry whether there will be enough electricity to meet [the power demands] from any source.”

Digging into the best available numbers and projections, though, it’s hard to see AI’s current and near-future environmental impact in such a dire light. While generative AI models and tools can and will use a significant amount of energy, we shouldn’t conflate AI energy usage with the larger and largely pre-existing energy usage of “data centers” as a whole. And just like any technology, whether that AI energy use is worthwhile depends largely on your wider opinion of the value of generative AI in the first place.

Not all data centers

While the headline focus of both Bloomberg and The Washington Post’s recent pieces is on artificial intelligence, the actual numbers and projections cited in both pieces overwhelmingly focus on the energy used by Internet “data centers” as a whole. Long before generative AI became the current Silicon Valley buzzword, those data centers were already growing immensely in size and energy usage, powering everything from Amazon Web Services servers to online gaming services, Zoom video calls, and cloud storage and retrieval for billions of documents and photos, to name just a few of the more common uses.

The Post story acknowledges that these “nondescript warehouses packed with racks of servers that power the modern Internet have been around for decades.” But in the very next sentence, the Post asserts that, today, data center energy use “is soaring because of AI.” Bloomberg asks one source directly “why data centers were suddenly sucking up so much power” and gets back a blunt answer: “It’s AI… It’s 10 to 15 times the amount of electricity.”

The massive growth in data center power usage mostly predates the current mania for generative AI (red 2022 line added by Ars).


Unfortunately for Bloomberg, that quote is followed almost immediately by a chart that heavily undercuts the AI alarmism. That chart shows worldwide data center energy usage growing at a remarkably steady pace from about 100 TWh in 2012 to around 350 TWh in 2024. The vast majority of that energy usage growth came before 2022, when the launch of tools like Dall-E and ChatGPT largely set off the industry’s current mania for generative AI. If you squint at Bloomberg’s graph, you can almost see the growth in energy usage slowing down a bit since that momentous year for generative AI.

Determining precisely how much of that data center energy use is taken up specifically by generative AI is a difficult task, but Dutch researcher Alex de Vries found a clever way to get an estimate. In his study “The growing energy footprint of artificial intelligence,” de Vries starts with estimates that Nvidia’s specialized chips are responsible for about 95 percent of the market for generative AI calculations. He then uses Nvidia’s projected production of 1.5 million AI servers in 2027—and the projected power usage for those servers—to estimate that the AI sector as a whole could consume anywhere from 85 to 134 TWh of electricity per year within a few years.
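As a rough cross-check, the shape of that estimate is easy to reproduce with back-of-envelope arithmetic. The per-server power draws below are illustrative assumptions chosen to bracket the 85 to 134 TWh range cited above; they are not figures taken from de Vries’ study:

```python
# Back-of-envelope version of the server-fleet energy estimate described above.
# The per-server power draws are illustrative assumptions, not study figures.

HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_twh(servers: int, avg_kw_per_server: float) -> float:
    """Annual energy use in terawatt-hours for a fleet of always-on servers."""
    kwh = servers * avg_kw_per_server * HOURS_PER_YEAR
    return kwh / 1e9  # 1 TWh = 1 billion kWh

servers_2027 = 1_500_000  # Nvidia's projected AI-server production, per the article

low = annual_twh(servers_2027, 6.5)    # ~85 TWh/year
high = annual_twh(servers_2027, 10.2)  # ~134 TWh/year
print(f"{low:.0f}-{high:.0f} TWh/year")
```

At a continuous draw somewhere between roughly 6.5 and 10 kW per server, 1.5 million servers land in the same ballpark the study reports.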

To be sure, that is an immense amount of power, representing about 0.5 percent of projected electricity demand for the entire world (and an even greater share of the local energy mix in some common data center locations). But measured against other common worldwide uses of electricity, it’s not a mind-boggling energy hog. A 2018 study estimated that PC gaming as a whole accounted for 75 TWh of electricity use per year, to pick just one common human activity that’s on the same general energy scale (and that’s without console or mobile gamers included).

Worldwide projections for AI energy use in 2027 are on the same scale as the energy used by PC gamers.


More to the point, de Vries’ AI energy estimates are only a small fraction of the 620 to 1,050 TWh that data centers as a whole are projected to use by 2026, according to the IEA’s recent report. The vast majority of all that data center power will still be going to more mundane Internet infrastructure that we all take for granted (and which is not nearly as sexy of a headline bogeyman as “AI”).



Decades later, John Romero looks back at the birth of the first-person shooter

Daikatana didn’t come up —

Id Software co-founder talks to Ars about everything from Catacomb 3-D to “boomer shooters.”


John Romero remembers the moment he realized what the future of gaming would look like.

In late 1991, Romero and his colleagues at id Software had just released Catacomb 3-D, a crude-looking, EGA-colored first-person shooter that was nonetheless revolutionary compared to other first-person games of the time. “When we started making our 3D games, the only 3D games out there were nothing like ours,” Romero told Ars in a recent interview. “They were lockstep, going through a maze, do a 90-degree turn, that kind of thing.”

Despite Catacomb 3-D‘s technological advances in first-person perspective, though, Romero remembers the team at id followed its release by going to work on the next entry in the long-running Commander Keen series of 2D platform games. But as that process moved forward, Romero told Ars that something didn’t feel right.

Catacomb 3-D is less widely remembered than its successor, Wolfenstein 3D.

“Within two weeks, [I was up] at one in the morning and I’m just like, ‘Guys, we need to not make this game [Keen],'” he said. “‘This is not the future. The future is getting better at what we just did with Catacomb.’ … And everyone was immediately like, ‘Yeah, you know, you’re right. That is the new thing, and we haven’t seen it, and we can do it, so why aren’t we doing it?'”

The team started working on Wolfenstein 3D that very night, Romero said. And the rest is history.

Going for speed

What set Catacomb 3-D and its successors apart from other first-person gaming experiments of the time, Romero said, “was our speed—the speed of the game was critical to us having that massive differentiation. Everyone else was trying to do a world that was proper 3D—six degrees of freedom or representation that was really detailed. And for us, the way that we were going to go was a simple rendering at a high speed with good gameplay. Those were our pillars, and we stuck with them, and that’s what really differentiated them from everyone else.”

That focus on speed extended to id’s development process, which Romero said was unrecognizable compared to even low-budget indie games of today. The team didn’t bother writing out design documents laying out crucial ideas beforehand, for instance, because Romero said “the design doc was next to us; it was the creative director… The games weren’t that big back then, so it was easy for us to say, ‘this is what we’re making’ and ‘things are going to be like this.’ And then we all just work on our own thing.”

John Carmack (left) and John Romero (second from right) pose with their id Software colleagues in the early '90s.


The early id designers didn’t even use basic development tools like version control systems, Romero said. Instead, development was highly compartmentalized between different developers; “the files that I’m going to work on, he doesn’t touch, and I don’t touch his files,” Romero remembered of programming games alongside John Carmack. “I only put the files on my transfer floppy disk that he needs, and it’s OK for him to copy everything off of there and overwrite what he has because it’s only my files, and vice versa. If for some reason the hard drive crashed, we could rebuild the source from anyone’s copies of what they’ve got.”



Internet Archive forced to remove 500,000 books after publishers’ court win


As a result of book publishers successfully suing the Internet Archive (IA) last year, the free online library that strives to keep growing online access to books recently shrank by about 500,000 titles.

IA reported in a blog post this month that publishers abruptly forcing these takedowns triggered a “devastating loss” for readers who depend on IA to access books that are otherwise impossible or difficult to access.

To restore access, IA is now appealing, hoping to reverse the prior court’s decision by convincing the US Court of Appeals for the Second Circuit that IA’s controlled digital lending of its physical books should be considered fair use under copyright law. An April court filing shows that IA intends to argue that the publishers have no evidence that the e-book market has been harmed by the open library’s lending, and that copyright law is better served by allowing IA’s lending than by preventing it.

“We use industry-standard technology to prevent our books from being downloaded and redistributed—the same technology used by corporate publishers,” Chris Freeland, IA’s director of library services, wrote in the blog. “But the publishers suing our library say we shouldn’t be allowed to lend the books we own. They have forced us to remove more than half a million books from our library, and that’s why we are appealing.”

IA will have an opportunity to defend its practices when oral arguments start in its appeal on June 28.

“Our position is straightforward; we just want to let our library patrons borrow and read the books we own, like any other library,” Freeland wrote, while arguing that the “potential repercussions of this lawsuit extend far beyond the Internet Archive” and publishers should just “let readers read.”

“This is a fight for the preservation of all libraries and the fundamental right to access information, a cornerstone of any democratic society,” Freeland wrote. “We believe in the right of authors to benefit from their work; and we believe that libraries must be permitted to fulfill their mission of providing access to knowledge, regardless of whether it takes physical or digital form. Doing so upholds the principle that knowledge should be equally and equitably accessible to everyone, regardless of where they live or where they learn.”

Internet Archive fans beg publishers to end takedowns

After publishers won an injunction stopping IA’s digital lending, which “limits what we can do with our digitized books,” IA’s help page said, the open library started shrinking. While “removed books are still available to patrons with print disabilities,” everyone else has been cut off, causing many books in IA’s collection to show up as “Borrow Unavailable.”

Ever since, IA has been “inundated” with inquiries from readers all over the world searching for the removed books, Freeland said. And “we get tagged in social media every day where people are like, ‘why are there so many books gone from our library’?” Freeland told Ars.

In an open letter to publishers signed by nearly 19,000 supporters, IA fans begged publishers to reconsider forcing takedowns and quickly restore access to the lost books.

Among the “far-reaching implications” of the takedowns, IA fans counted the negative educational impact on academics, students, and educators—“particularly in underserved communities where access is limited”—who were suddenly cut off from “research materials and literature that support their learning and academic growth.”

They also argued that the takedowns dealt “a serious blow to lower-income families, people with disabilities, rural communities, and LGBTQ+ people, among many others,” who may not have access to a local library or feel “safe accessing the information they need in public.”

“Your removal of these books impedes academic progress and innovation, as well as imperiling the preservation of our cultural and historical knowledge,” the letter said.

“This isn’t happening in the abstract,” Freeland told Ars. “This is real. People no longer have access to a half a million books.”



From Infocom to 80 Days: An oral history of text games and interactive fiction

Zork running on an Amiga at the Computerspielemuseum in Berlin, Germany.


You are standing at the end of a road before a small brick building.

That simple sentence first appeared on a PDP-10 mainframe in the 1970s, and the words marked the beginning of what we now know as interactive fiction.

From the bare-bones text adventures of the 1980s to the heartfelt hypertext works of Twine creators, interactive fiction is an art form that continues to inspire a loyal audience. The community for interactive fiction, or IF, attracts readers and players alongside developers and creators. It champions an open source ethos and a punk-like individuality.

But whatever its production value or artistic merit, at heart, interactive fiction is simply words on a screen. In this time of AAA video games, prestige television, and contemporary novels and poetry, how does interactive fiction continue to endure?

To understand the history of IF, the best place to turn for insight is the authors themselves. Not just the authors of notable text games—although many of the people I interviewed for this article do have that claim to fame—but the authors of the communities and the tools that have kept the torch burning. Here’s what they had to say about IF and its legacy.

Examine roots: Adventure and Infocom

The interactive fiction story began in the 1970s. The first widely played game in the genre was Colossal Cave Adventure, also known simply as Adventure. The text game was made by Will Crowther in 1976, based on his experiences spelunking in Kentucky’s aptly named Mammoth Cave. Descriptions of the different spaces would appear on the terminal, then players would type in two-word commands—a verb followed by a noun—to solve puzzles and navigate the sprawling in-game caverns.
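The verb-noun command scheme described above is simple enough to sketch in a few lines of Python. The vocabulary here is a hypothetical toy, not anything from Crowther’s original Fortran:

```python
# A minimal sketch of two-word (verb + noun) command parsing in the style of
# Adventure-era text games. The vocabulary below is invented for illustration.

VERBS = {"go", "take", "drop", "open"}
NOUNS = {"north", "lamp", "keys", "door"}

def parse(command: str):
    """Return a (verb, noun) pair, or None if the command isn't understood."""
    words = command.lower().split()
    if len(words) != 2:
        return None
    verb, noun = words
    if verb in VERBS and noun in NOUNS:
        return (verb, noun)
    return None

print(parse("TAKE LAMP"))  # ('take', 'lamp')
print(parse("xyzzy"))      # None
```

A real game engine would then dispatch the recognized pair against the current room’s state, but the core loop really was this small: read a line, match two words, act.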

During the 1970s, getting the chance to interact with a computer was a rare and special thing for most people.

“My father’s office had an open house in about 1978,” IF author and tool creator Andrew Plotkin recalled. “We all went in and looked at the computers—computers were very exciting in 1978—and he fired up Adventure on one of the terminals. And I, being eight years old, realized this was the best thing in the universe and immediately wanted to do that forever.”

“It is hard to overstate how potent the effect of this game was,” said Graham Nelson, creator of the Inform language and author of the landmark IF Curses, of his introduction to the field. “Partly that was because the behemoth-like machine controlling the story was itself beyond ordinary human experience.”

Perhaps that extraordinary factor is what sparked the curiosity of people like Plotkin and Nelson to play Adventure and the other text games that followed. The roots of interactive fiction are entangled with the roots of the computing industry. “I think it’s always been a focus on the written word as an engine for what we consider a game,” said software developer and tech entrepreneur Liza Daly. “Originally, that was born out of necessity of primitive computers of the ’70s and ’80s, but people discovered that there was a lot to mine there.”

Home computers were just beginning to gain traction as Stanford University student Don Woods released his own version of Adventure in 1977, based on Crowther’s original Fortran work. Without wider access to comparatively pint-sized machines like the Apple II and the VIC-20, Scott Adams might not have found an audience for his own text adventure games, released under his company Adventure International, in another homage to Crowther. As computers spread to more people around the world, interactive fiction was able to reach more and more readers.



Hello sunshine: We test McLaren’s drop-top hybrid Artura Spider

orange express —

The addition of a retractable roof makes this Artura the one to pick.


The introduction of model year 2025 brings a retractable hard-top option for the McLaren Artura, plus a host of other upgrades.

McLaren

MONACO—The idea of an “entry-level” supercar might sound like a contradiction in terms, but every car company’s range has to start somewhere, and in McLaren’s case, that’s the Artura. When Ars first tested this mid-engined plug-in hybrid in 2022, it was only available as a coupe. But for those who prefer things al fresco, the British automaker has now given you that option with the addition of the Artura Spider.

The Artura represented a step forward for McLaren. There’s a brand-new carbon fiber chassis tub, an advanced electronic architecture (with a handful of domain controllers that replace the dozens of individual ECUs you might find in some of its other models), and a highly capable hybrid powertrain that combines a twin-turbo V6 gasoline engine with an axial flux electric motor.

More power, faster shifts

For model year 2025 and the launch of the $273,800 Spider version, McLaren’s engineering team has given the Artura a spruce-up, even though the car is only a couple of years old. Overall power output has increased by 19 hp (14 kW) thanks to new engine maps for the V6, which now has a bit more surge from 4,000 rpm all the way to the 8,500 rpm redline. Our test car was fitted with the new sports exhaust, which isn’t obnoxiously loud. It makes some interesting noises as you lift the throttle in the middle of the rev range, but like most turbo engines, it’s not particularly mellifluous.

  • The new engine map means the upper half of third gear will give you a real shove toward the horizon.

    McLaren

  • The Artura Spider’s buttresses are made from a lightweight and clear polymer, so they do their job aerodynamically without completely obscuring your view over your shoulder.

    McLaren

  • The Artura Spider is covered in vents and exhausts to channel air into and out of various parts of the car.

    McLaren

  • You could have your Artura Spider painted in a more somber color. But Orange with carbon fiber looks pretty great to me.

  • If you look closely, you can see the transmission hiding behind the diffuser.

    Jonathan Gitlin

Combined with the 94 hp (70 kW) electric motor, that gives the Artura Spider a healthy 680 hp (507 kW), which helps compensate for the added 134 lbs (62 kg) due to the car’s retractable hard top. There are stiffer engine mounts and new throttle maps, and the dual-clutch transmission shifts 25 percent faster than what we saw in the car that launched two years ago. (These upgrades are carried over to the Artura coupe as well, and the good news for existing owners is that the engine remapping can be applied to their cars, too, with a visit to a McLaren dealer.)

Despite the hybrid system—which uses a 7.4 kWh traction battery—and the roof mechanism, the Artura Spider remains a remarkably light car by 2024 standards, with a curb weight of 3,439 lbs (1,559 kg), which makes it lighter than any comparable car on the market.

In fact, picking a comparable car is a little tricky. Ferrari will sell you a convertible hybrid in the shape of the 296 GTS, but you’ll need another $100,000 or more to get behind the wheel of one of those, which in truth is more of a competitor for the (not-hybrid) 750S, McLaren’s middle model. Any other mid-engined drop-top will be propelled by dino juice alone.

What modes do you want today?

It's easy to drive around town and a lot of fun to drive on a twisty road.


McLaren

You can drive it using just the electric motor for up to 11 miles if you keep the powertrain in E-mode and start with a fully charged battery. In fact, when you start the car, it begins in this mode by default. Outside of E-mode, the Artura will use spare power from the engine to top up the battery as you drive, and it’s very easy to set a target state of charge if you want to save some battery power for later, for example. Plugged into a Level 2 charger, it should take about 2.5 hours to reach 80 percent.
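Taking the paragraph’s figures at face value, a quick back-of-envelope calculation shows the average charging rate they imply (real-world charging losses and taper are ignored here):

```python
# Average charge rate implied by the figures quoted above: a 7.4 kWh pack
# reaching an 80 percent state of charge in about 2.5 hours on Level 2.
# Losses and charge taper are ignored, so this is only an average.

battery_kwh = 7.4
target_soc = 0.80
charge_hours = 2.5

energy_delivered_kwh = battery_kwh * target_soc    # ~5.9 kWh
avg_rate_kw = energy_delivered_kwh / charge_hours  # ~2.4 kW
print(f"~{avg_rate_kw:.1f} kW average from the charger")
```

That roughly 2.4 kW average is well within what a typical Level 2 connection supplies, so the quoted 2.5-hour figure is plausible.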

The car is light enough that 94 hp is more than adequate for the 20 mph or 30 km/h zones you’re sure to encounter whether you’re driving this supercar through a rural village or past camera-wielding car-spotters in the city. Electric mode is persistent, too: the car won’t fire up the engine until you switch to Comfort (or Sport, or Track) with the control on the right side of the main instrument display.

On the left side is another control to switch the chassis settings between Comfort, Sport, and Track. For road driving, Comfort never felt wrong-footed, and I really would leave Track for the actual track. The same goes for the Track powertrain setting; for the open road, Sport is the best-sounding, and Comfort is well-judged for everyday use and will kill the V6 when it’s not needed. Sport and Track instead use the electric motor—mounted inside the case of the eight-speed transmission—to fill in torque where needed, similar to an F1 or LMDh race car.



Mod Easy: A retro e-bike with a sidecar perfect for Indiana Jones cosplay

Pure fun —

It’s not the most practical option for passengers, but my son had a blast.

The Mod Easy Sidecar


As some Ars readers may recall, I reviewed The Maven Cargo e-bike earlier this year as a complete newb to e-bikes. For my second foray into the world of e-bikes, I took an entirely different path.

The stylish Maven was designed with utility in mind—it’s safe, user-friendly, and practical for accomplishing all the daily transportation needs of a busy family. The second bike, the $4,299 Mod Easy Sidecar 3, is on the other end of the spectrum. Just a cursory glance makes it clear: This bike is built for pure, head-turning fun.

The Mod Easy 3 is a retro-style Class 2 bike—complete with a sidecar that looks like it’s straight out of Indiana Jones and the Last Crusade. Nailing this look wasn’t the initial goal of Mod Bike founder Dor Korngold. In an interview with Ars, Korngold said the Mod Easy was the first bike he designed for himself. “It started with me wanting to have this classic cruiser,” he said, but he didn’t have a sketch or final design in mind at the outset. Instead, the design was based on what parts he had in his garage.

The first step was adding a wooden battery compartment to an old Electra frame he had painted. The battery compartment “looked vintage from the beginning,” he said, but the final look came together gradually as he added the sidecar and some of the other motorcycle-style features. Today, the Mod Easy is a sleek bike reminiscent of World War II-era motorcycles and comes in a chic matte finish.

An early version of the Mod Easy bike.


Dor Korngold

When I showed my 5-year-old son a picture of the bike and sidecar, he was instantly enamored and insisted I review it. How could I refuse? He thoroughly enjoyed riding with me on the Maven, but riding in the sidecar turned out to be some next-level fun. He will readily tell you he gives it five out of five stars. But in case you want a more thorough review, my thoughts are below. I’ll start with some general impressions and then discuss specific features of the bike and experience.

General impressions

  • The Mod Easy Sidecar 3.

  • Just the bike, which is sold at $3,299

    Beth Mole


Again, this is a stylish, fun bike. The bike alone is an effortless and smooth ride. Although it has the heft of an e-bike at 77 pounds (without the sidecar), it never felt unwieldy to me as a 5-foot-4-inch rider. The torque sensors are beautifully integrated into the riding experience, allowing the motor to feel like a gentle, natural assist to pedaling rather than an on-off boost. Of course, with my limited experience, I can’t comment on how these sensors compare to those on other bikes, but I have no complaints, and they’re a clear improvement over my experience with cadence sensors.

You may remember from my review of the Maven that the entrance to a bike path in my area has a switchback path with three tight turns on a hill. With the Maven’s cadence sensors, I struggled to go through the U-turns smoothly, especially going uphill, even after weeks of practice. With the Mod Easy’s torque sensors (and non-cargo length), I glided through them perfectly on the first try. Overall, the bike handles and corners nicely. The wide-set handlebars give the driving experience a relaxed, cruising feel, while the cushy saddle invites you to sink in and stay awhile. The sidecar, meanwhile, was a fun, head-turning feature, but it comes with some practical trade-offs to consider.

Below, I’ll go through key features, starting with the headlining one: the sidecar.



May contain nuts: Precautionary allergen labels lead to consumer confusion

can i eat this or not? —

Some labels suggest allergen cross-contamination that might not exist.


TopMicrobialStock, Getty Images

When Ina Chung, a Colorado mother, first fed packaged foods to her infant, she was careful to read the labels. Her daughter was allergic to peanuts, dairy, and eggs, so products containing those ingredients were out. So were foods with labels that said they may contain the allergens.

Chung felt like this last category suggested a clear risk that wasn’t worth taking. “I had heard that the ingredient labels were regulated. And so I thought that that included those statements,” said Chung. “Which was not true.”

Precautionary allergen labels like those that say “processed in a facility that uses milk” or “may contain fish” are meant to address the potential for cross-contact. For instance, a granola bar that doesn’t list peanuts as an ingredient could still say they may be included. And in the United States, these warnings are not regulated; companies can use whatever precautionary phrasing they choose on any product. Some don’t bother with any labels, even in facilities where unintended allergens slip in; others list allergens that may pose little risk. Robert Earl, vice president of regulatory affairs at Food Allergy Research & Education, or FARE, a nonprofit advocacy, research, and education group, has even seen such labels that include all nine common food allergens. “I would bet my bottom dollar not all of those allergens are even in the facility,” he said.

So what are the roughly 20 million people with food allergies in the US supposed to do with these warnings? Should they eat the granola bar or not?

Recognizing this uncertainty, food safety experts, allergy advocates, policymakers, and food producers are discussing how to demystify precautionary allergen labels. One widely considered solution is to restrict warnings to cases where visual or analytical tests demonstrate that there is enough allergen to actually trigger a reaction. Experts say the costs to the food industry are minimal, and some food producers across the globe, including in Canada, Australia, Thailand, and the United States, already voluntarily take this approach. But in the US, where there are no clear guidelines to follow, consumers are still left wondering what each individual precautionary allergen label even means.

Pull a packaged food off an American store shelf and the ingredients label should say if the product intentionally contains one of nine recognized allergens. That’s because in 2004, Congress granted the Food and Drug Administration the power to regulate labeling of eight major food allergens—eggs, fish, milk, crustaceans, peanuts, tree nuts, soybeans, and wheat. In 2021, sesame was added to the list.

But the language often gets murkier further down the label, where companies may include precautionary allergen labels, also called advisory statements, to address the fact that allergens can unintentionally wind up in foods at many stages of production. Perhaps wheat grows near a field of rye destined for bread, for instance, or peanuts get lodged in processing equipment that later pumps out chocolate chip cookies. Candy manufacturers, in particular, struggle to keep milk out of dark chocolate.

The FDA offers no labeling guidance beyond declaring that “advisory statements should not be used as a substitute for adhering to current good manufacturing practices and must be truthful and not misleading.”

Companies can choose when to use these warnings, which vary widely. For example, a 2017 survey of 78 dark chocolate products, conducted by the FDA and the Illinois Institute of Technology, found that almost two-thirds carried an advisory statement for peanuts; of those, only about four actually contained the allergen. Meanwhile, of 18 bars that carried no advisory statement for peanuts specifically, three contained the allergen. (One product that was positive for peanuts did warn more generally of nuts, but the researchers noted that this term is ambiguous.) Another product that tested positive included a nut warning on one lot but not on another. Individual companies also select their own precautionary label phrasing.

For consumers, the inconsistency can be confusing, said Ruchi Gupta, a pediatrician and director of the Center for Food Allergy & Asthma Research at Northwestern University’s Feinberg School of Medicine in Chicago. In 2019, Gupta and colleagues asked around 3,000 US adults who have allergies or care for someone who does how different precautionary allergen label phrases affect their decisions to buy a particular food. About 80 percent never purchase products with a “may contain” warning. Less than half avoid products with labels suggesting that the food was manufactured in a facility that also processes an allergen, even though numerous studies show that the wording of a precautionary allergen label has no bearing on risk level. “People are making their own decisions on what sounds safe,” said Gupta.

When Chung learned that advisory labels were unregulated, she experimented with ignoring them when her then-toddler really wanted a particular food. When her daughter developed a couple of hives after eating a cereal labeled may contain peanuts, Chung went back to heeding warnings of peanut cross-contact but continued ignoring the rest.

“A lot of families just make up their own rules,” she said. “There’s no way to really know exactly what you’re getting.”



Neutrinos: The inscrutable “ghost particles” driving scientists crazy

ghostly experiments —

They hold the keys to new physics. If only we could understand them.

The Super-Kamiokande neutrino detector at the Kamioka Observatory in Japan.


Kamioka Observatory, ICRR (Institute for Cosmic Ray Research), the University of Tokyo

Somehow, neutrinos went from just another random particle to becoming tiny monsters that require multi-billion-dollar facilities to understand. And there’s just enough mystery surrounding them that we feel compelled to build those facilities since neutrinos might just tear apart the entire particle physics community at the seams.

It started out innocently enough. Nobody asked for or predicted the existence of neutrinos, but there they were in our early particle experiments. Occasionally, heavy atomic nuclei spontaneously—and for no good reason—transform themselves, with either a neutron converting into a proton or vice-versa. As a result of this process, known as beta decay, the nucleus also emits an electron or its antimatter partner, the positron.

There was just one small problem: Nothing added up. The electrons never came out of the nucleus with the same energy; it was a little different every time. Some physicists argued that our conceptions of the conservation of energy only held on average, but that didn’t feel so good to say out loud, so others argued that perhaps there was another, hidden particle participating in the transformations. Something, they argued, had to sap energy away from the electron in a random way to explain this.
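In modern notation, the bookkeeping works out once that hidden particle (for neutron decay, an electron antineutrino) is included:

```latex
n \;\to\; p + e^{-} + \bar{\nu}_{e}
```

Because the decay energy is now shared three ways rather than two, the electron can emerge with any kinetic energy between zero and the total available energy, which is exactly the continuous spectrum that puzzled early experimenters.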

Eventually, that little particle got a name, the neutrino, an Italian-ish word meaning “little neutral one.” Whatever the neutrino was, it didn’t carry any electric charge and only participated in the weak nuclear force, so we only saw neutrinos at work in radioactive decay processes. But even with the multitude of decays with energies great and small happening all across the Universe every single second, the elusive nature of neutrinos meant we could only occasionally, rarely, weakly see them.

But see them we did (although it took 25 years), and for a while, we could just pretend that nothing was wrong. The neutrino was just another particle the Universe didn’t strictly need to give us but somehow stubbornly insisted on giving us anyway.

And then we discovered there wasn’t just one neutrino but three of them. For reasons the cosmos has yet to divulge to us, it likes to organize its particles into groups of three, known as generations. Take a nice, stable, regular fundamental particle, like an electron or an up or down quark—those particles represent the first generation. The other two generations share the same properties (like spin and electric charge) but have a heavier mass.

For the electron, we have its generational sibling, the muon, which is just like the electron but 200 times heavier, and the tau, which is also just like the electron but 3,500 times heavier (that’s heavier than a proton). For the down quark, we have its siblings, the “strange” and “bottom” quarks. And we call the heavier versions of the up quark the “charm” and “top” quarks. Why does the Universe do this? Why three generations with these masses? As I said, the cosmos has chosen not to reveal that to us (yet).

So there are three generations of neutrinos, named for the kinds of interactions they participate in. Some nuclear reactions involve only the first generation of particles (which are the most common by far), the up and down quarks, and the electrons. Here, electron-neutrinos are involved. When muons play around, muon-neutrinos come out, too. And no points will be awarded for guessing the name of the neutrinos associated with tau particle interactions.

All this is… fine. Aside from the burning mystery of the existence of particle generations in the first place, it would be a bit greedy for one neutrino to participate in all possible reactions. So it has to share the job with two other generations. It seemed odd, but it all worked.

And then we discovered that neutrinos had mass, and the whole thing blew up.



Brompton C Line Electric review: Fun and foldable, fits better than you’d think

Brompton C Line Electric Review —

A motor evens out its natural disadvantages, but there’s still a learning curve.

What can I say? It was tough putting the Brompton C Line Electric through its paces. Finding just the right context for it. Grueling work.


Kevin Purdy

There’s never been a better time to ride a weird bike.

That’s especially true if you live in a city where you can regularly see kids being dropped off at schools from cargo bikes with buckets, child seats, and full rain covers. Further out from the urban core, fat-tire e-bikes share space on trails with three-wheelers, retro-style cruisers, and slick roadies. And folding bikes, once an obscurity, are showing up in more places, especially as they’ve gone electric.

So when I got to try out the Brompton Electric C Line (in a six-speed model), I felt far less intimidated riding, folding, and stashing the little guy wherever I went than I might have been a few years back. A few folks recognized the distinctively small and British bike and offered a thumbs-up or light curiosity. If anyone was concerned about the oddity of this quirky ride, it was me, mostly because I obsessed over whether I could and should lock it up outside.

But for the most part, the Brompton fits in, and it works as a bike. It sat next to me at bars and coffee shops and outdoor eateries, it rode the DC Metro, it went on a memorial group ride, and it went to the grocery store. I repeatedly hauled it to a third-floor walkup apartment and brought it on a week’s vacation, fitting it on the floor behind the car driver’s seat. And with an electric battery pack, it was even easier to forget that it was any different from a stereotypical bike—so long as you didn’t look down.

Still, should you pay a good deal more than $3,000 (and probably more like $4,000 after accessories) for a bike with 16-inch tires—especially one you might never want to leave locked up outside?

Let’s get into that.

  • The Brompton C Line, pre-fold (mid-beer).

    Kevin Purdy

  • Step 1: Release a clasp and pull the bike frame up, allowing the rear wheel to swing forward underneath.

    Kevin Purdy

  • Step 2: Loosen the clamp and fold the front half back to align with the rear wheel, lining up a little hook on the wheel with the frame.

    Kevin Purdy

  • Step 3: Remove the battery (technically unnecessary, but wise), loosen a clamp holding up the handlebar, then fold it down onto the frame, letting a nub tuck into a locking notch.

    Kevin Purdy

  • Step 4: Drop down the seat (which also locks the frame into position), rotate one pedal onto the tire, and flip the other pedal up.

    Kevin Purdy

Learning The Fold

Whether you buy it at a store or have it shipped to you, a Brompton C Line is possibly the easiest e-bike to unpack, set up, and get rolling. You take out the folded-up bike, screw in the crucial hinge clamps that hold it together, put on the saddle, and learn how to unfold it for the first time. Throw some air in the tires, and you could be on your way about 20 minutes after getting the bike.

But you shouldn’t head out without getting some reps in on The Fold. The Fold is the reason the Brompton exists. It hasn’t actually changed that much since Andrew Ritchie designed it in 1975. Release a rear frame clip and yank the frame up, and the rear wheel and its frame triangle roll underneath the top tube. Unscrew a hinged clamp, then “stir” the front wheel backward, allowing a subtle hook to catch on the rear frame. Drop the seat and you’ll feel something lock inside the frame. You can then unhinge and fold the handlebar down, or you can keep it up to push the bike around on its tiny frame wheels in “shopping cart mode.”

If you forget the sequence of the fold, there are little reminders in a few spots on the bike.


Kevin Purdy

After maybe five attempts, I began to get The Fold done in less than a minute. After around a dozen tries, I started to appreciate its design and motions. The way a Brompton folds up is great for certain applications, like fitting into a car instead of using a rack, bringing on public transit or train rides, tucking underneath a counter or table, or fitting into the corner of the most space-challenged home. It can also be handy if you’re heading somewhere you’re wary of locking it up outside (more on that in a moment).



Can a technology called RAG keep AI models from making stuff up?


Aurich Lawson | Getty Images

We’ve been living through the generative AI boom for nearly a year and a half now, following the late 2022 release of OpenAI’s ChatGPT. But despite transformative effects on companies’ share prices, generative AI tools powered by large language models (LLMs) still have major drawbacks that have kept them from being as useful as many would like them to be. Retrieval augmented generation, or RAG, aims to fix some of those drawbacks.

Perhaps the most prominent drawback of LLMs is their tendency toward confabulation (also called “hallucination”), which is a statistical gap-filling phenomenon AI language models produce when they are tasked with reproducing knowledge that wasn’t present in the training data. They generate plausible-sounding text that can veer toward accuracy when the training data is solid but otherwise may just be completely made up.

Relying on confabulating AI models gets people and companies in trouble, as we’ve covered in the past. In 2023, we saw two instances of lawyers citing legal cases, confabulated by AI, that didn’t exist. We’ve covered claims against OpenAI in which ChatGPT confabulated and accused innocent people of doing terrible things. In February, we wrote about Air Canada’s customer service chatbot inventing a refund policy, and in March, a New York City chatbot was caught confabulating city regulations.

So if generative AI aims to be the technology that propels humanity into the future, someone needs to iron out the confabulation kinks along the way. That’s where RAG comes in. Its proponents hope the technique will help turn generative AI technology into reliable assistants that can supercharge productivity without requiring a human to double-check or second-guess the answers.

“RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process” to help LLMs stick to the facts, according to Noah Giansiracusa, associate professor of mathematics at Bentley University.

Let’s take a closer look at how it works and what its limitations are.
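Giansiracusa’s description can be made concrete with a toy sketch. Everything here is hypothetical—the document list and the word-count “embeddings” stand in for a real vector database and embedding model—but it shows the basic retrieve-then-prompt shape of RAG:

```python
# Toy RAG pipeline: retrieve the most relevant document, then paste it into
# the prompt so the model answers from provided text instead of from memory.
# The word-count "embeddings" below are a stand-in for a real embedding model.
import math
from collections import Counter

DOCUMENTS = [
    "Air Canada's chatbot invented a refund policy that did not exist.",
    "RAG was coined in a 2020 paper by researchers at Facebook AI Research.",
    "LLMs produce the most statistically likely response from training data.",
]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over simple word counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values()))
    norm *= math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Document look-up step: pick the stored passage closest to the query."""
    return max(DOCUMENTS, key=lambda d: similarity(query, d))

def build_prompt(query: str) -> str:
    """Augmentation step: ground the model in the retrieved passage."""
    return (
        "Answer using only this context:\n"
        f"{retrieve(query)}\n\n"
        f"Question: {query}"
    )

prompt = build_prompt("When was the term RAG coined?")
```

In a production system, the retrieved passages would come from a search index or vector database, but the final step is the same: the model generates its answer from text that was retrieved at query time rather than memorized during training.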

A framework for enhancing AI accuracy

Although RAG is now seen as a technique to help fix issues with generative AI, it actually predates ChatGPT. The term was coined in a 2020 academic paper by researchers at Facebook AI Research (FAIR, now Meta AI Research), University College London, and New York University.

As we’ve mentioned, LLMs struggle with facts. Google’s entry into the generative AI race, Bard, made an embarrassing error on its first public demonstration back in February 2023 about the James Webb Space Telescope. The error wiped around $100 billion off the value of parent company Alphabet. LLMs produce the most statistically likely response based on their training data and don’t understand anything they output, meaning they can present false information that seems accurate if you don’t have expert knowledge on a subject.

LLMs also lack up-to-date knowledge and the ability to identify gaps in their knowledge. “When a human tries to answer a question, they can rely on their memory and come up with a response on the fly, or they could do something like Google it or peruse Wikipedia and then try to piece an answer together from what they find there—still filtering that info through their internal knowledge of the matter,” said Giansiracusa.

But LLMs aren’t humans, of course. Their training data can age quickly, particularly in more time-sensitive queries. In addition, the LLM often can’t distinguish specific sources of its knowledge, as all its training data is blended together into a kind of soup.

In theory, RAG should make keeping AI models up to date far cheaper and easier. “The beauty of RAG is that when new information becomes available, rather than having to retrain the model, all that’s needed is to augment the model’s external knowledge base with the updated information,” said Peterson. “This reduces LLM development time and cost while enhancing the model’s scalability.”
