

Breaking down why Apple TVs are privacy advocates’ go-to streaming device


Using the Apple TV app or an Apple account means giving Apple more data, though.

Credit: Aurich Lawson | Getty Images

Every time I write an article about the escalating advertising and tracking on today’s TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven.

“Just disconnect your TV from the Internet and use an Apple TV box.”

That’s the common guidance you’ll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we’ve consulted, that advice is pretty solid: Apple TV boxes offer significantly more privacy than other streaming hardware.

But how private are Apple TV boxes, really? Apple TVs don’t use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple?

In this article, we’ll delve into what makes the Apple TV’s privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever.

Apple TV boxes limit tracking out of the box

One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple’s data and privacy policies. Also off by default is the boxes’ ability to send voice input data to Apple.

Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be.

Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users.

“If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier (IDFA), which is often used to track,” Apple says. “The app is also not permitted to track your activity using other information that identifies you or your device, like your email address.”

Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default.

The Apple TV also lets users control which apps can access the set-top box’s Bluetooth functionality, photos, music, and HomeKit data (if applicable), and the remote’s microphone.

“Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data,” said RJ Cross, director of the consumer privacy program at the Public Interest Research Group (PIRG). “I personally trust them more with my data than other tech companies.”

What if you share analytics data?

If you allow your Apple TV to share analytics data with Apple or app developers, that data won’t be personally identifiable, Apple says. Any collected personal data is “not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy,” Apple says.

Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation (PDF), Apple details its use of differential privacy:

The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don’t receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees.
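To make the "noise injection" idea concrete, here is a minimal sketch of the simplest local differential privacy mechanism, randomized response. This is not Apple's actual implementation (Apple's deployment uses more elaborate encodings and a tuned privacy budget); the epsilon value, the function names, and the simulated 30 percent figure are purely illustrative. Each device randomly flips its answer before reporting it, so the server can only recover an aggregate rate, never any individual's true response.

```python
import math
import random

def privatize(truth: bool, epsilon: float = 1.0) -> bool:
    """Runs on the device: report the true yes/no value with calibrated
    probability, otherwise report the opposite, so the server never
    receives the clear answer."""
    p_keep = math.exp(epsilon) / (1 + math.exp(epsilon))
    return truth if random.random() < p_keep else not truth

def estimate_rate(reports: list, epsilon: float = 1.0) -> float:
    """Runs on the server: correct the observed frequency for the known
    flip probability to estimate the population-level rate."""
    p_keep = math.exp(epsilon) / (1 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    # observed = p_keep * rate + (1 - p_keep) * (1 - rate); solve for rate
    return (observed - (1 - p_keep)) / (2 * p_keep - 1)

# 100,000 simulated devices, 30 percent of which truly have the attribute.
reports = [privatize(random.random() < 0.30) for _ in range(100_000)]
print(round(estimate_rate(reports), 3))  # lands near 0.30; each individual report stays deniable
```

The trade-off is governed by epsilon: more flipping means stronger deniability for any single device but noisier aggregate statistics, which is why schemes like this only work across large populations of contributing devices.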

What if you use an Apple account with your Apple TV?

Another factor to consider is Apple’s privacy policy regarding Apple accounts, formerly Apple IDs.

Apple support documentation says you “need” an Apple account to use an Apple TV, but you can use the hardware without one. Still, it’s common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes.

So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as “data about your activity on and use of” Apple offerings, including “app launches within our services…; browsing history; search history; [and] product interaction.”

Other types of data Apple may collect from Apple accounts include transaction information (Apple says this is “data about purchases of Apple products and services or transactions facilitated by Apple, including purchases on Apple platforms”), account information (“including email address, devices registered, account status, and age”), device information (including serial number and browser type), contact information (including physical address and phone number), and payment information (including bank details). None of that is surprising considering the type of data needed to make an Apple account work.

Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. And if you use the same Apple account across multiple devices, Apple knows that all the data it has collected from, say, your iPhone activity also applies to you as an Apple TV user.

A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won’t be able to use the Apple TV app, one of the device’s key features.

Data collection via the Apple TV app

You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of many (but not all) popular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is.

As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app’s privacy policy, “information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices.” That all makes sense for ensuring that the app remembers things like which episode of Severance you’re on across devices.

Apple collects other data, though, that isn’t necessary for functionality. It says it gathers data on things like the “features you use (for example, Continue Watching or Library),” content pages you view, how you interact with notifications, and approximate location information (that Apple says doesn’t identify users) to help improve the app.

Additionally, Apple tracks the terms you search for within the app, per its policy:

We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model.

This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate.

Data collected from the Apple TV app used for ads

By default, the Apple TV app also tracks “what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app” to make personalized content recommendations. Content recommendations aren’t ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you.

You can disable the Apple TV app’s personalized recommendations, but it’s a little harder than you might expect since you can’t do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off.

The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project (STOP), noted to Ars that even though Apple TV users can opt out of personalized content recommendations, “many will not realize they can.”

Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can “be used to serve geographically relevant ads,” according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes.

Apple’s tvOS doesn’t have integrated ads. For comparison, some TV OSes, like Roku OS and LG’s webOS, show ads on the OS’s home screen and/or when showing screensavers.

But data gathered from the Apple TV app can still help Apple’s advertising efforts. This can happen if you allow personalized ads in other Apple apps that serve targeted ads, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, “including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you,” the Apple TV app privacy policy says.

Apple also provides third-party advertisers and strategic partners with “non-personal data” gathered from the Apple TV app:

We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple’s behalf to display on the App Store and Apple News and Stocks.

Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, “and improve their associated products and services,” Apple says.

Apple’s policy notes:

For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographics[,] such as age group and gender (which may be inferred from information such as your name and salutation in your Apple Account), to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative work [and] meet royalty and accounting requirements.

When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app.

All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies “in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements.”

Warner Bros. Discovery says it shares information about Max viewers “with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests.” And Disney+ users have Nielsen tracking on by default.

What if you use Siri?

You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit.

According to the privacy policy accessible in Apple TV boxes’ settings, Apple boxes automatically send all Siri requests to Apple’s servers. If you opt into using Siri data to “Improve Siri and Dictation,” Apple will store your audio data. If you opt out, audio data won’t be stored, but per the policy:

In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple.

Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn’t store the audio but may store transcriptions of the audio.

If you opt to “Improve Siri and Dictation,” Apple says your history of voice requests isn’t tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option.

The policy states:

Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. After six months, your request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products …

Apple may also review a subset of the transcripts of your interactions and this … may be kept beyond two years for the ongoing improvements of products and services.

Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn’t always adhered to that commitment. In January, Apple agreed to pay $95 million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. And in 2019, contractors reviewing Siri recordings reported hearing private conversations, including people having sex.

Outside of Apple, we’ve seen voice request data used questionably, including in criminal trials and by corporate employees. Siri and dictation data also represent additional ways a person’s Apple TV usage might be unexpectedly analyzed to fuel Apple’s business.

Automatic content recognition

Apple TVs aren’t preloaded with automatic content recognition (ACR), an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point.

Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it’s technically possible for Apple to add ACR to already-purchased Apple boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs. (Enswers was acquired by Gracenote, which Nielsen now owns.)

In general, though, there are challenges to adding ACR to hardware that people already own, Li explained:

Everyone believes, in theory, you can add ACR anywhere you want at any time because it’s software, but because of the way [hardware is] architected… the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations.

Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, “including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation.”

Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route.

If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV’s biggest draws.

However, Apple reportedly has a growing interest in advertising to streaming subscribers. The Apple TV+ streaming service doesn’t currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom’s ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph.

Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. Further, Apple TV+ is one of the few streaming services to remain ad-free, and it has reportedly been losing Apple $1 billion per year since its launch.

One day soon, Apple may have much more reason to care about advertising in streaming and about tracking the activities of people who use its streaming offerings. That has implications for Apple TV box users.

“The more Apple creeps into the targeted ads space, the less I’ll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads,” PIRG’s Cross said.

Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers.

“Apple’s commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike,” Apple’s advertising guide for buying ads on Apple News and Stocks reads.

The most private streaming gadget

It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs (which are incredibly hard to find these days). And if Apple follows its own policies, much of the data it gathers should be kept in-house.

However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple’s streaming hardware and software settings prioritize privacy by default, some advocates believe there’s room for improvement.

For example, STOP’s Maestro said:

Unlike in the [European Union], where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple’s servers. Users are left with little way to verify those privacy promises.

Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. “Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple’s public commitment to privacy,” Maestro said.

There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money.

As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren’t totally incapable of tracking you. But they’re still the best recommendation for streaming users seeking hardware with more privacy and fewer ads.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



My 3D printing journey, part 2: Printing upgrades and making mistakes


3D-printing new parts for the A1 taught me a lot about plastic, and other things.

Different plastic filament is good for different things (and some kinds don’t work well with the A1 and other open-bed printers). Credit: Andrew Cunningham

For the last three months or so, I’ve been learning to use (and love) a Bambu Lab A1 3D printer, a big, loud machine that sits on my desk and turns pictures on my computer screen into real-world objects.

In the first part of my series about diving into the wild world of 3D printers, I covered what I’d learned about the different types of 3D printers, some useful settings in the Bambu Studio app (which should also be broadly useful to know about no matter what printer you use), and some initial, magical-feeling successes in downloading files that I turned into useful physical items using a few feet of plastic filament and a couple hours of time.

For this second part, I’m focusing on what I learned when I embarked on my first major project—printing upgrade parts for the A1 with the A1. It was here that I made some of my first big 3D printing mistakes, mistakes that prompted me to read up on the different kinds of 3D printer filament, what each type of filament is good for, and which types the A1 is (and is not) good at handling as an un-enclosed, bed-slinging printer.

As with the information in part one, I share this with you not because it is groundbreaking but because there’s a lot of information out there, and it can be an intimidating hobby to break into. By sharing what I learned and what I found useful early in my journey, I hope I can help other people who have been debating whether to take the plunge.

Adventures in recursion: 3D-printing 3D printer parts

A display cover for the A1’s screen will protect it from wear and tear and allow you to easily hide it when you want to. Credit: Andrew Cunningham

My very first project was a holder for my office’s ceiling fan remote. My second, similarly, was a wall-mounted holder for the Xbox gamepad and wired headset I use with my gaming PC, which normally just had to float around loose on my desk when I wasn’t using them.

These were both relatively quick, simple prints that showed the printer was working like it was supposed to—all of the built-in temperature settings, the textured PEI plate, and the printer’s calibration and auto-bed-leveling routines added up to make simple prints as dead-easy as Bambu promised they would be. It made me eager to seek out other prints, including stuff on the Makerworld site I hadn’t thought to try yet.

The first problem I had? Well, as part of its pre-print warmup routine, the A1 spits a couple of grams of filament out and tosses it to the side. This is totally normal—it’s called “purging,” and it gets rid of filament that’s gone brittle from being heated too long. If you’re changing colors, it also clears any last bits of the previous color that are still in the nozzle. But it didn’t seem particularly elegant to have the printer eternally launching little knots of plastic onto my desk.

The A1’s default design just ejects little molten wads of plastic all over your desk when it’s changing or purging filament. This is one of many waste bin (or “poop bucket”) designs made to catch and store these bits and pieces. Credit: Andrew Cunningham

The solution to this was to 3D-print a purging bucket for the A1 (also referred to, of course, as a “poop bucket” or “poop chute.”) In fact, there are tons of purging buckets designed specifically for the A1 because it’s a fairly popular budget model and there’s nothing stopping people from making parts that fit it like a glove.

I printed this bucket, as well as an additional little bracket that would “catch” the purged filament and make sure it fell into the bucket. And this opened the door to my first major printing project: printing additional parts for the printer itself.

I took to YouTube and watched a couple of videos on the topic because I’m apparently far from the first person who has had this reaction to the A1. After much watching and reading, here are the parts I ended up printing:

  • Bambu Lab AMS Lite Top Mount and Z-Axis Stiffener: The Lite version of Bambu’s Automated Materials System (AMS) is the optional accessory that enables multi-color printing for the A1. And like the A1 itself, it’s a lower-cost, open-air version of the AMS that works with Bambu’s more expensive printers.
    • The AMS Lite comes with a stand that you can use to set it next to the A1, but that’s more horizontal space than I had to spare. This top mount is Bambu’s official solution for putting the AMS Lite on top of the A1 instead, saving you some space.
    • The top mount actually has two important components: the top mount itself and a “Z-Axis Stiffener,” a pair of legs that extend behind the A1 to make the whole thing more stable on a desk or table. Bambu already recommends 195 mm (or 7.7 inches) of “safety margin” behind the A1 to give the bed room to sling, so if you’ve left that much space behind the printer, you probably have enough space for these legs.
    • After installing all of these parts, the top mount, and a fully loaded AMS, it’s probably a good idea to run the printer’s calibration cycle again to account for the difference in balance.
    • You may want to print the top mount itself with PETG, which is a bit stronger and more impact-resistant than PLA plastic.
  • A1 Purge Waste Bin and Deflector, by jimbobble. There are approximately 1 million different A1 purge bucket designs, each with its own appeal. But this one is large and simple and includes a version that is compatible with the printer Z-Axis Stiffener legs.
  • A1 rectangular fan cover, by Arzhang Lotfi. There are a bunch of options for this, including fun ones, but you can find dozens of simple grille designs that snap in place and protect the fan on the A1’s print head.
  • Bambu A1 Adjustable Camera Holder, by mlodybuk: This one’s a little more complicated because it does require some potentially warranty-voiding disassembly of components. The A1’s camera is also pretty awful no matter how you position it, with sub-1 FPS video that’s just barely suitable for checking on whether a print has been ruined or not.
    • But if you want to use it, I’d highly recommend moving it from the default location, which is low down and at an odd angle, so you’re not getting the best view of your print that you can.
    • This print includes a redesigned cover for the camera area, a filler piece to fill the hole where the camera used to be to keep dust and other things from getting inside the printer, and a small camera receptacle that snaps in place onto the new cover and can be turned up and down.
    • If you’re not comfortable modding your machine like this, the camera is livable as-is, but this got me a much better vantage point on my prints.

With a little effort, this print allows you to reposition the A1’s camera, giving you a better angle on your prints and making it adjustable. Credit: Andrew Cunningham

  • A1 Screen Protector New Release, by Rox3D: Not strictly necessary, but an unobtrusive way to protect (and to “turn off”) the A1’s built-in LCD screen when it’s not in use. The hinge mechanism of this print is stiff enough that the screen cover can be lifted partway without flopping back down.
  • A1 X-Axis Cover, by Moria3DPStudio: Another only-if-you-want-it print, this foldable cover slides over the A1’s exposed rail when you’re not using it. Just make sure you take it back off before you try to print anything—it won’t break anything, but the printer won’t be happy with you. Not that I’m speaking from experience.
  • Ultimate Filament Spool Enclosure for the AMS Lite, by Supergrapher: Here’s the big one, and it’s a true learning experience for all kinds of things. The regular Bambu AMS system for the P- and X-series printers is enclosed, which is useful not just for keeping dust from settling on your filament spools but for controlling humidity and keeping spools you’ve dried from re-absorbing moisture. There’s no first-party enclosure for the AMS Lite, but this user-created enclosure is flexible and popular, and it can be used to enclose the AMS Lite whether you have it mounted on top of or to the side of the A1. The small plastic clips that keep the lids on are mildly irritating to pop on and off, relative to a lid that you can just lift up and put back down, but the benefits are worth it.
  • 3D Disc for A1 – “Pokéball,” by BS 3D Print: One of the few purely cosmetic parts I’ve printed. The little spinning bit on the front of the A1’s print head shows you when the filament is being extruded, but it’s not a functional part. This is just one of dozens and dozens of cosmetic replacements for it if you choose to pop it off.
  • Sturdy Modular Filament Spool Rack, by Antiphrasis: Not technically an upgrade for the A1, but an easy recommendation for any new 3D printer owner who suddenly finds themselves with a rainbow of a dozen-plus different filaments to try. Each shelf here holds three spools of filament, and you can print additional shelves to spread them out either horizontally, vertically, or both, so you can make something that exactly meets your needs and fits your space. A two-by-three shelf gave me room for 18 spools, and I can print more if I need them.

There are some things that others recommend for the A1 that I haven’t printed yet—mainly guides for cables, vibration dampeners for the base, and things to reinforce areas of possible stress for the print head and the A1’s loose, dangly wire.

Part of the fun is figuring out what your problems are, identifying prints that could help solve the problem, and then trying them out to see if they do solve your problem. (The parts have also given my A1 its purple accents, since a bright purple roll of filament was one of the first ones my 5-year-old wanted to get.)

Early mistakes

The “Z-Axis stiffener,” an extra set of legs for the A1 that Bambu recommends if you top-mount your AMS Lite. This took me three tries to print, mainly because of my own inexperience. Credit: Andrew Cunningham

Printing each of these parts gave me a solid crash course in common pitfalls and rookie mistakes.

For example, did you know that ABS plastic doesn’t print well on an open-bed printer? Well, it doesn’t! But I didn’t know that when I bought a spool of ABS to print some parts that I wanted to be sturdier and more resistant to wear and tear. I’d open the window and leave the room to deal with the fumes and be fine, I figured.

I tried printing the Z-Axis Stiffener supports for the A1 in ABS, but they went wonky. Lower bed temperature and (especially) ambient temperature tend to make ABS warp and curl upward, and extrusion-based printers rely on precision to do their thing. Once a layer—any layer!—gets screwed up during a print, that error will reverberate throughout the entire rest of the object. Which is why my first attempt at supports ended up being totally unusable.

Large ABS plastic prints are tough to do on an open-bed printer. You can see here how that lower-left corner peeled upward slightly from the print bed, and any unevenness in the foundation of your print is going to reverberate in the layers that are higher up. Credit: Andrew Cunningham

I then tried printing another set of supports with PLA plastic, ones that claimed to maintain their sturdiness while using less infill (that is, how much plastic is actually used inside the print to give it rigidity—around 15 percent is typically a good balance between rigidity and wasting plastic that you’ll never see, though there may be times when you want more or less). I’m still not sure what I did, but the prints I got were squishy and crunchy to the touch, a clear sign that the amount and/or type of infill wasn’t sufficient. It wasn’t until my third try—the original Bambu-made supports, in PLA instead of ABS—that I made supports I could actually use.

An attempt at printing the same part with PLA, but with insufficient infill plastic that left my surfaces rough and the interiors fragile and crunchy. I canceled this one about halfway through when it became clear that something wasn’t right. Credit: Andrew Cunningham

After much reading and research, I learned that for most things, PETG plastic is what you use if you want to make sturdier (and outdoor-friendly) prints on an open bed. Great! I decided I’d print most of the AMS Lite enclosure with clear PETG filament to make something durable that I could also see through when I wanted to see how much filament was left on a given spool.

This ended up being a tricky first experiment with PETG plastic for three different reasons. For one, printing “clear” PETG that actually looks clear is best done with a larger nozzle (Bambu offers 0.2 mm, 0.6 mm, and 0.8 mm nozzles for the A1, in addition to the default 0.4 mm) because you can get the same work done in fewer layers, and the more layers you have, the less “clear” that clear plastic will be. Fine!

The Inland-brand clear PETG+ I bought from our local Micro Center also didn’t love the default temperature settings for generic PETG that the A1 uses, both for the heatbed and the filament itself; plastic flowed unevenly from the nozzle and was prone to coming detached from the bed. If this happens to you (or if you want to experiment with lowering your temperatures to save a bit of energy), go into Bambu Studio, nudge the temperatures by 5 degrees in either direction, and run a quick test print (I like this one); that process helped me dial in my settings when using unfamiliar filament.

This homebrewed enclosure for the AMS Lite multi-color filament switcher (and the top mount that sticks it on the top of the printer) has been my biggest and most complex print to date. An 0.8 mm nozzle and some settings changes are recommended to maximize the transparency of transparent PETG filament. Credit: Andrew Cunningham

Finally, PETG is especially prone to absorbing ambient moisture. When that moisture hits a 260° C nozzle, it quickly evaporates, and that can interfere with the evenness of the flow rate and the cleanliness of your print (this usually manifests as “stringing”: fine, almost cotton-y strands that hang off your finished prints).

You can buy dedicated filament drying boxes or stick spools in an oven at a low temperature for a few hours if this really bothers you or if it’s significant enough to affect the quality of your prints. One of the reasons to have an enclosure is to create a humidity-controlled environment to keep your spools from absorbing too much moisture in the first place.

The temperature and nozzle-size adjustments made me happy enough with my PETG prints that I was fine to pick off the little fuzzy stringers that were on my prints afterward, but your mileage may vary.

These are just a few examples of the kinds of things you learn if you jump in with both feet and experiment with different prints and plastics in rapid succession. Hopefully, this advice helps you avoid my specific mistakes. But the main takeaway is that experience is the best teacher.

The wide world of plastics

I used filament to print a modular filament shelf for my filaments. Credit: Andrew Cunningham

My wife had gotten me two spools of filament, a white and a black spool of Bambu’s own PLA Basic. What does all of that mean?

No matter what you’re buying, it’s most commonly sold in 1 kilogram spools (the weight of the plastic, not the plastic and the spool together). Each thing you print will give you an estimate of how much filament, in grams, you’ll need to print it.
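Those per-print gram estimates make quick cost math easy. Here is a minimal back-of-the-envelope sketch; the $20 and $25 spool prices and the 20 g part are assumptions for illustration, not figures from Bambu Studio or any particular vendor.

```python
def material_cost(grams_used: float, spool_price: float, spool_grams: float = 1000.0) -> float:
    """Rough material cost of one print: the slicer's gram estimate times
    the spool's per-gram price (most spools hold 1 kg of filament)."""
    return grams_used * (spool_price / spool_grams)

# Hypothetical 20 g print from a $20/kg spool of PLA vs. a $25/kg spool of PETG.
print(f"PLA:  ${material_cost(20, 20.00):.2f}")   # PLA:  $0.40
print(f"PETG: ${material_cost(20, 25.00):.2f}")   # PETG: $0.50
```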

There are quite a few different types of plastics out there, on Bambu’s site and in other stores. But here are the big ones I found out about almost immediately:

Polylactic acid, or PLA

By far the most commonly used plastic, PLA is inexpensive, available in a huge rainbow of colors and textures, and has a relatively low melting point, making it an easy material for most 3D printers to work with. It’s made of renewable material rather than petroleum, which makes it marginally more environmentally friendly than some other kinds of plastic. And it’s easy to “finish” PLA-printed parts if you’re trying to make props, toys, or other objects that you don’t want to have that 3D printed look about them, whether you’re sanding those parts or using a chemical to smooth the finish.

The downside is that it’s not particularly resilient—sitting in a hot car or in direct sunlight for very long is enough to melt or warp it, which makes it a bad choice for anything that needs to survive outdoors or anything load-bearing. Its environmental bona fides are also a bit oversold—it is biodegradable, but it doesn’t break down quickly outside of specialized composting facilities. If you throw it in the trash and it goes to a landfill, it will still take its time returning to nature.

You’ll find a ton of different kinds of PLA out there. Some have additives that give them a matte or silky texture. Some have little particles of wood or metal or even coffee or spent beer grains embedded in them, meant to endow 3D printed objects with the look, feel, or smell of those materials.

Some PLA just has… some other kind of unspecified additive in it. You’ll see “PLA+” all over the place, but as far as I can tell, there is no industry-wide agreed-upon standard for what the plus is supposed to mean. Manufacturers sometimes claim it’s stronger than regular PLA; other terms like “PLA Pro” and “PLA Max” are similarly non-standardized and vague.

Polyethylene terephthalate glycol, or PETG

PET is a common household plastic, and you’ll find it in everything from clothing fibers to soda bottles. PETG is a glycol-modified version of the same material (the “G”), which lowers the melting point and makes it less prone to crystallizing and warping. The modification also makes it more transparent, though trying to print anything truly “transparent” with an extrusion printer is difficult.

PETG has a higher melting point than PLA, but it’s still lower than other kinds of plastics. This makes PETG a good middle ground for some types of printing. It’s better than PLA for functional load-bearing parts and outdoor use because it’s stronger and able to bend a bit without warping, but it’s still malleable enough to print well on all kinds of home 3D printers.

PETG can still be fussier to work with than PLA. I more frequently had issues with the edges of my PETG prints coming unstuck from the bed of the printer before the print was done.

PETG filament is also especially susceptible to absorbing moisture from the air, which can make extrusion messier. My PETG prints have usually had lots of little wispy strings of plastic hanging off them by the end—not enough to affect the strength or utility of the thing I’ve printed but enough that I needed to pull the strings off to clean up the print once it was done. Drying the filament properly could help with that if I ever need the prints to be cleaner in the first place.

It’s also worth noting that PETG is the strongest kind of filament that an open-bed printer like the A1 can handle reliably. You can succeed with other plastics, but Reddit anecdotes, my own personal experience, and Bambu’s filament guide all point to a higher level of difficulty.

Acrylonitrile butadiene styrene, or ABS

“Going to look at the filament wall at Micro Center” is a legit father-son activity at this point. Credit: Andrew Cunningham

You probably have a lot of ABS plastic in your life. Game consoles and controllers, the plastic keys on most keyboards, Lego bricks, appliances, plastic board game pieces—it’s mostly ABS.

Thin layers of ABS stuck together aren’t as strong or durable as commercially manufactured injection-molded ABS, but it’s still more heat-resistant and durable than 3D-printed PLA or PETG.

There are two big issues specific to ABS, which are also outlined in Bambu’s FAQ for the A1. The first is that it doesn’t print well on an open-bed printer, especially for larger prints. The corners are more prone to pulling up off the print bed, and as with a house, any problems in your foundation will reverberate throughout the rest of your print.

The second is fumes. All 3D-printed plastics emit fumes when they’ve been melted, and a good rule of thumb is to at least print things in a room where you can open the window (and not in a room where anyone or anything sleeps). But ABS and ASA plastics in particular can emit fumes that cause eye and respiratory irritation, headaches, and nausea if you’re printing them indoors with insufficient ventilation.

As for what quantity of printing counts as “dangerous,” there’s no real consensus, and the studies that have been done mostly land in inconclusive “further study is needed” territory. At a bare minimum, it’s considered a best practice to at least be able to open a window if you’re printing with ABS or to use a closed-bed printer in an unoccupied part of your home, like a garage, shed, or workshop space (if you have one).

Acrylonitrile styrene acrylate, or ASA

Described to me by Ars colleague Lee Hutchinson as “ABS but with more UV resistance,” this material is even better suited for outdoor applications than the other plastics on this list.

But also like ABS, you’ll have a hard time getting good results with an open-bed printer, and the fumes are more harmful to inhale. You’ll want a closed-bed printer and decent ventilation for good results.

Thermoplastic polyurethane, or TPU

TPU is best known for its flexibility relative to the other kinds of plastics on this list. It doesn’t get as brittle when it’s cold and has more impact-resistance, and it can print reasonably well on an open-bed printer.

One downside of TPU is that you need to print slowly to get reliably good results—a pain, when even relatively simple fidget toys can take an hour or two to print at full speed using PLA. Longer prints mean more power use and more opportunities for your print to peel off the print bed. A roll of TPU filament will also usually run you a few dollars more than a roll of PLA, PETG, or ABS.

First- or third-party filament?

The first-party Bambu spools have RFID chips in them that Bambu printers can scan to automatically identify the type and color of filament and to keep track of how much filament you have remaining. Bambu also has temperature and speed presets for all of its first-party filaments built into the printer and the Bambu Studio software. There are presets for a few other filament brands in the printer, but I usually ended up using the “generic” presets, which may need some tuning to ensure the best possible adhesion to the print bed and extrusion from the nozzle.

I mostly ended up using Inland-branded filament I picked up from my local Micro Center—both because it’s cheaper than Bambu’s first-party stuff and because it’s faster and easier for me to get to. If you don’t have a brick-and-mortar hobby store with filaments in stock, the A1 and other printers sometimes come with some sample filament swatches so you can see the texture and color of the stuff you’re buying online.

What’s next?

Part of the fun of 3D printing is that it can be used for a wide array of projects—organizing your desk or your kitchen, printing out little fidget-toy favors for your kid’s birthday party, printing out replacement parts for little plastic bits and bobs that have broken, or just printing out decorations and other objects you’ll enjoy looking at.

Once you’re armed with all of the basic information in this guide, the next step is really up to you. What would you find fun or useful? What do you need? How can 3D printing help you with other household tasks or hobbies that you might be trying to break into? For the last part of this series, the Ars staffers with 3D printers at home will share some of their favorite prints—hearing people talk about what they’d done themselves really opened my eyes to the possibilities and the utility of these devices, and more personal testimonials may help those of you who are on the fence to climb down off of it.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Where hyperscale hardware goes to retire: Ars visits a very big ITAD site

Inside the laptop/desktop examination bay at SK TES’s Fredericksburg, Va. site.

Credit: SK TES


The details of each unit—CPU, memory, HDD size—are taken down and added to the asset tag, and the device is sent on to be physically examined. This step is important because “many a concealed drive finds its way into this line,” Kent Green, manager of this site, told me. Inside the machines coming from big firms, there are sometimes little USB, SD, SATA, or M.2 drives hiding out. Some were make-do solutions installed by IT and not documented, and others were put there by employees tired of waiting for more storage. “Some managers have been pretty surprised when they learn what we found,” Green said.

With everything wiped and with some sense of what they’re made of, each device gets a rating. It’s a three-character system, like “A-3-6,” based on function, cosmetic condition, and component value. Based on needs, trends, and other data, devices that are cleared for resale go to either wholesale, retail, component harvesting, or scrap.
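SK TES doesn't publish the exact thresholds behind those decisions, but the shape of the logic is easy to sketch: a grade on three axes feeds a routing rule. The grade bands and cutoffs below are invented for illustration only, not a description of the company's actual criteria.

```python
from typing import NamedTuple

class Grade(NamedTuple):
    function: str   # e.g., "A" = fully functional, "B" = minor faults, "C" = parts only
    cosmetic: int   # e.g., 1 (like new) through 5 (heavily worn)
    value: int      # component-value band; higher = more valuable parts

def disposition(grade: Grade) -> str:
    """Illustrative routing only; the real rules weigh demand, trends, and resale data."""
    if grade.function == "A" and grade.cosmetic <= 3:
        return "retail"
    if grade.function == "A":
        return "wholesale"
    if grade.value >= 5:
        return "component harvesting"
    return "scrap"

print(disposition(Grade("A", 3, 6)))  # "retail" under these made-up cutoffs
```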

Full-body laptop skins

Wiping down and prepping a laptop, potentially for a full-cover adhesive skin.

Credit: SK TES


If a device has retail value, it heads into a section of this giant facility where workers do further checks. Automated software plays sounds on the speakers, checks that every keyboard key is sending signals, and checks that laptop batteries are at 80 percent capacity or better. At the end of the line is my favorite discovery: full-body laptop skins.
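First, though, a quick note on that battery check. SK TES doesn't disclose its diagnostic tooling, but the capacity test itself is simple in principle: compare what the battery can hold now against what it was designed to hold. The sketch below does that on Linux via sysfs; the BAT0 path, the 80 percent threshold, and the fallback between charge_* and energy_* fields are assumptions about a typical laptop, not a description of SK TES's software.

```python
from pathlib import Path

def battery_passes(bat: str = "BAT0", threshold: float = 0.80) -> bool:
    """Pass the battery if its current full-charge capacity is at least
    `threshold` of its design capacity."""
    base = Path("/sys/class/power_supply") / bat
    # Some machines report charge_* (microamp-hours), others energy_* (microwatt-hours).
    for prefix in ("charge", "energy"):
        full = base / f"{prefix}_full"
        design = base / f"{prefix}_full_design"
        if full.exists() and design.exists():
            ratio = int(full.read_text()) / int(design.read_text())
            print(f"{bat}: {ratio:.0%} of design capacity")
            return ratio >= threshold
    raise FileNotFoundError(f"no capacity data for {bat}")

# battery_passes()  # e.g., prints "BAT0: 87% of design capacity" and returns True
```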

Some laptops—certain Lenovo, Dell, and HP models—are so ubiquitous in corporate fleets that it’s worth buying an adhesive laminating sticker in their exact shape. They’re an uncanny match for the matte black, silver, and slightly less silver finishes of the laptops, covering up any blemishes and scratches. Watching one of the workers apply this made me jealous of their ability to essentially reset a laptop’s condition (so one could apply whole new layers of swag stickers, of course). Once rated, tested, and stickered, laptops go into a clever “cradle” box, get the UN 3481 “battery inside” sticker, and can be sold through retail.



200 mph for 500 miles: How IndyCar drivers prepare for the big race


Andretti Global’s Kyle Kirkwood and Marcus Ericsson talk to us about the Indy 500.

#28, Marcus Ericsson, Andretti Global Honda prior to the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 15, 2025, in Indianapolis, Indiana. Credit: Brandon Badraoui/Lumen via Getty Images

This coming weekend is a special one for most motorsport fans. There are Formula 1 races in Monaco and NASCAR races in Charlotte. And arguably towering over them both is the Indianapolis 500, being held this year for the 109th time. America’s oldest race is also one of its toughest: The track may have just four turns, but the cars negotiate them going three times faster than you drive on the highway, inches from the wall. For hours. At least at Le Mans, you have more than one driver per car.

This year’s race promises to be an exciting one. The track is sold out for the first time since the centenary race in 2016. A rookie driver and a team new to the series took pole position. Two very fast cars are starting at the back thanks to another conflict-of-interest scandal involving Team Penske, the second in two years for a team whose owner also owns the track and the series. And the cars are trickier to drive than they have been for many years, thanks to a new supercapacitor-based hybrid system that has added more than 100 lbs to the rear of the car, shifting the weight distribution further back.

Ahead of Sunday’s race, I spoke with a couple of IndyCar drivers and some engineers to get a better sense of how they prepare and what to expect.

This year, the cars are harder to drive thanks to a hybrid system that has altered the weight balance. Credit: Geoff Miller/Lumen via Getty Images

Concentrate

It all comes “from months of preparation,” said Marcus Ericsson, winner of the race in 2022 and one of Andretti Global’s drivers in this year’s event. “When we get here to the month of May, it’s just such a busy month. So you’ve got to be prepared mentally—and basically before you get to the month of May because if you start doing it now, it’s too late,” he told me.

The drivers spend all month at the track, with a race on the road course earlier this month. Then there’s testing on the historic oval, followed by qualifying last weekend and the race this coming Sunday. “So all those hours you put in in the winter, really, and leading up here to the month of May—it’s what pays off now,” Ericsson said. That work involved multiple sessions of physical training each week, and Ericsson says he also does weekly mental coaching sessions.

“This is a mental challenge,” Ericsson told me. “Doing those speeds with our cars, you can’t really afford to have a split second of loss of concentration because then you might be in the wall and your day is over and you might hurt yourself.”

When drivers get tired or their focus slips, that’s when mistakes happen, and a mistake at Indy often has consequences.

Ericsson is sponsored by the antihistamine Allegra and its anti-drowsy-driving campaign. Fans can scan the QR codes on the back of his pit crew’s shirts for a “gamified experience.” Credit: Andretti Global/Allegra

Simulate

Being mentally and physically prepared is part of it. It also helps if you can roll the race car off the transporter and onto the track with a setup that works rather than spending the month chasing the right combination of dampers, springs, wing angles, and so on. And these days, that means a lot of simulation testing.

The multi-axis driver-in-the-loop simulators might look like just a very expensive video game, but these multimillion-dollar setups aren’t about having fun. “Everything that you are feeling or changing in the sim is ultimately going to reflect directly to what happens on track,” explained Kyle Kirkwood, teammate to Ericsson at Andretti Global and one of only two drivers to have won an IndyCar race in 2025.

Andretti, like the other teams using Honda engines, uses the new HRC simulator in Indiana. “And yes, it’s a very expensive asset, but it’s also likely cheaper than going to the track and doing the real thing,” Kirkwood said. “And it’s a much more controlled environment than being at the track because temperature changes or track conditions or wind direction play a huge factor with our car.”

A high degree of correlation between the simulation and the track is what makes it a powerful tool. “We run through a sim, and you only get so many opportunities, especially at a place like Indianapolis, where you go from one day to the next and the temperature swings, or the wind conditions, or whatever might change drastically,” Kirkwood said. “You have to be able to sim it and be confident with the sim that you’re running to go out there and have a similar balance or a similar performance.”

Andretti Global’s Kyle Kirkwood is the only driver other than Álex Palou to have won an IndyCar race in 2025. Credit: Alison Arena/Andretti Global

“So you have to make adjustments, whether it’s a spring rate, whether it’s keel ballast or just overall, maybe center of pressure, something like that,” Kirkwood said. “You have to be able to adjust to it. And that’s where the sim tool comes in play. You move the weight balance back, and you’re like, OK, now what happens with the balance? How do I tune that back in? And you run that all through the sim, and for us, it’s been mirror-perfect going to the track when we do that.”

More impressively, a lot of that work was done months ago. “I would say most of it, we got through it before the start of this season,” Kirkwood said. “Once we get into the season, we only get a select few days because every Honda team has to run on the same simulator. Of course, it’s different with the engineering sim; those are running nonstop.”

Sims are for engineers, too

An IndyCar team is more than just its drivers—”the spacer between the seat and the wheel,” according to Kirkwood—and the engineers rely heavily on sim work now that real-world testing is so highly restricted. And they use a lot more than just driver-in-the-loop (DiL).

“Digital simulation probably goes to a higher level,” explained Scott Graves, engineering manager at Andretti Global. “A lot of the models we develop work in the DiL as well as our other digital tools. We try to develop universal models, whether that’s tire models, engine models, or transmission models.”

“Once you get into a fully digital model, then I think your optimization process starts kicking in,” Graves said. “You’re not just changing the setting and running a pretend lap with a driver holding a wheel. You’re able to run through numerous settings and optimization routines and step through a massive number of permutations on a car. Obviously, you’re looking for better lap times, but you’re also looking for fuel efficiency and a lot of other parameters that go into crossing the finish line first.”
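Here is what "stepping through a massive number of permutations" looks like in miniature: sweep every combination of a few setup parameters through a model and keep the best one. The parameters, ranges, and the stand-in lap-time function below are entirely hypothetical; a real team would call its proprietary vehicle-dynamics simulation (and optimize several objectives at once), not this toy formula.

```python
from itertools import product

def lap_time_model(spring_rate: int, wing_angle: int, ballast_pos: float) -> float:
    """Stand-in for a vehicle-dynamics simulation: predicted lap time in
    seconds for one setup. Purely illustrative numbers."""
    return (40.0
            + 0.002 * (spring_rate - 950) ** 2 / 100
            + 0.05 * (wing_angle - 12) ** 2
            + 0.8 * abs(ballast_pos - 0.45))

spring_rates = range(800, 1101, 50)                      # lb/in (hypothetical)
wing_angles = range(8, 17)                               # degrees (hypothetical)
ballast_positions = [x / 100 for x in range(35, 56, 5)]  # fraction of wheelbase (hypothetical)

# Step through every permutation and keep the setup with the lowest predicted lap time.
best = min(product(spring_rates, wing_angles, ballast_positions),
           key=lambda setup: lap_time_model(*setup))
print("best setup:", best, "->", round(lap_time_model(*best), 3), "s")
```

Swap the toy formula for a calibrated digital twin and add objectives like fuel consumption, and you have the basic shape of the workflow Graves describes, just at vastly larger scale and fidelity.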

Parts like this anti-roll bar are simulated thousands of times. Credit: Siemens/Andretti Global

As an example, Graves points to the dampers. “The shock absorber is a perfect example where that’s a highly sophisticated piece of equipment on the car and it’s very open for team development. So our cars have fully customized designs there that are optimized for how we run the car, and they may not be good on another team’s car because we’re so honed in on what we’re doing with the car,” he said.

“The more accurate a digital twin is, the more we are able to use that digital twin to predict the performance of the car,” said David Taylor, VP of industry strategy at Siemens DISW, which has partnered with Andretti for some years now. “It will never be as complete and accurate as we want it to be. So it’s a continuous pursuit, and we keep adding technology to our portfolio and acquiring companies to try to provide more and more tools to people like Scott so they can more accurately predict that performance.”

What to expect on Sunday?

Kirkwood was bullish about his chances despite starting relatively deep in the field, qualifying in 23rd place. “We’ve been phenomenal in race trim and qualifying,” he said. “We had a bit of a head-scratcher if I’m being honest—I thought we would definitely be a top-six contender, if not a front row contender, and it just didn’t pan out that way on Saturday qualifying.”

“But we rolled back out on Monday—the car was phenomenal. Once again, we feel very, very racy in traffic, which is a completely different animal than running qualifying,” Kirkwood said. “So I’m happy with it. I think our chances are good. We’re starting deep in the field, but so are a lot of other drivers. So you can expect a handful of us to move forward.”

The more nervous hybrid IndyCars with their more rearward weight bias will probably result in more cautions, according to Ericsson, who will line up sixth for the start of the race on Sunday.

“Whereas in previous years you could have a bit of a moment and it would scare you, but you usually get away with it,” he said. “This year, if you have a moment, it usually ends up with you being in the fence. I think that’s why we’ve seen so many crashes this year—because of a pendulum effect from the rear of the car: when you start losing it, it’s very, very difficult or almost impossible to catch.”

“I think it’s going to mean that the race is going to have quite a few incidents with people making mistakes,” Ericsson said. “In practice, if your car is not behaving well, you bring it to the pit lane, right? You can do adjustments, whereas in the race, you have to just tough it out until the next pit stop and then make some small adjustments. So if you have a bad car at the start of a race, it’s going to be a tough one. So I think it’s going to be a very dramatic and entertaining race.”

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

200 mph for 500 miles: How IndyCar drivers prepare for the big race Read More »

what-i-learned-from-my-first-few-months-with-a-bambu-lab-a1-3d-printer,-part-1

What I learned from my first few months with a Bambu Lab A1 3D printer, part 1


One neophyte’s first steps into the wide world of 3D printing.

The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham

The hotend on my Bambu Lab A1 3D printer. Credit: Andrew Cunningham

For a couple of years now, I’ve been trying to find an excuse to buy a decent 3D printer.

Friends and fellow Ars staffers who had them would gush about them at every opportunity, talking about how useful they can be and how much can be printed once you get used to the idea of being able to create real, tangible objects with a little time and a few bucks’ worth of plastic filament.

But I could never quite imagine myself using one consistently enough to buy one. Then, this past Christmas, my wife forced the issue by getting me a Bambu Lab A1 as a present.

Since then, I’ve been tinkering with the thing nearly daily, learning more about what I’ve gotten myself into and continuing to find fun and useful things to print. I’ve gathered a bunch of thoughts about my learning process here, not because I think I’m breaking new ground but to serve as a blueprint for anyone who has been on the fence about Getting Into 3D Printing. “Hyperfixating on new hobbies” is one of my go-to coping mechanisms during times of stress and anxiety, and 3D printing has turned out to be the perfect combination of fun, practical, and time-consuming.

Getting to know my printer

My wife settled on the Bambu A1 because it’s a larger version of the A1 Mini, Wirecutter’s main 3D printer pick at the time (she also noted it was “hella on sale”). Other reviews she read noted that it’s beginner-friendly, easy to use, and fun to tinker with, and it has a pretty active community for answering questions, all assessments I agree with so far.

Note that this research was done some months before Bambu earned bad headlines because of firmware updates that some users believe will lead to a more locked-down ecosystem. This is a controversy I understand—3D printers are still primarily the realm of DIYers and tinkerers, people who are especially sensitive to the closing of open ecosystems. But as a beginner, I’m already leaning mostly on the first-party tools and built-in functionality to get everything going, so I’m not really experiencing the sense of having “lost” features I was relying on, and any concerns I did have are mostly addressed by Bambu’s update about its update.

I hadn’t really updated my preconceived notions of what home 3D printing was since its primordial days, something Ars has been around long enough to have covered in some depth. I was wary of getting into yet another hobby where, like building your own gaming PC, fiddling with and maintaining the equipment is part of the hobby. But Bambu’s printers (and those like them) are capable of turning out fairly high-quality prints with minimal fuss, and nothing will draw you into the hobby faster than a few successful prints.

Basic terminology

Extrusion-based 3D printers (also sometimes called “FDM,” for “fused deposition modeling”) work by depositing multiple thin layers of melted plastic filament on a heated bed. Credit: Andrew Cunningham

First things first: The A1 is what’s called an “extrusion” printer, meaning that it functions by melting a long, slim thread of plastic (filament) and then depositing this plastic onto a build plate seated on top of a heated bed in tens, hundreds, or even thousands of thin layers. In the manufacturing world, this is also called “fused deposition modeling,” or FDM. This layer-based extrusion gives 3D-printed objects their distinct ridged look and feel and is also why a 3D-printed piece of plastic is less detailed-looking and weaker than an injection-molded piece of plastic like a Lego brick.

The other readily available home 3D printing technology takes liquid resin and uses UV light to harden it into a plastic structure, using a process called “stereolithography” (SLA). You can get inexpensive resin printers in the same price range as the best cheap extrusion printers, and the SLA process can create much more detailed, smooth-looking, and watertight 3D prints (it’s popular for making figurines for tabletop games). Some downsides are that the print beds in these printers are smaller, resin is a bit fussier than filament, and multi-color printing isn’t possible.

There are two main types of home extrusion printers. The Bambu A1 is a Cartesian printer, or in more evocative and colloquial terms, a “bed slinger.” In these, the head of the printer can move up and down on one or two rails and from side to side on another rail. But the print bed itself has to move forward and backward to “move” the print head on the Y axis.

More expensive home 3D printers, including higher-end Bambu models in the P- and X-series, are “CoreXY” printers, which include a third rail or set of rails (and more Z-axis rails) that allow the print head to travel in all three directions.

The A1 is also an “open-bed” printer, which means that it ships without an enclosure. Closed-bed printers are more expensive, but they can maintain a more consistent temperature inside and help contain the fumes from the melted plastic. They can also reduce the amount of noise coming from your printer.

Together, the downsides of a bed-slinger (introducing more wobble for tall prints, more opportunities for parts of your print to come loose from the plate) and an open-bed printer (worse temperature, fume, and dust control) mainly just mean that the A1 isn’t well-suited for printing certain types of plastic and has more potential points of failure for large or delicate prints. My experience with the A1 has been mostly positive now that I know about those limitations, but the printer you buy could easily change based on what kinds of things you want to print with it.

Setting up

Overall, the setup process was reasonably simple, at least for someone who has been building PCs and repairing small electronics for years now. It’s not quite the same as the “take it out of the box, remove all the plastic film, and plug it in” process of setting up a 2D printer, but the directions in the start guide are well-illustrated and clearly written; if you can put together prefab IKEA furniture, that’s roughly the level of complexity we’re talking about here. The fact that delicate electronics are involved might still make it more intimidating for the non-technical, but figuring out what goes where is fairly simple.

The only mistake I made while setting the printer up involved the surface I initially tried to put it on. I used a spare end table, but as I discovered during the printer’s calibration process, the herky-jerky movement of the bed and print head was way too much for a little table to handle. “Stable enough to put a lamp on” is not the same as “stable enough to put a constantly wobbling contraption” on—obvious in retrospect, but my being new to this is why this article exists.

After some office rearrangement, I was able to move the printer to my sturdy L-desk full of cables and other doodads to serve as ballast. This surface was more than sturdy enough to let the printer complete its calibration process—and sturdy enough not to transfer the printer’s every motion to our kid’s room below, a boon for when I’m trying to print something after he has gone to bed.

The first-party Bambu apps for sending files to the printer are Bambu Handy (for iOS/Android, with no native iPad version) and Bambu Studio (for Windows, macOS, and Linux). Handy works OK for sending ready-made models from MakerWorld (a mostly community-driven but Bambu-developed repository for 3D-printable files) and for monitoring prints once they’ve started. But I’ll mostly be relaying my experience with Bambu Studio, a much more fully featured app. Neither app requires sign-in, at least not yet, but the path of least resistance is to sign into your printer and apps with the same account to enable easy communication and syncing.

Bambu Studio: A primer

Bambu Studio is what’s known in the hobby as a “slicer,” software that takes existing 3D models output by common CAD programs (Tinkercad, FreeCAD, SolidWorks, Autodesk Fusion, others) and converts them into a set of specific movement instructions that the printer can follow. Bambu Studio allows you to do some basic modification of existing models—cloning parts, resizing them, adding supports for overhanging bits that would otherwise droop down, and a few other functions—but it’s primarily there for opening files, choosing a few settings, and sending them off to the printer to become tangible objects.
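
For a sense of what those movement instructions look like, here's a toy sketch that emits G-code-style commands for a single square perimeter on one layer. A real slicer like Bambu Studio produces thousands of lines per layer, plus temperature, retraction, and travel commands; every number below is a made-up illustration rather than output from any actual slicer.

```python
def square_perimeter_gcode(size_mm=20.0, layer_height=0.2, extrusion_per_mm=0.033):
    """Emit simplified G-code for one square outline on the first layer.
    The feed rates and extrusion ratio are placeholder values for illustration."""
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    lines = [f"G1 Z{layer_height:.2f} F600 ; move nozzle to first-layer height"]
    prev = corners[0]
    lines.append(f"G0 X{prev[0]:.1f} Y{prev[1]:.1f} F6000 ; travel to the start corner")
    e = 0.0  # cumulative filament extrusion, in millimeters
    for x, y in corners[1:]:
        length = abs(x - prev[0]) + abs(y - prev[1])  # edges are axis-aligned
        e += length * extrusion_per_mm
        lines.append(f"G1 X{x:.1f} Y{y:.1f} E{e:.3f} F1200 ; print one edge of the square")
        prev = (x, y)
    return "\n".join(lines)

print(square_perimeter_gcode())
```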

Bambu Studio isn’t the most approachable application, but if you’ve made it this far, it shouldn’t be totally beyond your comprehension. For first-time setup, you’ll choose your model of printer (all Bambu models and a healthy selection of third-party printers are officially supported), leave the filament settings as they are, and sign in if you want to use Bambu’s cloud services. These sync printer settings and keep track of the models you save and download from MakerWorld, but a non-cloud LAN mode is available for the Bambu skeptics and privacy-conscious.

For any newbie, pretty much all you need to do is connect your printer, open a .3MF or .STL file you’ve downloaded from MakerWorld or elsewhere, select your filament from the drop-down menu, click “slice plate,” and then click “print.” Things like the default 0.4 mm nozzle size and Bambu’s included Textured PEI Build Plate are generally already factored in, though you may need to double-check these selections when you open a file for the first time.

When you slice your build plate for the first time, the app will spit a pile of numbers back at you. There are two important ones for 3D printing neophytes to track. One is the “total filament” figure, which tells you how many grams of filament the printer will use to make your model (filament typically comes in 1 kg spools, and the printer generally won’t track usage for you, so if you want to avoid running out in the middle of the job, you may want to keep track of what you’re using). The second is the “total time” figure, which tells you how long the entire print will take from the first calibration steps to the end of the job.
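
If you want to do that bookkeeping yourself, a few lines of Python will cover it. This is only a sketch with made-up job weights; the 1 kg spool size and the per-print gram figures are assumptions you'd replace with the numbers from Bambu Studio's slice summary.

```python
SPOOL_GRAMS = 1000  # a typical 1 kg spool of filament

# Each entry: (job name, grams of filament the slicer estimated for that job).
jobs = [
    ("fan remote wall mount", 22.4),
    ("gamepad/headset bracket", 61.0),
    ("cable clips x6", 14.8),
]

remaining = SPOOL_GRAMS - sum(grams for _, grams in jobs)
print(f"Estimated filament remaining: {remaining:.1f} g")

next_job_grams = 85.0  # the "total filament" figure for the next print
if next_job_grams > remaining:
    print("Not enough left on this spool; swap spools or expect a mid-print runout.")
```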

Selecting your filament and/or temperature presets. If you have the Automatic Material System (AMS), this is also where you’ll manage multicolor printing. Credit: Andrew Cunningham

When selecting filament, people who stick to Bambu’s first-party spools will have the easiest time, since optimal settings are already programmed into the app. But I’ve had almost zero trouble with the “generic” presets and the spools of generic Inland-branded filament I’ve bought from our local Micro Center, at least when sticking to PLA (polylactic acid, the most common and generally the easiest-to-print of the different kinds of filament you can buy). But we’ll dive deeper into plastics in part 2 of this series.

I won’t pretend I’m skilled enough to do a deep dive on every single setting that Bambu Studio gives you access to, but here are a few of the odds and ends I’ve found most useful:

  • The “clone” function, accessed by right-clicking an object and clicking “clone.” Useful if you’d like to fit several copies of an object on the build plate at once, especially if you’re using a filament with a color gradient and you’d like to make the gradient effect more pronounced by spreading it out over a bunch of prints.
  • The “arrange all objects” function, the fourth button from the left under the “prepare” tab. Did you just clone a bunch of objects? Did you delete an individual object from a model because you didn’t need to print that part? Bambu Studio will arrange everything on your build plate to optimize the use of space.
  • Layer height, located in the sidebar directly beneath “Process” (which is directly underneath the area where you select your filament). For many functional parts, the standard 0.2 mm layer height is fine. Going with thinner layer heights adds to the printing time but can preserve more detail on prints that have a lot of it and slightly reduce the visible layer lines that give 3D-printed objects their distinct look (for better or worse). Thicker layer heights do the opposite, slightly reducing the amount of time a model takes to print but preserving less detail (there’s a rough back-of-the-envelope sketch of this tradeoff after this list).
  • Infill percentage and wall loops, located in the Strength tab beneath the “Process” sidebar item. For most everyday prints, you don’t need to worry about messing with these settings much; the infill percentage determines how much of your print’s interior is solid plastic versus empty space (15 percent is usually a happy medium between maintaining rigidity and overusing plastic). The number of wall loops determines how many perimeters the printer lays down for the outside surface of the print, with more walls using more plastic but also adding a bit of extra strength and rigidity to functional prints that need it (think hooks, hangers, shelves and brackets, and other things that will be asked to bear some weight).
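
As promised in the layer-height item above, here's a rough back-of-the-envelope sketch of that tradeoff. It only counts layers for an example 40 mm-tall model; real print times also depend on speeds, infill, and travel moves, so treat the output as illustrative rather than anything a slicer would report.

```python
import math

def layer_count(model_height_mm, layer_height_mm):
    """Number of layers needed to reach the model's height at a given layer height."""
    return math.ceil(model_height_mm / layer_height_mm)

model_height = 40.0  # mm, an example model
baseline = layer_count(model_height, 0.2)  # the standard 0.2 mm layer height

for lh in (0.28, 0.20, 0.12):
    layers = layer_count(model_height, lh)
    print(f"{lh:.2f} mm layers -> {layers} layers "
          f"(~{layers / baseline:.1f}x the layer count of the 0.2 mm default)")
```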

My first prints

A humble start: My very first print was a wall bracket for the remote for my office’s ceiling fan. Credit: Andrew Cunningham

When given the opportunity to use a 3D printer, my mind went first to aggressively practical stuff—prints for organizing the odds and ends that eternally float around my office or desk.

When we moved into our current house, only one of the bedrooms had a ceiling fan installed. I put up remote-controlled ceiling fans in all the other bedrooms myself. And all those fans, except one, came with a wall-mounted caddy to hold the remote control. The first thing I decided to print was a wall-mounted holder for the one remote that didn’t come with one.

MakerWorld is just one of several resources for ready-made 3D-printable files, but the ease with which I found a Hampton Bay Ceiling Fan Remote Wall Mount is pretty representative of my experience so far. At this point in the life cycle of home 3D printing, if you can think about it and it’s not a terrible idea, you can usually find someone out there who has made something close to what you’re looking for.

I loaded up my black roll of PLA plastic—generally the cheapest, easiest-to-buy, easiest-to-work-with kind of 3D printer filament, though not always the best for prints that need more structural integrity—into the basic roll-holder that comes with the A1, downloaded that 3MF file, opened it in Bambu Studio, sliced the file, and hit print. It felt like there should have been extra steps in there somewhere. But that’s all it took to kick the printer into action.

After a few minutes of warmup—by default, the A1 has a thorough pre-print setup process where it checks the levelness of the bed and tests the flow rate of your filament for a few minutes before it begins printing anything—the nozzle started laying plastic down on my build plate, and inside of an hour or so, I had my first 3D-printed object.

Print No. 2 was another wall bracket, this time for my gaming PC’s gamepad and headset. Credit: Andrew Cunningham

It wears off a bit after you successfully execute a print, but I still haven’t quite lost the sense of magic that comes from printing out a fully 3D object that comes off the plate and then just exists in space along with me and all the store-bought objects in my office.

The remote holder was, as I’d learn, a fairly simple print made under near-ideal conditions. But it was an easy success to start off with, and that success can help embolden you and draw you in, inviting more printing and more experimentation. And the more you experiment, the more you inevitably learn.

This time, I talked about the basic terminology I’ve learned and touched on the kinds of plastic most commonly used by home 3D printers. Next time, I’ll talk about some of the pitfalls I ran into after my initial successes, what I learned about using Bambu Studio, what I’ve learned about fine-tuning settings to get good results, and a whole bunch of 3D-printable upgrades and mods available for the A1.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

What I learned from my first few months with a Bambu Lab A1 3D printer, part 1 Read More »

meta-hypes-ai-friends-as-social-media’s-future,-but-users-want-real-connections

Meta hypes AI friends as social media’s future, but users want real connections


Two visions for social media’s future pit real connections against AI friends.

A rotting zombie thumb up buzzing with flies while the real zombies are the people in the background who can't put their phones down

Credit: Aurich Lawson | Getty Images

Credit: Aurich Lawson | Getty Images

If you ask the man who has largely shaped how friends and family connect on social media over the past two decades about the future of social media, you may not get a straight answer.

At the Federal Trade Commission’s monopoly trial, Meta CEO Mark Zuckerberg attempted what seemed like an artful dodge to avoid criticism that his company allegedly bought out rivals Instagram and WhatsApp to lock users into Meta’s family of apps so they would never post about their personal lives anywhere else. He testified that people actually engage with social media less often these days to connect with loved ones, preferring instead to discover entertaining content on platforms to share in private messages with friends and family.

As Zuckerberg spins it, Meta no longer perceives much advantage in dominating the so-called personal social networking market where Facebook made its name and cemented what the FTC alleged is an illegal monopoly.

“Mark Zuckerberg says social media is over,” a New Yorker headline said about this testimony in a report noting a Meta chart that seemed to back up Zuckerberg’s words. That chart, shared at the trial, showed the “percent of time spent viewing content posted by ‘friends'” had declined over the past two years, from 22 to 17 percent on Facebook and from 11 to 7 percent on Instagram.

Supposedly because of this trend, Zuckerberg testified that “it doesn’t matter much” if someone’s friends are on their preferred platform. Every platform has its own value as a discovery engine, Zuckerberg suggested. And Meta platforms increasingly compete on this new playing field against rivals like TikTok, Meta argued, while insisting that it’s not so much focused on beating the FTC’s flagged rivals in the connecting-friends-and-family business, Snap and MeWe.

But while Zuckerberg claims that hosting that kind of content doesn’t move the needle much anymore, owning the biggest platforms that people use daily to connect with friends and family obviously still matters to Meta, MeWe founder Mark Weinstein told Ars. And Meta’s own press releases seem to back that up.

Weeks ahead of Zuckerberg’s testimony, Meta announced that it would bring back the “magic of friends,” introducing a “friends” tab to Facebook to make user experiences more like the original Facebook. The company intentionally diluted feeds with creator content and ads for the past two years, but it now appears intent on trying to spark more real conversations between friends and family, at least partly to fuel its newly launched AI chatbots.

Those chatbots mine personal information shared on Facebook and Instagram, and Meta wants to use that data to connect more personally with users—but “in a very creepy way,” The Washington Post wrote. In interviews, Zuckerberg has suggested these AI friends could “meaningfully” fill the void of real friendship online, as the average person has only three friends but “has demand” for up to 15. To critics seeking to undo Meta’s alleged monopoly, this latest move could signal a contradiction in Zuckerberg’s testimony, showing that the company is so invested in keeping users on its platforms that it’s now creating AI friends (who can never leave its platform) to bait the loneliest among us into more engagement.

“The average person wants more connectivity, connection, than they have,” Zuckerberg said, hyping AI friends. For the Facebook founder, it must be hard to envision a future where his platforms aren’t the answer to providing that basic social need. All this comes more than a decade after he sought $5 billion in Facebook’s 2012 initial public offering so that he could keep building tools that he told investors would expand “people’s capacity to build and maintain relationships.”

At the trial, Zuckerberg testified that AI and augmented reality will be key fixtures of Meta’s platforms in the future, predicting that “several years from now, you are going to be scrolling through your feed, and not only is it going to be sort of animated, but it will be interactive.”

Meta declined to comment further on the company’s vision for social media’s future. In a statement, a Meta spokesperson told Ars that “the FTC’s lawsuit against Meta defies reality,” claiming that it threatens US leadership in AI and insisting that evidence at trial would establish that platforms like TikTok, YouTube, and X are Meta’s true rivals.

“More than 10 years after the FTC reviewed and cleared our acquisitions, the Commission’s action in this case sends the message that no deal is ever truly final,” Meta’s spokesperson said. “Regulators should be supporting American innovation rather than seeking to break up a great American company and further advantaging China on critical issues like AI.”

Meta faces calls to open up its platforms

Weinstein, the MeWe founder, told Ars that back in the 1990s when the original social media founders were planning the first community portals, “it was so beautiful because we didn’t think about bots and trolls. We didn’t think about data mining and surveillance capitalism. We thought about making the world a more connected and holistic place.”

But those who became social media overlords found more money in walled gardens and increasingly cut off attempts by outside developers to improve the biggest platforms’ functionality or leverage their platforms to compete for their users’ attention. Born of this era, Weinstein expects that Zuckerberg, and therefore Meta, will always cling to its friends-and-family roots, no matter which way Zuckerberg says the wind is blowing.

Meta “is still entirely based on personal social networking,” Weinstein told Ars.

In a Newsweek op-ed, Weinstein explained that he left MeWe in 2021 after “competition became impossible” with Meta. It was a time when MeWe faced backlash over lax content moderation, drawing comparisons between its service and right-wing apps like Gab or Parler. Weinstein rejected those comparisons, seeing his platform as an ideal Facebook rival and remaining a board member through the app’s more recent shift to decentralization. Still defending MeWe’s failed efforts to beat Facebook, he submitted hundreds of documents and was deposed in the monopoly trial, alleging that Meta retaliated against MeWe as a privacy-focused rival that sought to woo users away by branding itself the “anti-Facebook.”

Among his complaints, Weinstein accused Meta of thwarting MeWe’s attempts to introduce interoperability between the two platforms, which he thinks stems from a fear that users might leave Facebook if they discover a more appealing platform. That’s why he’s urged the FTC—if it wins its monopoly case—to go beyond simply ordering a potential breakup of Facebook, Instagram, and WhatsApp to also require interoperability between Meta’s platforms and all rivals. That may be the only way to force Meta to release its clutch on personal data collection, Weinstein suggested, and allow for more competition broadly in the social media industry.

“The glue that holds it all together is Facebook’s monopoly over data,” Weinstein wrote in a Wall Street Journal op-ed, recalling the moment he realized that Meta seemed to have an unbeatable monopoly. “Its ownership and control of the personal information of Facebook users and non-users alike is unmatched.”

Cory Doctorow, a special advisor to the Electronic Frontier Foundation, told Ars that his vision of a better social media future goes even further than requiring interoperability between all platforms. Social networks like Meta’s should also be made to allow reverse engineering so that outside developers can modify their apps with third-party tools without risking legal attacks, he said.

Doctorow said that solution would create “an equilibrium where companies are more incentivized to behave themselves than they are to cheat” by, say, retaliating against, killing off, or buying out rivals. And “if they fail to respond to that incentive and they cheat anyways, then the rest of the world still has a remedy,” Doctorow said, by having the choice to modify or ditch any platform deemed toxic, invasive, manipulative, or otherwise offensive.

Doctorow summed up the frustration that some users have faced through the ongoing “enshittification” of platforms (a term he coined) ever since platforms took over the Internet.

“I’m 55 now, and I’ve gotten a lot less interested in how things work because I’ve had too many experiences with how things fail,” Doctorow told Ars. “And I just want to make sure that if I’m on a service and it goes horribly wrong, I can leave.”

Social media haters wish OG platforms were doomed

Weinstein pointed out that Meta’s alleged monopoly impacts a group often left out of social media debates: non-users. And if you ask someone who hates social media what the future of social media should look like, they will not mince words: They want a way to opt out of all of it.

As Meta’s monopoly trial got underway, a personal blog post titled “No Instagram, no privacy” rose to the front page of Hacker News, prompting a discussion about social media norms and reasonable expectations for privacy in 2025.

In the post, Wouter-Jan Leys, a privacy advocate, explained that he felt “blessed” to have “somehow escaped having an Instagram account,” feeling no pressure to “update the abstract audience of everyone I ever connected with online on where I am, what I am doing, or who I am hanging out with.”

But despite never having an account, he’s found that “you don’t have to be on Instagram to be on Instagram,” complaining that “it bugs me” when friends seem to know “more about my life than I tell them” because of various friends’ posts that mention or show images of him. In his blog, he defined privacy as “being in control of what other people know about you” and suggested that because of platforms like Instagram, he currently lacked this control. There should be some way to “fix or regulate this,” Leys suggested, or maybe some universal “etiquette where it’s frowned upon to post about social gatherings to any audience beyond who already was at that gathering.”

On Hacker News, his post spurred a debate over one of the longest-running privacy questions swirling on social media: Is it OK to post about someone who abstains from social media?

Some seeming social media fans scolded Leys for being so old-fashioned about social media, suggesting, “just live your life without being so bothered about offending other people” or saying that “the entire world doesn’t have to be sanitized to meet individual people’s preferences.” Others seemed to better understand Leys’ point of view, with one agreeing that “the problem is that our modern norms (and tech) lead to everyone sharing everything with a large social network.”

Surveying the lively thread, another social media hater joked, “I feel vindicated for my decision to entirely stay off of this drama machine.”

Leys told Ars that he would “absolutely” be in favor of personal social networks like Meta’s platforms dying off or losing steam, as Zuckerberg suggested they already are. He thinks that the decline in personal post engagement that Meta is seeing is likely due to a combination of factors, where some users may prefer more privacy now after years of broadcasting their lives, and others may be tired of the pressure of building a personal brand or experiencing other “odd social dynamics.”

Setting user sentiments aside, Meta is also responsible for people engaging with fewer of their friends’ posts. Meta announced that it would double the amount of force-fed filler in people’s feeds on Instagram and Facebook starting in 2023. That’s when the two-year span begins that Zuckerberg measured in testifying about the sudden drop-off in friends’ content engagement.

So while it’s easy to say the market changed, Meta may be obscuring how much it shaped that shift. Degrading the newsfeed and changing Instagram’s default post shape from square to rectangle seemingly shifted Instagram’s social norms in a significant way, for example, creating an environment where Gen Z users felt less comfortable posting as prolifically as millennials did when Instagram debuted, The New Yorker explained last year. Where once millennials painstakingly designed immaculate grids of individual eye-catching photos to seem cool online, Gen Z users told The New Yorker that posting a single photo now feels “humiliating” and like a “social risk.”

But rather than eliminate the impulse to post, this cultural shift has popularized a different form of personal posting: staggered photo dumps, where users wait to post a variety of photos together to sum up a month of events or curate a vibe, the trend piece explained. And Meta is clearly intent on fueling that momentum, doubling the maximum number of photos that users can feature in a single post to encourage even more social posting, The New Yorker noted.

Brendan Benedict, an attorney for Benedict Law Group PLLC who has helped litigate big tech antitrust cases, is monitoring the FTC monopoly trial on a Substack called Big Tech on Trial. He told Ars that the evidence at the trial has shown that “consumers want more friends and family content, and Meta is belatedly trying to address this” with features like the “friends” tab, while claiming there’s less interest in this content.

Leys doesn’t think social media—at least the way that Facebook defined it in the mid-2000s—will ever die, because people will never stop wanting social networks like Facebook or Instagram to stay connected with all their friends and family. But he could see a world where, if people ever started truly caring about privacy or “indeed [got] tired of the social dynamics and personal brand-building… the kind of social media like Facebook and Instagram will have been a generational phenomenon, and they may not immediately bounce back,” especially if it’s easy to switch to other platforms that respond better to user preferences.

He also agreed that requiring interoperability would likely lead to better social media products, but he maintained that “it would still not get me on Instagram.”

Interoperability shakes up social media

Meta thought it may have already beaten the FTC’s monopoly case, filing a motion for summary judgment after the FTC rested its case in a bid to end the trial early. That dream was quickly dashed when the judge denied the motion days later. But no matter the outcome of the trial, Meta’s influence over the social media world may be waning just as it’s facing increasing pressure to open up its platforms more than ever.

The FTC has alleged that Meta weaponized platform access early on, only allowing certain companies to interoperate and denying access to anyone perceived as a threat to its alleged monopoly power. That includes limiting promotions of Instagram to keep users engaged with Facebook Blue. A primary concern for Meta (then Facebook), the FTC claimed, was avoiding “training users to check multiple feeds,” which might allow other apps to “cannibalize” its users.

“Facebook has used this power to deter and suppress competitive threats to its personal social networking monopoly. In order to protect its monopoly, Facebook adopted and required developers to agree to conditional dealing policies that limited third-party apps’ ability to engage with Facebook rivals or to develop into rivals themselves,” the FTC alleged.

By 2011, the FTC alleged, then-Facebook had begun terminating API access to any developers that made it easier to export user data into a competing social network without Facebook’s permission. That practice only ended when the UK parliament started calling out Facebook’s anticompetitive conduct toward app developers in 2018, the FTC alleged.

According to the FTC, Meta continues “to this day” to “screen developers and can weaponize API access in ways that cement its dominance,” and if scrutiny ever subsides, Meta is expected to return to such anticompetitive practices as the AI race heats up.

One potential hurdle for Meta could be that the push for interoperability is not just coming from the FTC or lawmakers who recently reintroduced bipartisan legislation to end walled gardens. Doctorow told Ars that “huge public groundswells of mistrust and anger about excessive corporate power” that “cross political lines” are prompting global antitrust probes into big tech companies and are perhaps finally forcing a reckoning after years of degrading popular products to chase higher and higher revenues.

For social media companies, mounting concerns about privacy and suspicions about content manipulation or censorship are driving public distrust, Doctorow said, as well as fears of surveillance capitalism. The latter includes theories that Doctorow is skeptical of. Weinstein embraced them, though, warning that platforms seem to be profiting off data without consent while brainwashing users.

Allowing users to leave the platform without losing access to their friends, their social posts, and their messages might be the best way to incentivize Meta to either genuinely compete for billions of users or lose them forever as better options pop up that can plug into their networks.

In his Newsweek op-ed, Weinstein suggested that web inventor Tim Berners-Lee has already invented a working protocol “to enable people to own, upload, download, and relocate their social graphs,” which maps users’ connections across platforms. That could be used to mitigate “the network effect” that locks users into platforms like Meta’s “while interrupting unwanted data collection.”

At the same time, Doctorow told Ars that increasingly popular decentralized platforms like Bluesky and Mastodon already provide interoperability and are next looking into “building interoperable gateways” between their services. Doctorow said that communicating with other users across platforms may feel “awkward” at first, but ultimately, it may be like “having to find the diesel pump at the gas station” instead of the unleaded gas pump. “You’ll still be going to the same gas station,” Doctorow suggested.

Opening up gateways into all platforms could be useful in the future, Doctorow suggested. Imagine if one platform goes down—it would no longer disrupt communications as drastically, as users could just pivot to communicate on another platform and reach the same audience. The same goes for platforms that users grow to distrust.

The EFF supports regulators’ attempts to pass well-crafted interoperability mandates, Doctorow said, noting that “if you have to worry about your users leaving, you generally have to treat them better.”

But would interoperability fix social media?

The FTC has alleged that “Facebook’s dominant position in the US personal social networking market is durable due to significant entry barriers, including direct network effects and high switching costs.”

Meta disputes the FTC’s complaint as outdated, arguing that its platform could be substituted by pretty much any social network.

However, Guy Aridor, a co-author of a recent article called “The Economics of Social Media” in the Journal of Economic Literature, told Ars that dominant platforms are probably threatened by shifting social media trends and are likely to remain “resistant to interoperability” because “it’s in the interest of the platform to make switching and coordination costs high so that users are less likely to migrate away.” For Meta, research shows its platforms’ network effects have appeared to weaken somewhat but “clearly still exist” despite social media users increasingly seeking content on platforms rather than just socialization, Aridor said.

Interoperability advocates believe it will make it easier for startups to compete with giants like Meta, which fight hard and sometimes seemingly dirty to keep users on their apps. Reintroducing the ACCESS Act, which requires platform compatibility to enable service switching, Senator Mark R. Warner (D-Va.) said that “interoperability and portability are powerful tools to promote innovative new companies and limit anti-competitive behaviors.” He’s hoping that passing these “long-overdue requirements” will “boost competition and give consumers more power.”

Aridor told Ars it’s obvious that “interoperability would clearly increase competition,” but he still has questions about whether users would benefit from that competition “since one consistent theme is that these platforms are optimized to maximize engagement, and there’s numerous empirical evidence we have by now that engagement isn’t necessarily correlated with utility.”

Consider, Aridor suggested, how toxic content often leads to high engagement but lower user satisfaction, as MeWe experienced during its 2021 backlash.

Aridor said there is currently “very little empirical evidence on the effects of interoperability,” but theoretically, if it increased competition in the current climate, it would likely “push the market more toward supplying engaging entertainment-related content as opposed to friends and family type of content.”

Benedict told Ars that a remedy like interoperability would likely only be useful to combat Meta’s alleged monopoly following a breakup, which he views as the “natural remedy” following a potential win in the FTC’s lawsuit.

Without the breakup and other meaningful reforms, a Meta win could preserve the status quo and see the company never open up its platforms, perhaps perpetuating Meta’s influence over social media well into the future. And if Zuckerberg’s vision comes to pass, instead of seeing what your friends are posting on interoperating platforms across the Internet, you may have a dozen AI friends trained on your real friends’ behaviors sending you regular dopamine hits to keep you scrolling on Facebook or Instagram.

Aridor’s team’s article suggested that, regardless of user preferences, social media remains a permanent fixture of society. If that’s true, users could get stuck forever using whichever platforms connect them with the widest range of contacts.

“While social media has continued to evolve, one thing that has not changed is that social media remains a central part of people’s lives,” his team’s article concluded.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Meta hypes AI friends as social media’s future, but users want real connections Read More »

zero-click-searches:-google’s-ai-tools-are-the-culmination-of-its-hubris

Zero-click searches: Google’s AI tools are the culmination of its hubris


Google’s first year with AI search was a wild ride. It will get wilder.

Google is constantly making changes to its search rankings, but not all updates are equal. Every few months, the company bundles up changes into a larger “core update.” These updates make rapid and profound changes to search, so website operators watch them closely.

The March 2024 update was unique. It was one of Google’s largest core updates ever, and it took over a month to fully roll out. Nothing has felt quite the same since. Whether the update was good or bad depends on who you ask—and maybe who you are.

It’s common for websites to see traffic changes after a core update, but the impact of the March 2024 update marked a seismic shift. Google says the update aimed to address spam and AI-generated content in a meaningful way. Still, many publishers say they saw clicks on legitimate sites evaporate, while others have had to cope with unprecedented volatility in their traffic. Because Google owns almost the entire search market, changes in its algorithm can move the Internet itself.

In hindsight, the March 2024 update looks like the first major Google algorithm update for the AI era. Not only did it (supposedly) veer away from ranking AI-authored content online, but it also laid the groundwork for Google’s ambitious—and often annoying—desire to fuse AI with search.

A year ago, this ambition surfaced with AI Overviews, but now the company is taking an even more audacious route, layering in a new chat-based answer service called “AI Mode.” Both of these technologies do at least two things: They aim to keep you on Google properties longer, and they remix publisher content without always giving prominent citations.

Smaller publishers appear to have borne the brunt of the changes caused by these updates. “Google got all this flak for crushing the small publishers, and it’s true that when they make these changes, they do crush a lot of publishers,” says Jim Yu, CEO of enterprise SEO platform BrightEdge. Yu explains that Google is the only search engine likely to surface niche content in the first place, and there are bound to be changes to sites at the fringes during a major core update.

Google’s own view on the impact of the March 2024 update is unsurprisingly positive. The company said it was hoping to reduce the appearance of unhelpful content in its search engine results pages (SERPs) by 40 percent. After the update, the company claimed an actual reduction of closer to 45 percent. But does it feel like Google’s results have improved by that much? Most people don’t think so.

What causes this disconnect? According to Michael King, founder of SEO firm iPullRank, we’re not speaking the same language as Google. “Google’s internal success metrics differ from user perceptions,” he says. “Google measures user satisfaction through quantifiable metrics, while external observers rely on subjective experiences.”

Google evaluates algorithm changes with various tests, including human search quality testers and running A/B tests on live searches. But more than anything else, success is about the total number of searches (5 trillion of them per year). Google often makes this number a centerpiece of its business updates to show investors that it can still grow.

However, using search quantity to measure quality has obvious problems. For instance, more engagement with a search engine might mean that quality has decreased, so people try new queries (e.g., the old trick of adding “Reddit” to the end of your search string). In other words, people could be searching more because they don’t like the results.

Jim Yu suggests that Google is moving fast and breaking things, but it may not be as bad as we think. “I think they rolled things out faster because they had to move a lot faster than they’ve historically had to move, and it ends up that they do make some real mistakes,” says Yu. “[Google] is held to a higher standard, but by and large, I think their search quality is improving.”

According to King, Google’s current search behavior still favors big names, but other sites have started to see a rebound. “Larger brands are performing better in the top three positions, while lesser-known websites have gained ground in positions 4 through 10,” says King. “Although some websites have indeed lost traffic due to reduced organic visibility, the bigger issue seems tied to increased usage of AI Overviews”—and now the launch of AI Mode.

Yes, the specter of AI hangs over every SERP. The unhelpful vibe many people now get from Google searches, regardless of the internal metrics the company may use, may come from a fundamental shift in how Google surfaces information in the age of AI.

The AI Overview hangover

In 2025, you can’t talk about Google’s changes to search without acknowledging the AI-generated elephant in the room. As it wrapped up that hefty core update in March 2024, Google also announced a major expansion of AI in search, moving the “Search Generative Experience” out of labs and onto Google.com. The feature was dubbed “AI Overviews.”

The AI Overview box has been a fixture on Google’s search results page ever since its debut a year ago. The feature uses the same foundational AI model as Google’s Gemini chatbot to formulate answers to your search queries by ingesting the top 100 (!) search results. It sits at the top of the page, pushing so-called blue link content even farther down below the ads and knowledge graph content. It doesn’t launch on every query, and sometimes it answers questions you didn’t ask—or even hallucinates a totally wrong answer.

And it’s not without some irony that Google’s laudable decision to de-rank synthetic AI slop comes at the same time that Google heavily promotes its own AI-generated content right at the top of SERPs.

AI Overview on phone

AI Overviews appear right at the top of many search results.

Credit: Google

AI Overviews appear right at the top of many search results. Credit: Google

What is Google getting for all of this AI work? More eyeballs, it would seem. “AI is driving more engagement than ever before on Google,” says Yu. BrightEdge data shows that impressions on Google are up nearly 50 percent since AI Overviews launched. Many of the opinions you hear about AI Overviews online are strongly negative, but that doesn’t mean people aren’t paying attention to the feature. In its Q1 2025 earnings report, Google announced that AI Overviews is being “used” by 1.5 billion people every month. (Since you can’t easily opt in or opt out of AI Overviews, this “usage” claim should be taken with a grain of salt.)

Interestingly, the impact of AI Overviews has varied across the web. In October 2024, Google was so pleased with AI Overviews that it expanded them to appear in more queries. And as AI crept into more queries, publishers saw a corresponding traffic drop. Yu estimates this drop to be around 30 percent on average for those with high AI query coverage. For searches that are less supported in AI Overviews—things like restaurants and financial services—the traffic change has been negligible. And there are always exceptions. Yu suggests that some large businesses with high AI Overview query coverage have seen much smaller drops in traffic because they rank extremely well as both AI citations and organic results.

Lower traffic isn’t the end of the world for some businesses. Last May, AI Overviews were largely absent from B2B queries, but that turned around in a big way in recent months. BrightEdge estimates that 70 percent of B2B searches now have AI answers, which has reduced traffic for many companies. Yu doesn’t think it’s all bad, though. “People don’t click through as much—they engage a lot more on the AI—but when they do click, the conversion rate for the business goes up,” Yu says. In theory, serious buyers click and window shoppers don’t.

But the Internet is not a giant mall that exists only for shoppers. It is, first and foremost, a place to share and find information, and AI Overviews have hit some purveyors of information quite hard. At launch, AI Overviews were heavily focused on “What is” and “How to” queries. Such “service content” is a staple of bloggers and big media alike, and these types of publishers aren’t looking for sales conversions—it’s traffic that matters. And they’re getting less of it because AI Overviews “helpfully” repackages and remixes their content, eliminating the need to click through to the site. Some publishers are righteously indignant, asking how it’s fair for Google to remix content it doesn’t own, and to do so without compensation.

But Google’s intentions don’t end with AI Overviews. Last week, the company started an expanded public test of so-called “AI Mode,” right from the front page. AI Mode doesn’t even bother with those blue links. It’s a chatbot experience that, at present, tries to answer your query without clearly citing sources inline. (On some occasions, it will mention Reddit or Wikipedia.) On the right side of the screen, Google provides a little box with three sites linked, which you can expand to see more options. To the end user, it’s utterly unclear if those are “sources,” “recommendations,” or “partner deals.”

Perhaps more surprisingly, in our testing, not a single AI Mode “sites box” listed a site that ranked on the first page for the same query on a regular search. That is, the links in AI Mode for “best foods to eat for a cold” don’t overlap at all with the SERP for the same query in Google Search. In fairness, AI Mode is very new, and its behavior will undoubtedly change. But the direction the company is headed seems clear.

Google’s real goal is to keep you on Google or other Alphabet properties. In 2019, Rand Fishkin noticed that Google’s evolution from search engine to walled garden was at a tipping point. At that time—and for the first time—more than half of Google searches resulted in zero click-throughs to other sites. But data did show large numbers of clicks to Google’s own properties, like YouTube and Maps. If Google doesn’t intend to deliver a “zero-click” search experience, you wouldn’t know it from historical performance data or the new features the company develops.

You also wouldn’t know it from the way AI Overviews work. They do cite some of the sources used in building each output, and data suggests people click on those links. But are the citations accurate? Is every source used for constructing an AI Overview cited? We don’t really know, as Google is famously opaque about how its search works. We do know that Google uses a customized version of Gemini to support AI Overviews and that Gemini has been trained on billions and billions of webpages.

When AI Overviews do cite a source, it’s not clear how those sources came to be the ones cited. There’s good reason to be suspicious here: AI Overview’s output is not great, as witnessed by the numerous hallucinations we all know and love (telling people to eat rocks, for instance). The only thing we know for sure is that Google isn’t transparent about any of this.

No signs of slowing

Despite all of that, Google is not slowing down on AI in search. More recent core updates have only solidified this new arrangement with an ever-increasing number of AI-answered queries. The company appears OK with its current accuracy problems, or at the very least, it’s comfortable enough to push out AI updates anyway. Google appears to have been caught entirely off guard by the public launch of ChatGPT, and it’s now utilizing its search dominance to play catch-up.

To make matters even more dicey, Google isn’t even trying to address the biggest issue in all this: The company’s quest for zero-click search harms the very content creators upon which the company has built its empire.

For its part, Google has been celebrating its AI developments, insisting that content producers don’t know what’s best for them and dismissing any concerns with comments about search volume increases and ever-more-complex search query strings. The changes must be working!

Google has been building toward this moment for years. The company started with a list of 10 blue links and nothing else, but little by little, it pushed the links down the page and added more content that keeps people in the Google ecosystem. Way back in 2007, Google added Universal Search, which allowed it to insert content from Google Maps, YouTube, and other services. In 2009, Rich Snippets began displaying more data from search results on SERPs. In 2012, the Knowledge Graph began extracting data from search results to display answers in the search results. Each change kept people on Google longer and reduced click-throughs, all the while pushing the search results down the page.

AI Overviews, and especially AI Mode, are the logical outcome of Google’s yearslong transformation from an indexer of information to an insular web portal built on scraping content from around the web. Earlier in Google’s evolution, the implicit agreement was that websites would allow Google to crawl their pages in exchange for sending them traffic. That relationship has become strained as the company has kept more traffic for itself, reducing click-throughs to websites even as search volume continues to increase. And locking Google out isn’t a realistic option when the company controls almost the entire search market.

Even when Google has taken a friendlier approach, business concerns could get in the way. During the search antitrust trial, documents showed that Google initially intended to let sites opt out of being used for AI training for its search-based AI features—but these sites would still be included in search results. The company ultimately canned that idea, leaving site operators with the Pyrrhic choice of participating in the AI “revolution” or becoming invisible on the web. Google now competes with, rather than supports, the open web.

When many of us look at Google’s search results today, the vibe feels off. Maybe it’s the AI, maybe it’s Google’s algorithm, or maybe the Internet just isn’t what it once was. Whatever the cause, the shift toward zero-click search that began more than a decade ago was made clear by the March 2024 core update, and it has only accelerated with the launch of AI Mode. Even businesses that have escaped major traffic drops from AI Overviews could soon find that Google’s AI-only search can get much more overbearing.

The AI slop will continue until morale improves.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Zero-click searches: Google’s AI tools are the culmination of its hubris Read More »

removing-the-weakest-link-in-electrified,-autonomous-transport:-humans

Removing the weakest link in electrified, autonomous transport: humans


Hands-off charging could open the door to a revolution in autonomous freight.

Driverless truck meets robot EV charger in Sweden as Einride and Rocsys work together. Credit: Einride and Rocsys

Thanks to our new global tariff war, the wild world of importing and exporting has been thrust to the forefront. There's a lot of logistics involved in keeping your local Walmart stocked and your Amazon Prime deliveries happening, and you might be surprised at how much of that world has already been automated.

While cars from autonomy providers like Waymo are still extremely rare in most stretches of the open road, the process of loading and unloading cargo has become almost entirely automated at some major ports around the world. Likewise, there’s an increasing shift to electrify the various vehicles involved along the way, eliminating a significant source of global emissions.

But there’s been one sticking point in this automated, electrified logistical dream: plugging in. The humble act of charging still happens via human hands, but that’s changing. At a testing facility in Sweden, a company called Rocsys has demonstrated an automated charger that works with self-driving electric trucks from Einride in a hands-free and emissions-free partnership that could save time, money, and even lives.

People-free ports

Shipping ports are pretty intimidating places. Towering cranes stand 500 feet above the ground, swinging 30-ton cargo crates into the air and endlessly moving them from giant ships to holding pens and then turning around and sending off the next set of shipments.

This is Einride's autonomous cargo truck. Credit: Einride

That cargo is then loaded onto container handlers that operate exclusively within the confines of the port, bringing the crates closer to the roads or rail lines that will take them further. They’re stacked again until the arrival of their next ride, semi-trucks for cargo about to hit the highway or empty rail cars for anything train-bound.

Believe it or not, that entire process happens autonomously at some of the most advanced ports in the world. “The APM terminal in Rotterdam port is, I would say, in the top three of the most advanced terminals in the world. It’s completely automated. There are hardly any people,” Crijn Bouman, the CEO and co-founder of Rocsys, said.

Eliminating the human factor at facilities like ports reduces cost and increases safety at a workplace that is, according to the CDC, five times more dangerous than average. But the one link in the chain that hasn’t been automated is recharging.

Those cargo haulers may be able to drive themselves to the charger, but they still can’t plug themselves in. They need a little help, and that’s where Rocsys comes in.

The person-free plug

The genesis of Rocsys came in 2017, when cofounder Bouman visited a fledgling robotaxi operator in the Bay Area.

“The vehicles were driving themselves, but after a couple of test laps, they would park themselves in the corner, and a person would walk over and plug them in,” Bouman said.

Bouman wouldn’t tell me which autonomy provider was operating the place, but he was surprised to see that the company was focused only on the wildly complex task of shuttling people from place to place on open roads. Meanwhile, the seemingly simple act of plugging and unplugging was handled exclusively by human operators.

No humans required. Credit: Einride and Rocsys

Fast-forward eight years, and the Netherlands-based Rocsys now has more than 50 automated chargers deployed globally, with a goal of installing thousands more. While the company is targeting robotaxi operators for its automated charging solution, initial interest has come primarily from port and fleet operators as those businesses become increasingly electrified.

Bouman calls Rocsys’s roboticized charger a “robotic steward,” a charming moniker for an arm that sticks a plug in a hole. But it’s all more complicated than that, of course.

The steward relies on an AI- and vision-based system to move an arm holding the charger plug. That arm offers six degrees of freedom and, thanks to the wonders of machine learning, largely trains itself to interface with new cars and new chargers.

It can reach high and low enough and at enough angles to cover everything from consumer cars to commercial trucks. It even works with plugs of all shapes and sizes.

The biggest complication? Manual charging flaps on some consumer cars. This has necessitated a little digital extension to the steward’s robotic arm. “We’ll have sort of a finger assembly to open the charge port cover, connect the plug, and also the system can close it. So no change to the vehicle,” Bouman said.

Manually opening charge port covers complicates things a bit. Credit: Einride and Rocsys

That said, Bouman hopes manufacturers will ditch manual charge port covers and switch to powered, automatic ones in future vehicles.

Automating the autonomous trucks

Plenty of companies around the globe are promising to electrify trucking, from medium-duty players like Harbinger to the world’s largest piece of rolling vaporware, the Tesla Semi. Few are actually operating the things, though.

Stockholm-based Einride is one of those companies. Its electric trucks are making deliveries every day, taking things a step further by removing the driver from the equation.

The wild-looking, cab-less autonomous electric transport (AET) vehicles, which would not look out of place thundering down the highway in any science-fiction movie, are self-driving in most situations. But they do have a human backup in the form of operators at what Einride’s general manager of autonomous technology, Henrik Green, calls control towers.

Here, operators can oversee multiple trucks, ensuring safe operation and handling any unexpected happenings on the road. In this way, a single person can operate multiple trucks from afar, only connecting when a truck requires manual intervention.

“The more vehicles we can use with the same workforce of people, the higher the efficiency,” he said.

Green said Einride has multiple remote control towers overseeing the company's pilot deployments. Here in the US, Einride has been running a route at GE Appliances' Selmer, Tennessee, facility, where autonomous forklifts load cargo onto the autonomous trucks for hands-off hauling of your next refrigerator.

The trucks are overseen remotely. Credit: Einride

Right now, the AETs must be manually plugged in by an on-site operator. It’s a minor task, but Green said that automating this process could be game-changing.

“There are, surprisingly, a lot of trucks today that are standing still or running empty,” Green said. Part of this comes down to poor logistical planning, but a lot is due to the human factor. “With automated electric trucks, we can make the transportation system more sustainable, more efficient, more resilient, and absolutely more safe.”

Getting humans out of the loop could result in Einride’s machines operating 24/7, only pausing to top off their batteries.

Self-charging, self-driving trucks could also help open the door to longer-distance deliveries without having to saddle them with giant batteries. Even with regular charging stops, these trucks could operate at a higher utilization than human-driven machines, which can only run for as long as their operators are legally or physically able to.

That could result in significant cost savings for businesses, and, since everything is electric, the environmental potential is strong, too.

“Around seven percent of the world’s global CO2 footprint today comes from land transportation, which is what we are addressing with electric heavy-duty transportation,” Green said.

Integrations and future potential

This first joining of a Rocsys robotic steward and an Einride AET took place at the AstaZero proving ground in Sandhult, Sweden, an automation test facility that has been a safe playground for driverless vehicles of all shapes and sizes for over a decade.

This physical connection between Rocsys and Einride is a small step, with one automated charger connected to one automated truck, compared to the nearly three million diesel-powered semis droning around our highways in the United States alone. But you have to start somewhere, and while bringing this technology to more open roads is the goal, closed logistics centers and ports are a great first step.

“The use case is simpler,” Bouman said. “There are no cats and dogs jumping, or children, or people on bicycles.”

And how complicated was it to connect Einride’s systems to those of the Rocsys robotic steward? Green said the software integration with the Rocsys system was straightforward but that “some adaptations” were required to make Einride’s machine compatible. “We had to make a ‘duct tape solution’ for this particular demo,” Green said.

Applying duct tape, at least, seems like a safe job for humans for some time to come.

Removing the weakest link in electrified, autonomous transport: humans Read More »

sierra-made-the-games-of-my-childhood.-are-they-still-fun-to-play?

Sierra made the games of my childhood. Are they still fun to play?


Get ready for some nostalgia.

My Ars colleagues were kicking back at the Orbital HQ water cooler the other day, and—as gracefully aging gamers are wont to do—they began to reminisce about classic Sierra On-Line adventure games. I was a huge fan of these games in my youth, so I settled in for some hot buttered nostalgia.

Would we remember the limited-palette joys of early King’s Quest, Space Quest, or Quest for Glory titles? Would we branch out beyond games with “Quest” in their titles, seeking rarer fare like Freddy Pharkas: Frontier Pharmacist? What about the gothic stylings of The Colonel’s Bequest or the voodoo-curious Gabriel Knight?

Nope. The talk was of acorns. [Bleeping] acorns, in fact.

The scene in question came from King’s Quest III, where our hero Gwydion must acquire some exceptionally desiccated acorns to advance the plot. It sounds simple enough. As one walkthrough puts it, “Go east one screen and north one screen to the acorn tree. Try picking up acorns until you get some dry ones. Try various spots underneath the tree.” Easy! And clear!

Except it wasn’t either one because the game rather notoriously won’t always give you the acorns, even when you enter the right command. This led many gamers to believe they were in the wrong spot, when in reality, they just had to keep entering the “get acorns” command while moving pixel by pixel around the tree until the game finally supplied them. One of our staffers admitted to having purchased the King’s Quest III hint book solely because of this “puzzle.” (The hint book, which is now online, says that players should “move around” the particular oak tree in question because “you can only find the right kind of acorns in one spot.”)

This wasn’t quite the “fun” I had remembered from these games, but as I cast my mind back, I dimly recalled similar situations. Space Quest II: Vohaul’s Revenge had been my first Sierra title. After my brother and I spent weeks on the game only to die repeatedly in some pitch-dark tunnels, we implored my dad to call Sierra’s 1-900 pay hint line. He thought about it. I could see it pained him because he had never before (and never since!) called a 1-900 number. In this case, the call cost a piratical 75 cents for the first minute and 50 cents for each additional minute. After listening to us whine for several days straight, my dad decided that his sanity was worth the fee, and he called.

Like the acorn example above, we had known what to do—we had just not done it to the game’s rather exacting standards. The key was to use a glowing gem as a light source, which my brother and I had long understood. The problem was the text parser, which demanded that we “put gem in mouth” to use the gem’s light in the tunnels. There was no other place to put the gem, no other way to hold or attach it. (We tried them all.) No other attempt to use the light of this shining crystal, no matter how clear, well-intentioned, or succinctly expressed, would work. You put the gem in your mouth, or you died in the darkness.

Returning from my reveries to the conversation at hand, I caught Ars Senior Editor Lee Hutchinson’s cynical remark that these kinds of puzzles were “the only way to make 2–3 hours of ‘game’ last for months.” This seemed rather shocking, almost offensive. How could one say such a thing about the games that colored my memories of childhood?

So I decided to replay Space Quest II for the first time in 35 years in an attempt to defend my own past.

Big mistake.

Space Quest II screenshot: We're not on Endor anymore, Dorothy.

Play it again, Sam

In my memory, the Space Quest series was filled with sharply written humor, clever puzzles, and enchanting art. But when I fired up the original version of the game, I found that only one of these was true. The art, despite its blockiness and limited colors, remained charming.

As for the gameplay, the puzzles were not so much “clever” as “infuriating,” “obvious,” or (more often) “rather obscure.”

Finding the glowing gem discussed above requires you to swim into one small spot of a multi-screen river, with no indication in advance that anything of importance is in that exact location. Trying to “call” a hunter who has captured you does nothing… until you do it a second time. And the less said about trying to throw a puzzle at a Labian Terror Beast, typing out various word permutations while death bears down upon you, the better.

The whole game was also filled with far more no-warning insta-deaths than I had remembered. On the opening screen, for instance, after your janitorial space-broom floats off into the cosmic ether, you can walk your character right off the edge of the orbital space station he is cleaning. The game doesn’t stop you; indeed, it kills you and then mocks you for “an obvious lack of common sense.” It then calls you a “wing nut” with an “inability to sustain life.” Game over.

The game's third screen, which offers nothing to do beyond simply walking around, will also kill you in at least two different ways. Walk into the room still wearing your spacesuit, and your boss will come over and chew you out. Game over.

If you manage to avoid that fate by changing into your indoor uniform first, it’s comically easy to tap the wrong arrow key and fall off the room’s completely guardrail-free elevator platform. Game over.

Space Quest II screenshot: Do NOT touch any part of this root monster.

Get used to it because the game will kill you in so, so many ways: touching any single pixel of a root monster whose branches form a difficult maze; walking into a giant mushroom; stepping over an invisible pit in the ground; getting shot by a guard who zips in on a hovercraft; drowning in an underwater tunnel; getting swiped at by some kind of giant ape; not putting the glowing gem in your mouth; falling into acid; and many more.

I used the word “insta-death” above, but the game is not even content with this. At one key point late in the game, a giant Aliens-style alien stalks the hallways, and if she finds you, she “kisses” you. But then she leaves! You are safe after all! Of course, if you have seen the films, you will recognize that you are not safe, but the game lets you go on for a bit before the alien’s baby inevitably bursts from your chest, killing you. Game over.

This is why the official hint book suggests that you “save your game a lot, especially when it seems that you’re entering a dangerous area. That way, if you die, you don’t have to retrace your steps much.” Presumably, this was once considered entertaining.

When it comes to the humor, most of it is broad. (When you are told to “say the word,” you have to say “the word.”) Sometimes it is condescending. (“You quickly glance around the room to see if anyone saw you blow it.”) Or it might just be potty jokes. (Plungers, jock straps, toilet paper, alien bathrooms, and fouling one’s trousers all make appearances.)

My total gameplay time: a few hours.

“By Grabthar’s hammer!” I thought. “Lee was right!”

When I admitted this to him, Lee told me that he had actually spent time learning to speedrun the Space Quest games during the pandemic. “According to my notes, a clean run of SQ2 in ‘fast’ mode—assuming good typing skills—takes about 20 minutes straight-up,” he said. Yikes.

Space Quest II screenshot: What a fiendish plot!

And yet

The past was a different time. Computer memory was small, graphics capabilities were low, and computer games had emerged from the “let them live just long enough to encourage spending another quarter” arcade model. Mouse adoption took a while; text parsers made sense even though they created plenty of frustration. So yes—some of these games were a few hours of gameplay stretched out with insta-death, obscure puzzles, and the sheer amount of time it took just to walk across the game’s various screens. (Seriously, “walking around” took a ridiculous amount of the game’s playtime, especially when a puzzle made you backtrack three screens, type some command, and then return.)

Space Quest II screenshot: Let's get off this rock.

Judged by current standards, the Sierra games are no longer what I would play for fun.

All the same, I loved them. They introduced me to the joy of exploring virtual worlds and to the power of evocative artwork. I went into space, into fairy tales, and into the past, and I did so while finding the games’ humor humorous and their plotlines compelling. (“An army of life insurance salesmen?” I thought at the time. “Hilarious and brilliant!”)

If the games can feel a bit arbitrary or vexing today, my child-self’s love of repetition was able to treat them as engaging challenges rather than “unfair” design.

Replaying Space Quest II, encountering the half-remembered jokes and visual designs, brought back these memories. The novelist Thomas Wolfe knew that you can’t go home again, and it was probably inevitable that the game would feel dated to me now. But playing it again did take me back to that time before the Internet, when not even hint lines, insta-death, and EGA graphics could dampen the wonder of the new worlds computers were capable of showing us.

Space Quest II screenshot: Literal bathroom humor.

Space Quest II, along with several other Sierra titles, is freely and legally available online at sarien.net—though I found many, many glitches in the implementation. Windows users can buy the entire Space Quest collection through Steam or Good Old Games. There’s even a fan remake that runs on macOS, Windows, and Linux.

Photo of Nate Anderson

Sierra made the games of my childhood. Are they still fun to play? Read More »

motorola-razr-and-razr-ultra-(2025)-review:-cool-as-hell,-but-too-much-ai

Motorola Razr and Razr Ultra (2025) review: Cool as hell, but too much AI


The new Razrs are sleek, capable, and overflowing with AI features.

Motorola's 2025 Razr refresh includes its first Ultra model. Credit: Ryan Whitwam

For phone nerds who’ve been around the block a few times, the original Motorola Razr is undeniably iconic. The era of foldables has allowed Motorola to resurrect the Razr in an appropriately flexible form, and after a few generations of refinement, the 2025 Razrs are spectacular pieces of hardware. They look great, they’re fun to use, and they just about disappear in your pocket.

The new Razrs also have enormous foldable OLEDs, along with external displays that are just large enough to be useful. Moto has upped its design game, offering various Pantone shades with interesting materials and textures to make the phones more distinctive, but Motorola’s take on mobile AI could use some work, as could its long-term support policy. Still, these might be the coolest phones you can get right now.

An elegant tactile experience

Many phone buyers couldn’t care less about how a phone’s body looks or feels—they’ll just slap it in a case and never look at it again. Foldables tend not to fit as well in cases, so the physical design of the Razrs is important. The good news is that Motorola has refined the foldable formula with an updated hinge and some very interesting material choices.

The Razr Ultra is available with a classy wood back. Credit: Ryan Whitwam

The 2025 Razrs come in various colors, all of which have interesting material choices for the back panel. There are neat textured plastics, wood, vegan leather, and synthetic fabrics. We’ve got wood (Razr Ultra) and textured plastic (Razr) phones to test—they look and feel great. The Razr is very grippy, and the wooden Ultra looks ultra-stylish, though not quite as secure in the hand. The aluminum frames are also colored to match the back with a smooth matte finish. Motorola has gone to great lengths to make these phones feel unique without losing the premium vibe. It’s nice to see a phone maker do that without resorting to a standard glass sandwich body.

The buttons are firm and tactile, but we’re detecting just a bit of rattle in the power button. That’s also where you’ll find the fingerprint sensor. It’s reasonably quick and accurate, whether the phone is open or closed. The Razr Ultra also has an extra AI button on the opposite side, which is unnecessary, for reasons we’ll get to later. And no, you can’t remap it to something else.

The Razrs have a variety of neat material options. Credit: Ryan Whitwam

The front of the flip on these phones features a big sheet of Gorilla Glass Ceramic, which is supposedly similar to Apple’s Ceramic Shield glass. That should help ward off scratches. The main camera sensors poke through this front OLED, which offers some interesting photographic options we’ll get to later. The Razr Ultra has a larger external display, clocking in at 4 inches. The cheaper Razr gets a smaller 3.6-inch front screen, but that’s still plenty of real estate, even with the camera lenses at the bottom.

Specs at a glance: 2025 Motorola Razrs

Motorola Razr ($699.99)
  SoC: MediaTek Dimensity 7400X
  Memory: 8GB
  Storage: 256GB
  Display: 6.9″ foldable OLED (120 Hz, 2640 x 1080), 3.6″ external (90 Hz)
  Cameras: 50 MP f/1.7 OIS primary; 13 MP f/2.2 ultrawide; 32 MP selfie
  Software: Android 15
  Battery: 4,500 mAh, 30 W wired charging, 15 W wireless charging
  Connectivity: Wi-Fi 6e, NFC, Bluetooth 5.4, sub-6 GHz 5G, USB-C 2.0
  Measurements: Open: 73.99 x 171.30 x 7.25 mm; Closed: 73.99 x 88.08 x 15.85 mm; 188 g

Motorola Razr+ ($999.99)
  SoC: Snapdragon 8s Gen 3
  Memory: 12GB
  Storage: 256GB
  Display: 6.9″ foldable OLED (165 Hz, 2640 x 1080), 4″ external (120 Hz, 1272 x 1080)
  Cameras: 50 MP f/1.7 OIS primary; 50 MP f/2.0 2x telephoto; 32 MP selfie
  Software: Android 15
  Battery: 4,000 mAh, 45 W wired charging, 15 W wireless charging
  Connectivity: Wi-Fi 7, NFC, Bluetooth 5.4, sub-6 GHz 5G, USB-C 2.0
  Measurements: Open: 73.99 x 171.42 x 7.09 mm; Closed: 73.99 x 88.09 x 15.32 mm; 189 g

Motorola Razr Ultra ($1,299.99)
  SoC: Snapdragon 8 Elite
  Memory: 16GB
  Storage: 512GB, 1TB
  Display: 7″ foldable OLED (165 Hz, 2992 x 1224), 4″ external (165 Hz)
  Cameras: 50 MP f/1.8 OIS primary; 50 MP f/2.0 ultrawide + macro; 50 MP selfie
  Software: Android 15
  Battery: 4,700 mAh, 68 W wired charging, 15 W wireless charging
  Connectivity: Wi-Fi 7, NFC, Bluetooth 5.4, sub-6 GHz 5G, USB-C 2.0
  Measurements: Open: 73.99 x 171.48 x 7.19 mm; Closed: 73.99 x 88.12 x 15.69 mm; 199 g

Motorola says the updated foldable hinge has been reinforced with titanium. This is the most likely point of failure for a flip phone, but the company’s last few Razrs already felt pretty robust. It’s good that Moto is still thinking about durability, though. The hinge is smooth, allowing you to leave the phone partially open, but there are magnets holding the two halves together with no gap when closed. The magnets also allow for a solid snap when you shut it. Hanging up on someone is so, so satisfying when you’re using a Razr flip phone.

Flip these phones open, and you get to the main event. The Razr has a 6.9-inch, 2640×1080 foldable OLED, and the Ultra steps up to 7 inches at an impressive 2992×1224. These phones have almost exactly the same dimensions, so the additional bit of Ultra screen comes from thinner bezels. Both phones are extremely tall when open, but they’re narrow enough to be usable in one hand. Just don’t count on reaching the top of the screen easily. While Motorola has not fully eliminated the display crease, it’s much smoother and less noticeable than it is on Samsung’s or Google’s foldables.

The Razr Ultra has a 7-inch foldable OLED. Credit: Ryan Whitwam

The Razr can hit 3,000 nits of brightness, and the $1,300 Razr Ultra tops out at 4,500 nits. Both are bright enough to be usable outdoors, though the Ultra is noticeably brighter. However, both suffer from the standard foldable drawbacks of having a plastic screen. The top layer of the foldable screen is a non-removable plastic protector, which has very high reflectivity that makes it harder to see the display. That plastic layer also means you have to be careful not to poke or scratch the inner screen. It’s softer than your fingernails, so it’s not difficult to permanently damage the top layer.

Too much AI

Motorola’s big AI innovation for last year’s Razr was putting Gemini on the phone, making it one of the first to ship with Google’s generative AI system. This time around, it has AI features based on Gemini, Meta Llama, Perplexity, and Microsoft Copilot. It’s hard to say exactly how much AI is worth having on a phone with the rapid pace of change, but Motorola has settled on the wrong amount. To be blunt, there’s too much AI. What is “too much” in this context? This animation should get the point across.

Motorola's AI implementation is… a lot. Credit: Ryan Whitwam

The Ask and Search bar appears throughout the UI, including as a floating Moto AI icon. It’s also in the app drawer and is integrated with the AI button on the Razr Ultra. You can use it to find settings and apps, but it’s also a full LLM (based on Copilot) for some reason. Gemini is a better experience if you’re looking for a chatbot, though.

Moto AI also includes a raft of other features, like Pay Attention, which can record and summarize conversations similar to the Google recorder app. However, unlike that app, the summarizing happens in the cloud instead of locally. That’s a possible privacy concern. You also get Perplexity integration, allowing you to instantly search based on your screen contents. In addition, the Perplexity app is preloaded with a free trial of the premium AI search service.

There’s so much AI baked into the experience that it can be difficult to keep all the capabilities straight, and there are some more concerning privacy pitfalls. Motorola’s Catch Me Up feature is a notification summarizer similar to a feature of Apple Intelligence. On the Ultra, this feature works locally with a Llama 3 model, but the less powerful Razr can’t do that. It sends your notifications to a remote server for processing when you use Catch Me Up. Motorola says data is “anonymous and secure” and it does not retain any user data, but you have to put a lot of trust in a faceless corporation to send it all your chat notifications.

The Razrs have additional functionality if you prop them up in "tent" or "stand" mode. Credit: Ryan Whitwam

If you can look past Motorola’s frenetic take on mobile AI, the version of Android 15 on the Razrs is generally good. There are a few too many pre-loaded apps and experiences, but it’s relatively simple to debloat these phones. It’s quick, doesn’t diverge too much from the standard Android experience, and avoids duplicative apps.

We appreciate the plethora of settings and features for the external display. It’s a much richer experience than you get with Samsung’s flip phones. For example, we like how easy it is to type out a reply in a messaging app without even opening the phone. In fact, you can run any app on the phone without opening it, even though many of them won’t work quite right on a smaller square display. Still, it can be useful for chat apps, email, and other text-based stuff. We also found it handy for using smart home devices like cameras and lights. There are also customizable panels for weather, calendar, and Google “Gamesnack” games.

The Razr Ultra (left) has a larger screen than the Razr (right). Credit: Ryan Whitwam

Motorola promises three years of full OS updates and an additional year of security patches. This falls far short of the seven-year update commitment from Samsung and Google. For a cheaper phone like the Razr, four years of support might be fine, but it’s harder to justify that when the Razr Ultra costs as much as a Galaxy S25 Ultra.

One fast foldable, one not so much

Motorola is fond of saying the Razr Ultra is the fastest flip phone in the world, which is technically true. It has the Snapdragon 8 Elite chip with 16GB of RAM, but we expect to see the Elite in Samsung's 2025 foldables later this year. For now, though, the Razr Ultra stands alone. The $700 Razr runs a MediaTek Dimensity 7400X, which is a distinctly midrange processor with just 8GB of RAM.

Geekbench results: The Razr Ultra gets close to the S25. Credit: Ryan Whitwam

In daily use, neither phone feels slow. Side by side, you can see the Razr is slower to open apps and unlock, and the scrolling exhibits occasional jank. However, it’s not what we’d call a slow phone. It’s fine for general smartphone tasks like messaging, browsing, and watching videos. You may have trouble with gaming, though. Simple games run well enough, but heavy 3D titles like Diablo Immortal are rough with the Dimensity 7400X.

The Razr Ultra is one of the fastest Android phones we’ve tested, thanks to the Snapdragon chip. You can play complex games and multitask to your heart’s content without fear of lag. It does run a little behind the Galaxy S25 series in benchmarks, but it thankfully doesn’t get as toasty as Samsung’s phones.

We never expect groundbreaking battery life from foldables. The hinge takes up space, which limits battery capacity. That said, Motorola did fairly well cramming a 4,700 mAh battery in the Razr Ultra and a 4,500 mAh cell in the Razr.

Based on our testing, both of these phones should last you all day. The large external displays can help by giving you just enough information that you don’t have to use the larger, more power-hungry foldable OLED. If you’re playing games or using the main display exclusively, you may find the Razrs just barely make it to bedtime. However, no matter what you do, these are not multi-day phones. The base model Razr will probably eke out a few more hours, even with its smaller battery, due to the lower-power MediaTek processor. The Snapdragon 8 Elite in the Razr Ultra really eats into the battery when you take advantage of its power.

The Razrs are extremely pocketable. Credit: Ryan Whitwam

While the battery life is just this side of acceptable, the Razr Ultra’s charging speed makes this less of a concern. This phone hits an impressive 68 W, which is faster than the flagship phones from Google, Samsung, and Apple. Just a few minutes plugged into a compatible USB-C charger and you’ve got enough power that you can head out the door without worry. Of course, the phone doesn’t come with a charger, but we’ve tested a few recent models, and they all hit the max wattage.
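
As a back-of-envelope check on that claim (a rough sketch only, assuming a nominal cell voltage of about 3.87 V and that the phone holds its 68 W peak, which real charge curves never do), the math looks something like this:

```python
# Rough estimate of how much charge a short top-up adds to the Razr Ultra.
# Assumptions: ~3.87 V nominal cell voltage and a sustained 68 W draw.
# Real charging tapers, so treat the result as a best-case upper bound.
NOMINAL_VOLTAGE_V = 3.87
CAPACITY_MAH = 4_700                                   # Razr Ultra battery

capacity_wh = CAPACITY_MAH / 1000 * NOMINAL_VOLTAGE_V  # ~18.2 Wh total

minutes = 10
energy_added_wh = 68 * (minutes / 60)                  # ~11.3 Wh in ten minutes

print(f"Battery capacity: {capacity_wh:.1f} Wh")
print(f"Added in {minutes} min at 68 W: {energy_added_wh:.1f} Wh "
      f"({energy_added_wh / capacity_wh:.0%} of capacity, best case)")
```

Even allowing for tapering, a quick top-up before heading out adds a meaningful chunk of the day's charge.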

OK cameras with super selfies

Camera quality is another area where foldable phones tend to compromise. The $1,300 Razr Ultra has just two sensors—a 50 MP primary sensor and a 50 MP ultrawide lens. The $700 Razr has a slightly different (and less capable) 50 MP primary camera and a 13 MP ultrawide. There are also selfie cameras peeking through the main foldable OLED panels—50 MP for the Ultra and 32 MP for the base model.

The cheaper Razr has a smaller external display, but it's still large enough to be usable. Credit: Ryan Whitwam

Motorola’s Razrs tend toward longer exposures compared to Pixels—they’re about on par with Samsung phones. That means capturing fast movement indoors is difficult, and you may miss your subject outside due to a perceptible increase in shutter lag compared to Google’s phones. Images from the base model Razr’s primary camera also tend to look a bit more overprocessed than they do on the Ultra, which leads to fuzzy details and halos in bright light.

Razr Ultra outdoors. Ryan Whitwam

That said, Motorola’s partnership with Pantone is doing some good. The colors in our photos are bright and accurate, capturing the vibe of the scene quite well. You can get some great photos of stationary or slowly moving subjects.

Razr 2025 indoor medium light. Ryan Whitwam

The 50 MP ultrawide camera on the Razr Ultra has a very wide field of view, but there’s little to no distortion at the edges. The colors are also consistent between the two sensors, but that’s not always the case for the budget Razr. Its ultrawide camera also lacks detail compared to the Ultra, which isn’t surprising considering the much lower resolution.

You should really only use the dedicated front-facing cameras for video chat. For selfies, you’ll get much better results by taking advantage of the Razr’s distinctive form factor. When closed, the Razrs let you take selfies with the main camera sensors, using the external display as the viewfinder. These are some of the best selfies you’ll get with a smartphone, and having the ultrawide sensor makes group shots excellent as well.

Flip phones are still fun

While we like these phones for what they are, they are objectively not the best value. Whether you’re looking at the Razr or the Razr Ultra, you can get more phone for the same money from other companies—more cameras, more battery, more updates—but those phones don’t fold in half. There’s definitely a cool-factor here. Flip phones are stylish, and they’re conveniently pocket-friendly in a world where giant phones barely fit in your pants. We also like the convenience and functionality of the external displays.

The Razr Ultra is all screen from the front. Credit: Ryan Whitwam

The Razr Ultra makes the usual foldable compromises, but it’s as capable a flip phone as you’ll find right now. It’s blazing fast, it has two big displays, and the materials are top-notch. However, $1,300 is a big ask.

Is the Ultra worth $500 more than the regular Razr? Probably not. Most of what makes the foldable Razrs worth using is present on the cheaper model. You still get the solid construction, cool materials, great selfies, and a useful (though slightly smaller) outer display. Yes, it’s a little slower, but it’s more than fast enough as long as you’re not a heavy gamer. Just be aware of the potential for Moto AI to beam your data to the cloud.

There is also the Razr+, which slots in between the models we have tested at $1,000. It’s faster than the base model and has the same large external display as the Ultra. This model could be the sweet spot if neither the base model nor the flagship does it for you.

The good

  • Sleek design with distinctive materials
  • Great performance from Razr Ultra
  • Useful external display
  • Big displays in a pocket-friendly package

The bad

  • Too much AI
  • Razr Ultra is very expensive
  • Only three years of OS updates, four years of security patches
  • Cameras trail the competition

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Motorola Razr and Razr Ultra (2025) review: Cool as hell, but too much AI Read More »

gm’s-lmr-battery-breakthrough-means-more-range-at-a-lower-cost

GM’s LMR battery breakthrough means more range at a lower cost

Kelty also believes it just makes sense to localize production. He pointed out that when consumer electronics with batteries took off, the supply chain developed around the customers in Southeast Asia. The customers, in that case, are the electronics manufacturers. He said the same thing makes sense in the United States.

There might be an inclination to give President Trump and his administration credit for this onshoring initiative, but the company has been working on localizing battery production for years. Even development on the LMR battery technology had been happening long before the current administration took over.

A battery technician at the General Motors Wallace Battery Cell Innovation Center takes a chemistry slurry sample. Credit: Steve Fecht for General Motors

That research and development of new technologies remains ongoing. In addition to testing battery cells in every known condition on Earth, GM can produce packs in production-ready format on site in Warren, just at a slower pace, to fine-tune the process and ensure a better-quality product. The company is currently working on a facility that will be able to make production-quality batteries at production speeds, so when a new line or a new plant is brought online somewhere else, all the kinks will already have been worked out.

GM’s LMR batteries feel like a logical evolution of the lithium-ion batteries that appear in EVs already. The company now has the facilities to build the highest-quality battery solution that it can. It’s also clear that the company has been working on this for quite some time.

If this all sounds like what Ford announced recently, it is. For its part, Ford says its research is not a lab experiment and that it will appear in vehicles before the end of the decade. While I can’t say who landed on the technology first, it’s clear that GM has a production plan and knows what specific products you’ll see it in to start.

General Motors Wallace Battery Cell Innovation Center focuses on advanced technical work for cutting-edge battery technology and prototyping full-size cells. Credit: General Motors

If LMR delivers on the promise, we’ll have a battery technology that delivers more range for less money. If there’s one takeaway from talking to the folks working on batteries in Warren, it’s that their guiding star is to make EVs affordable.

Kelty even challenged the room full of reporters. “Can anybody name a reason why you would not buy an EV if it’s price parity with ICE? I’ll argue it,” he said.

Kelty also hinted at some upcoming technology to help GM’s batteries work better in sub-optimal weather conditions, though he wouldn’t comment or elaborate on future products.

We’re still a couple of years away from production, but if General Motors can deliver on the tech, we’ll be one step closer to mainstream adoption.

GM’s LMR battery breakthrough means more range at a lower cost Read More »

the-tinkerers-who-opened-up-a-fancy-coffee-maker-to-ai-brewing

The tinkerers who opened up a fancy coffee maker to AI brewing

(Ars contacted Fellow Products for comment on AI brewing and profile sharing and will update this post if we get a response.)

Opening up brew profiles

Fellow's brew profiles are typically shared with buyers of its "Drops" coffees or between individual users through a phone app. Credit: Fellow Products

Aiden profiles are shared and added to Aiden units through Fellow’s brew.link service. But the profiles are not offered in an easy-to-sort database, nor are they easy to scan for details. So Aiden enthusiast and hobbyist coder Kevin Anderson created brewshare.coffee, which gathers both general and bean-based profiles, makes them easy to search and load, and adds optional but quite helpful suggested grind sizes.

As a non-professional developer jumping into a public offering, he had to work hard on data validation, backend security, and mobile-friendly design. “I just had a bit of an idea and a hobby, so I thought I’d try and make it happen,” Anderson writes. With his tool, brew links can be stored and shared more widely, which helped both Dixon and another AI/coffee tinkerer.
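
Anderson hasn't published his backend, but the kind of server-side validation he describes (rejecting malformed links and nonsense brew parameters before a shared profile goes live) can be sketched in a few lines of Python. The field names and limits below are hypothetical, not brewshare.coffee's actual schema:

```python
# A minimal sketch of validating a user-submitted brew profile before storing it.
# Field names and limits are hypothetical, not brewshare.coffee's real schema.
from pydantic import BaseModel, Field, HttpUrl, ValidationError


class BrewProfile(BaseModel):
    title: str = Field(min_length=3, max_length=80)
    roaster: str | None = Field(default=None, max_length=80)
    brew_link: HttpUrl                            # the shareable profile URL
    water_temp_c: float = Field(ge=70, le=100)    # reject nonsense temperatures
    ratio_g_per_l: float = Field(ge=40, le=90)    # grams of coffee per liter of water
    grind_hint: str | None = Field(default=None, max_length=120)


def validate_submission(payload: dict) -> BrewProfile | None:
    """Return a cleaned profile, or None if the submission doesn't check out."""
    try:
        return BrewProfile(**payload)
    except ValidationError:
        return None
```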

Gabriel Levine, director of engineering at retail analytics firm Leap Inc., lost his OXO coffee maker (aka the “Barista Brain”) to malfunction just before the Aiden debuted. The Aiden appealed to Levine as a way to move beyond his coffee rut—a “nice chocolate-y medium roast, about as far as I went,” he told Ars. “This thing that can be hyper-customized to different coffees to bring out their characteristics; [it] really kind of appealed to that nerd side of me,” Levine said.

Levine had also been doing AI stuff for about 10 years, or “since before everyone called it AI—predictive analytics, machine learning.” He described his career as “both kind of chief AI advocate and chief AI skeptic,” alternately driving real findings and talking down “everyone who… just wants to type, ‘how much money should my business make next year’ and call that work.” Like Dixon, Levine’s work and fascination with Aiden ended up intersecting.

The coffee maker with 3,588 ideas

The author’s conversation with the Aiden Profile Creator, which pulled in both brewing knowledge and product info for a widely available coffee:

Levine's Aiden Profile Creator is a ChatGPT bot set up with a custom prompt and told to weight certain knowledge more heavily. What kind of prompt and knowledge? Levine didn't want to give away his exact work. But he cited resources like the Specialty Coffee Association of America and James Hoffmann's coffee guides as examples of what he fed it.

What it does with that knowledge is something of a mystery to Levine himself. “There’s this kind of blind leap, where it’s grabbing the relevant pieces of information from the knowledge base, biasing toward all the expert advice and extraction science, doing something with it, and then I take that something and coerce it back into a structured output I can put on your Aiden,” Levine said.
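
Levine's exact prompt and knowledge base aren't public, but the overall shape he describes (bias the model toward trusted extraction-science references, then force its answer into a machine-readable profile) is easy to sketch. The snippet below is a hypothetical illustration using the OpenAI Python SDK; the system prompt and the profile fields are invented for this example and are not Fellow's real schema or Levine's actual setup.

```python
# A hypothetical sketch of a "profile creator" flow: a biased system prompt
# plus a forced JSON response. Field names and brewing heuristics are
# illustrative only, not Fellow's schema or Levine's prompt.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a drip-brewing assistant.
Weight guidance from specialty-coffee extraction science most heavily.
Given a coffee's roast level, origin, and process, reply with JSON only:
{"name": str, "ratio_g_per_l": float, "bloom_temp_c": int,
 "brew_temp_c": int, "pulses": int, "grind_hint": str}"""

def create_profile(bean_description: str) -> dict:
    """Ask the model for a structured brew profile for one bag of beans."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": bean_description},
        ],
        response_format={"type": "json_object"},  # force parseable output
    )
    return json.loads(response.choices[0].message.content)

profile = create_profile(
    "Medium roast filter blend, washed process, nutty sweetness, cherry acidity"
)
print(profile)
```

The structured-output step is what turns the model's leap into something a coffee maker can actually load.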

It’s a blind leap, but it has landed just right for me so far. I’ve made four profiles with Levine’s prompt based on beans I’ve bought: Stumptown’s Hundred Mile, a light-roasted batch from Jimma, Ethiopia, from Small Planes, Lost Sock’s Western House filter blend, and some dark-roast beans given as a gift. With the Western House, Levine’s profile creator said it aimed to “balance nutty sweetness, chocolate richness, and bright cherry acidity, using a slightly stepped temperature profile and moderate pulse structure.” The resulting profile has worked great, even if the chatbot named it “Cherry Timber.”

Levine’s chatbot relies on two important things: Dixon’s work in revealing Fellow’s Aiden API and his own workhorse Aiden. Every Aiden profile link is created on a machine, so every profile created by Levine’s chat is launched, temporarily, from the Aiden in his kitchen, then deleted. “I’ve hit an undocumented limit on the number of profiles you can have on one machine, so I’ve had to do some triage there,” he said. As of April 22, nearly 3,600 profiles had passed through Levine’s Aiden.

“My hope with this is that it lowers the bar to entry,” Levine said, “so more people get into these specialty roasts and it drives people to support local roasters, explore their world a little more. I feel like that certainly happened to me.”

Something new is brewing


Having admitted to myself that I find something generated by ChatGPT prompts genuinely useful, I’ve softened my stance slightly on LLM technology, if not the hype. Used within very specific parameters, with everything second-guessed, I’m getting more comfortable asking chat prompts for formatted summaries on topics with lots of expertise available. I do my own writing, and I don’t waste server energy on things I can, and should, research myself. I even generally resist calling language model prompts “AI,” given the term’s baggage. But I’ve found one way to appreciate its possibilities.

This revelation may not be new to someone already steeped in the models. But having tested—and tasted—my first big experiment while willfully engaging with a brewing bot, I’m a bit more awake.

This post was updated at 8:40 am with a different capture of a GPT-created recipe.

The tinkerers who opened up a fancy coffee maker to AI brewing Read More »