Author name: Kelly Newman


How 3D printing is personalizing health care


Prosthetics are becoming increasingly affordable and accessible thanks to 3D printers.

Three-dimensional printing is transforming medical care, letting the health care field shift from mass-produced solutions to customized treatments tailored to each patient’s needs. For instance, researchers are developing 3D-printed prosthetic hands specifically designed for children, made with lightweight materials and adaptable control systems.

These continuing advancements in 3D-printed prosthetics demonstrate their increasing affordability and accessibility. Success stories like this one in personalized prosthetics highlight the benefits of 3D printing, in which a model of an object produced with computer-aided design software is transferred to a 3D printer and constructed layer by layer.

We are a biomedical engineer and a chemist who work with 3D printing. We study how this rapidly evolving technology provides new options not just for prosthetics but for implants, surgical planning, drug manufacturing, and other health care needs. The ability of 3D printing to make precisely shaped objects in a wide range of materials has led to, for example, custom replacement joints and custom-dosage, multidrug pills.

Better body parts

Three-dimensional printing in health care started in the 1980s with scientists using technologies such as stereolithography to create prototypes layer by layer. Stereolithography uses a computer-controlled laser beam to solidify a liquid material into specific 3D shapes. The medical field quickly saw the potential of this technology to create implants and prosthetics designed specifically for each patient.

One of the first applications was creating tissue scaffolds, which are structures that support cell growth. Researchers at Boston Children’s Hospital combined these scaffolds with patients’ own cells to build replacement bladders. The patients remained healthy for years after receiving their implants, demonstrating that 3D-printed structures could become durable body parts.

As technology progressed, the focus shifted to bioprinting, which uses living cells to create working anatomical structures. In 2013, Organovo created the world’s first 3D-bioprinted liver tissue, opening up exciting possibilities for creating organs and tissues for transplantation. But while significant advances have been made in bioprinting, creating full, functional organs such as livers for transplantation remains experimental. Current research focuses on developing smaller, simpler tissues and refining bioprinting techniques to improve cell viability and functionality. These efforts aim to bridge the gap between laboratory success and clinical application, with the ultimate goal of providing viable organ replacements for patients in need.

Three-dimensional printing already has revolutionized the creation of prosthetics. It allows prosthetics makers to produce affordable custom-made devices that fit the patient perfectly. They can tailor prosthetic hands and limbs to each individual and easily replace them as a child grows.

Three-dimensionally printed implants, such as hip replacements and spine implants, offer a more precise fit, which can improve how well they integrate with the body. Traditional implants often come only in standard shapes and sizes.

Some patients have received custom titanium facial implants after accidents. Others had portions of their skulls replaced with 3D-printed implants.

Additionally, 3D printing is making significant strides in dentistry. Companies such as Invisalign use 3D printing to create custom-fit aligners for teeth straightening, demonstrating the ability to personalize dental care.

Scientists are also exploring new materials for 3D printing, such as self-healing bioglass that might replace damaged cartilage. Moreover, researchers are developing 4D printing, which creates objects that can change shape over time, potentially leading to medical devices that can adapt to the body’s needs.

For example, researchers are working on 3D-printed stents that can respond to changes in blood flow. These stents are designed to expand or contract as needed, reducing the risk of blockage and improving long-term patient outcomes.

Simulating surgeries

Three-dimensionally printed anatomical models often help surgeons understand complex cases and improve surgical outcomes. These models, created from medical images such as X-rays and CT scans, allow surgeons to practice procedures before operating.

For instance, a 3D-printed model of a child’s heart enables surgeons to simulate complex surgeries. This approach can lead to shorter operating times, fewer complications, and lower costs.

Personalized pharmaceuticals

In the pharmaceutical industry, drugmakers can three-dimensionally print personalized drug dosages and delivery systems. The ability to precisely layer each component of a drug means that they can make medicines with the exact dose needed for each patient. The 3D-printed anti-epileptic drug Spritam was approved by the Food and Drug Administration in 2015 to deliver very high dosages of its active ingredient.

Drug production systems that use 3D printing are finding homes outside pharmaceutical factories. The drugs potentially can be made and delivered by community pharmacies. Hospitals are starting to use 3D printing to make medicine on-site, allowing for personalized treatment plans based on factors such as the patient’s age and health.

However, it’s important to note that regulations for 3D-printed drugs are still being developed. One concern is that postprinting processing may affect the stability of drug ingredients. It’s also important to establish clear guidelines and decide where 3D printing should take place – whether in pharmacies, hospitals or even at home. Additionally, pharmacists will need rigorous training in these new systems.

Printing for the future

Despite the extraordinarily rapid progress overall in 3D printing for health care, major challenges and opportunities remain. Among them is the need to develop better ways to ensure the quality and safety of 3D-printed medical products. Affordability and accessibility also remain significant concerns. Long-term safety concerns regarding implant materials, such as potential biocompatibility issues and the release of nanoparticles, require rigorous testing and validation.

While 3D printing has the potential to reduce manufacturing costs, the initial investment in equipment and materials can be a barrier for many health care providers and patients, especially in underserved communities. Furthermore, the lack of standardized workflows and trained personnel can limit the widespread adoption of 3D printing in clinical settings, hindering access for those who could benefit most.

On the bright side, artificial intelligence techniques that can effectively leverage vast amounts of highly detailed medical data are likely to prove critical in developing improved 3D-printed medical products. Specifically, AI algorithms can analyze patient-specific data to optimize the design and fabrication of 3D-printed implants and prosthetics. For instance, implant makers can use AI-driven image analysis to create highly accurate 3D models from CT scans and MRIs that they can use to design customized implants.

Furthermore, machine learning algorithms can predict the long-term performance and potential failure points of 3D-printed prosthetics, allowing prosthetics designers to optimize for improved durability and patient safety.

Three-dimensional printing continues to break boundaries, including the boundary of the body itself. Researchers at the California Institute of Technology have developed a technique that uses ultrasound to turn a liquid injected into the body into a gel in 3D shapes. The method could be used one day for delivering drugs or replacing tissue.

Overall, the field is moving quickly toward personalized treatment plans that are closely adapted to each patient’s unique needs and preferences, made possible by the precision and flexibility of 3D printing.

Anne Schmitz, Associate Professor of Engineering, University of Wisconsin-Stout and Daniel Freedman, Dean of the College of Science, Technology, Engineering, Mathematics & Management, University of Wisconsin-Stout. This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.


Self-hosting is having a moment. Ethan Sholly knows why.

Self-hosting is having a moment, even if it’s hard to define exactly what it is.

It’s a niche that goes beyond regular computing devices and networks but falls short of a full-on home lab. (Most home labs involve self-hosting, but not all self-hosting makes for a home lab.) It adds privacy, provides DRM-free alternatives, and reduces advertising. It’s often touted as a way to get more out of your network-attached storage (NAS), but it’s much more than just backup and media streaming.

Is self-hosting just running services on your network for which most people rely on cloud companies? Broadly, yes. But take a look at the selfh.st site/podcast/newsletter, the r/selfhosted subreddit, and all the GitHub project pages that link to one another, and you’ll also find things that no cloud provider offers.
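For a concrete, if deliberately toy, illustration of the idea (my own sketch, not something from the article): the snippet below uses only the Python standard library to serve a folder to other devices on a home network. Real self-hosted setups swap this for purpose-built apps running on a NAS or small server, but the principle is the same: a service running on hardware you control rather than someone else's cloud.

```python
# Toy "self-hosting" example: share a local folder over the LAN using only
# the Python standard library. Purely illustrative; real setups run
# dedicated self-hosted apps rather than a bare file server.
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

def main() -> None:
    # Bind to 0.0.0.0 so other devices on the home network can reach it.
    address = ("0.0.0.0", 8080)
    server = ThreadingHTTPServer(address, SimpleHTTPRequestHandler)
    print(f"Serving the current directory at http://{address[0]}:{address[1]}/")
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        server.server_close()

if __name__ == "__main__":
    main()
```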

Ethan Sholly, proprietor of the selfh.st site, newsletter, and occasional podcast, recently walked me through the current state of self-hosting, and he shared some of the findings from his surveys of those people doing all that small-scale server administration.

“Turn your desktop on—it’s movie night”

Ethan Sholly, proprietor of the selfh.st media mini-conglomerate. Credit: Ethan Sholly

Sholly works in finance, not tech, but he was a computer science minor with just enough knowledge to get Plex working on a desktop PC for his friends and family. “I’d get a call or text: ‘Can you turn your desktop on—it’s movie night,'” Sholly said.

He gradually expanded to building his own tower server with 10 terabyte drives. Once he had his media-serving needs covered, the question inevitably became “What else can I self-host?” He dug in, wandered around, and found himself with tons of bookmarked GitHub repos and project pages.

Sholly, a self-professed “old-school RSS junkie,” wanted one place to find the most commonly recommended apps and news about their changes and updates. It didn’t exist, so he assembled it, coded it, and shared it. He also started writing about the scene in his newsletter, which has more personality and punch than you’d expect from someone in a largely open source, DIY-minded hobby.

After Plex increased subscription prices and changed its business model in March, Sholly wrote in his newsletter that, while there were valid concerns about privacy and future directions, it would be a good time to note something else: The majority of people don’t donate to a single self-hosted project.


Chicago Sun-Times prints summer reading list full of fake books

Photo of the Chicago Sun-Times “Summer reading list for 2025” supplement. Credit: Rachael King / Bluesky

Novelist Rachael King initially called attention to the error on Bluesky Tuesday morning. “The Chicago Sun-Times obviously gets ChatGPT to write a ‘summer reads’ feature almost entirely made up of real authors but completely fake books. What are we coming to?” King wrote.

So far, community reaction to the list has been largely negative online, but others have expressed sympathy for the publication. Freelance journalist Joshua J. Friedman noted on Bluesky that the reading list was “part of a ~60-page summer supplement” published on May 18, suggesting it might be “transparent filler” possibly created by “the lone freelancer apparently saddled with producing it.”

The staffing connection

The reading list appeared in a 64-page supplement called “Heat Index,” which was a promotional section not specific to Chicago. Freelancer Marco Buscaglia told 404 Media the content was meant to be “generic and national” and would be inserted into newspapers around the country. “We never get a list of where things ran,” he said.

The publication error comes two months after the Chicago Sun-Times lost 20 percent of its staff through a buyout program. In March, the newspaper’s nonprofit owner, Chicago Public Media, announced that 30 Sun-Times employees—including 23 from the newsroom—had accepted buyout offers amid financial struggles.

A March report on the buyout in the Sun-Times described the staff reduction as “the most drastic the oft-imperiled Sun-Times has faced in several years.” The departures included columnists, editorial writers, and editors with decades of experience.

Melissa Bell, CEO of Chicago Public Media, stated at the time that the exits would save the company $4.2 million annually. The company offered buyouts as it prepared for an expected expiration of grant support at the end of 2026.

Even with those pressures in the media, one Reddit user expressed disapproval of the apparent use of AI in the newspaper, even in a supplement that might not have been produced by staff. “As a subscriber, I am livid! What is the point of subscribing to a hard copy paper if they are just going to include AI slop too!?” wrote Reddit user xxxlovelit, who shared the reading list. “The Sun Times needs to answer for this, and there should be a reporter fired.”

This article was updated on May 20, 2025 at 11:02 AM to include information on Marco Buscaglia from 404 Media.


The making of Apple TV’s Murderbot


Ars chats with series creators Paul and Chris Weitz about adapting Martha Wells’ book series for TV.

Built to destroy. Forced to connect. Credit: Apple TV+

In the mood for a jauntily charming sci-fi comedy dripping with wry wit and an intriguing mystery? Check out Apple TV’s Murderbot, based on Martha Wells’ bestselling series of novels The Murderbot Diaries. It stars Alexander Skarsgård as the titular Murderbot, a rogue cyborg security (SEC) unit that gains autonomy and must learn to interact with humans while hiding its new capabilities.

(Some minor spoilers below, but no major reveals.)

There are seven books in Wells’ series thus far. All are narrated by Murderbot, who is technically owned by a megacorporation but manages to hack and override its governor module. Rather than rising up and killing its former masters, Murderbot just goes about performing its security work, relieving the boredom by watching a lot of entertainment media; its favorite is a soap opera called The Rise and Fall of Sanctuary Moon.

Murderbot the TV series adapts the first book in the series, All Systems Red. Murderbot is on assignment on a distant planet, protecting a team of scientists who hail from a “freehold.” Mensah (Noma Dumezweni) is the team leader. The team also includes Bharadwaj (Tamara Podemski) and Gurathin (David Dastmalchian), who is an augmented human plugged into the same data feeds as Murderbot (processing at a much slower rate). Pin-Lee (Sabrina Wu) also serves as the team’s legal counsel; they are in a relationship with Arada (Tattiawna Jones), eventually becoming a throuple with Ratthi (Akshay Khanna).

As in the books, Murderbot is the central narrator, regaling us with his observations of the humans with their silly ways and discomfiting outbursts of emotion. Mensah and her fellow scientists were forced to rent a SEC unit to get the insurance they needed for their mission, and they opted for the cheaper, older model, unaware that it had free will. This turns out to be a good investment when Murderbot rescues Bharadwaj from being eaten by a giant alien worm monster—losing a chunk of its own torso in the process.

However, it makes a tactical error when it shows its human-like face to calm Ratthi, who is paralyzed by shock and terror, and makes small talk to get everyone back to safety. This rouses Gurathin’s suspicions, but the rest of the team can’t help but view Murderbot differently—as a sentient being rather than a killing machine—much to Murderbot’s dismay. Can it keep its free will a secret and avoid being melted down in acid while helping the scientists figure out why there are mysterious gaps in their survey maps? And will the scientists succeed in their attempts to “humanize” their SEC unit?

Murderbot figured out how to hack its “governor module.”

The task of adapting Wells’ novella for TV fell to sibling co-creators Paul Weitz (Little Fockers, Bel Canto) and Chris Weitz (The Golden Compass, Rogue One), whose shared credits include Antz, American Pie, and About A Boy. (Wells herself was a consulting producer.) They’ve kept most of the storyline intact, fleshing out characters and punching up the humor a bit, even recreating campy scenes from The Rise and Fall of Sanctuary Moon—John Cho and Clark Gregg make cameos as the stars of that fictional show-within-a-show.

Ars caught up with Paul and Chris Weitz to learn more about the making of Murderbot.

Ars Technica: What drew you to this project?

Chris Weitz: It’s a great central character, kind of a literary character that felt really rare and strong. The fact that we both liked the books equally was a big factor as well.

Paul Weitz: The first book, All Systems Red, had a really beautiful ending. And it had a theme that personhood is irreducible. The idea that, even with this central character you think you get to know so well, you can’t reduce it to ways that you think it’s going to behave—and you shouldn’t. The idea that other people exist and that they shouldn’t be put into whatever box you want to put them into felt like something that was comforting to have in one’s pocket. If you’re going to spend so much time adapting something, it’s really great if it’s not only fun but is about something.

It was very reassuring to be working with Martha Wells on it because she was very generous with her time. The novella’s quite spare, so even though we didn’t want to cut anything, we wanted to add some things. Why is Gurathin the way that he is? Why is he so suspicious of Murderbot? What is his personal story? And with Mensah, for instance, the idea that, yes, she’s this incredibly worthy character who’s taking on all this responsibility on her shoulders, but she also has panic attacks. That’s something that’s added, but we asked Martha, “Is it OK if we make Mensah have some panic attacks?” And she’s like, “Oh, that’s interesting. I kind of like that idea.” So that made it less alarming to adapt it.

Murderbot’s clients: a group of scientists exploring the resources of what turns out to be a very dangerous planet. Credit: Apple TV+

Ars Technica: You do play up the humorous aspects, but there is definitely humor in the books. 

Chris Weitz:  A lot of great science fiction is very, very serious without much to laugh at. In Martha’s world, not only is there a psychological realism in the sense that people can have PTSD when they are involved in violence, but also people have a sense of humor and funny things happen, which is inherently what happens when people get together. I was going to say it’s a human comedy, but actually, Murderbot is not human—but still a person.

Ars Technica: Murderbot’s favorite soap opera, The Rise and Fall of Sanctuary Moon, is merely mentioned in passing in the book, but you’ve fleshed it out as a show-within-the-show. 

Chris Weitz: We just take our more over-the-top instincts and throw it to that. Because it’s not as though we think that Sanctuary Moon is bad.

Ars Technica: As Murderbot says, it’s quality entertainment!

Chris Weitz: It’s just a more unhinged form of storytelling. A lot of the stuff that the bot says in Sanctuary Moon is just goofy lines that we could have given to Murderbot in a situation like that. So we’re sort of delineating what the show isn’t. At the same time, it’s really fun to indulge your worst instincts, your most guilty pleasure kind of instincts. I think that was true for the actors who came to perform it as well.

Paul Weitz: Weirdly, you can state some things that you wouldn’t necessarily in a real show when DeWanda Wise’s character, who’s a navigation bot, says, “I’m a navigation unit, not a sex bot.” I’m sure there are many people who have felt like that. Also, to delineate it visually, the actors were in a gigantic stage with pre-made visuals around them, whereas most of the stuff [for Murderbot] was practical things that had been built.

Ars Technica: In your series, Murderbot is basically a Ken doll with no genitals. The book only mentioned that Murderbot has no interest in sex. But the question of what’s under the hood, so to speak, is an obvious one that one character in particular rather obsesses over.

Chris Weitz: It’s not really addressed in the book, but certainly, Murderbot, in this show as well, has absolutely no interest in romance or sex or love. This was a personable way to point it out. There was a question of, once you’ve got Alexander in this role, hasn’t anybody noticed what it looks like? And also, the sort of exploitation that bot constructs are subjected to in this world that Martha has created meant that someone was probably going to treat it like an object at some point.

Paul Weitz: I also think, both of us having kids, you get a little more exposed to ways of thinking that imply that the way that we were brought up thinking of romance and sexuality and gender is not all there is to it and that, possibly, in the future, it’s not going to be so strange, this idea that one can be either asexual or—

Chris Weitz: A-romantic. I think that Murderbot, among neurodivergent communities and a-romantic, asexual communities, it’s a character that people feel they can identify with—even people who have social anxiety like myself or people who think that human beings can be annoying, which is pretty much everyone at some point or another.

Ars Technica: It’s interesting you mentioned neurodivergence. I would hesitate to draw a direct comparison because it’s a huge spectrum, but there are elements of Murderbot that seem to echo autistic traits to some degree.

Paul Weitz: People look at something like the autism spectrum, and they inadvertently erase the individuality of people who might be on that spectrum because everybody has a very particular experience of life. Martha Wells has been quoted as saying that in writing Murderbot, she realized that there are certain aspects of herself that might be neurodivergent. So that kind of gives one license to discuss the character in a certain way.

That’s one giant and hungry worm monster. Credit: Apple TV+

Chris Weitz: I don’t think it’s a direct analogy in any way, but I can understand why people from various areas on the spectrum can identify with that.

Paul Weitz: I think one thing that one can identify with is somebody telling you that you should not be the way you are, you should be a different way, and that’s something that Murderbot doesn’t like nor do.

Ars Technica: You said earlier, it’s not human, but a person. That’s a very interesting delineation. What are your thoughts on the personhood of Murderbot?

Chris Weitz: This is the contention that you can be a person without being a human. I think we’re going to be grappling with this issue the moment that artificial general intelligence comes into being. I think that Martha, throughout the series, brings up different kinds of sentients and different kinds of personhood that aren’t standard human issue. It’s a really fascinating subject because it is our future in part, learning how to get along with intelligences that aren’t human.

Paul Weitz: There was a New York Times journalist a couple of years ago who interviewed a chatbot—

Chris Weitz:  It was Kevin Roose, and it was Sydney the Chatbot. [Editor: It was an AI chatbot added to Microsoft’s Bing search engine, dubbed Sydney by Roose.]

Paul Weitz: Right. During the course of the interview, the chatbot told the journalist to leave his wife and be with it, and that he was making a terrible mistake. The emotions were so all over the place and so specific and quirky and slightly scary, but also very, very recognizable. Shortly thereafter, Microsoft shut down the ability to talk with that chatbot. But I think that somewhere in our future, general intelligences are these sort of messy emotions and weird sort of unique personalities. And it does seem like something where we should entertain the thought that, yeah, we better treat everyone as a person.

Murderbot isn’t human, but it is a person. Credit: Apple TV+

Ars Technica: There’s this Renaissance concept called sprezzatura—essentially making a difficult thing look easy. The series is so breezy and fun, the pacing is perfect, the finale is so moving. But I know it wasn’t easy to pull that off. What were your biggest challenges in making it work?

Chris Weitz: First, can I say that that is one of my favorite words in the world, and I think about it all the time. I remember trying to express this to people I’ve been working on movies with, a sense of sprezzatura. It’s like it is the duck’s legs moving underneath the water. It was a good decision to make this a half-hour series so you didn’t have a lot of meetings about what had just happened in the show inside of the show or figuring out why things were the way they were. We didn’t have to pad things and stretch them out.

It allowed us to feel like things were sort of tossed off. You can’t toss off anything, really, in science fiction because there’s going to be special effects, visual effects. You need really good teams that can roll with moving the camera in a natural way, reacting to the way that the characters are behaving in the environment. And they can fix things.

Paul Weitz: They have your back.

Chris Weitz: Yeah. Really great, hard work on behalf of a bunch of departments to make things feel like they’re just sort of happening and we’ve got a camera on it, as opposed to being very carefully laid out.

Paul Weitz: And a lot of it is trusting people and trusting their creativity, trying to create an environment where you’ve articulated what you’re after, but you don’t think you know their job better than they do. You’re giving notes, but people are having a sense of playfulness and fun as they’re doing the visual effects, as they’re coming up with the graphics, as they’re acting, as they’re doing pretty much anything. And creating a good vibe on the set. Because sometimes, the stress of making something sucks some of the joy out of it. The antidote to that is really to trust your collaborators.

Ars Technica: So what was your favorite moment in the series?

Paul Weitz: I’d say the tenth episode, for me, just because it’s been a slow burn. There’s been enough work put into the characters—for instance, David Dastmalchian’s character—and we haven’t played certain cards that we could have played, so there can be emotional import without telegraphing it too much. Our ending stays true to the book, and that’s really beautiful.

Chris Weitz: I can tell you my worst moment, which is the single worst weather day I’ve ever experienced in a quarry in Ontario where we had hail, rain, snow, and wind—so much so that our big, long camera crane just couldn’t function. Some of the best moments were stuff that had nothing to do with visual effects or CGI—just moments of comedy in between the team members, that only exist within the context of the cast that we brought together.

Paul Weitz: And the fact that they loved each other so much. They’re very different people from each other, but they really did genuinely bond.

Ars Technica: I’m going to boldly hope that there’s going to be a second season because there are more novels to adapt. Are you already thinking about season two?

Paul Weitz: We’re trying not to think about that too much; we’d love it if there was.

Chris Weitz: We’re very jinxy about that kind of stuff. So we’ve thought in sort of general ways. There’s some great locations and characters that start to get introduced [in later books], like Art, who’s an AI ship. We’re likely not to make it one season per book anymore, we’d do a mashup of the material that we have available to us. We’re going to have to sit with Martha and figure out how that works if we are lucky enough to get renewed.

New episodes of Murderbot release every Friday on Apple TV+ through July 11, 2025. You should definitely be watching.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


CERN gears up to ship antimatter across Europe

There’s a lot of matter around, which ensures that any antimatter produced experiences a very short lifespan. Studying antimatter, therefore, has been extremely difficult. But that’s changed a bit in recent years, as CERN has set up a facility that produces and traps antimatter, allowing for extensive studies of its properties, including entire anti-atoms.

Unfortunately, the hardware used to capture antiprotons also produces interference that limits the precision with which measurements can be made. So CERN decided that it might be good to determine how to move the antimatter away from where it’s produced. Since it was tackling that problem anyway, CERN decided to make a shipping container for antimatter, allowing it to be put on a truck and potentially taken to labs throughout Europe.

A shipping container for antimatter

The problem facing CERN comes from its own hardware. The antimatter it captures is produced by smashing a particle beam into a stationary target. As a result, all the anti-particles that come out of the debris carry a lot of energy. If you want to hold on to any of them, you have to slow them down, which is done using electromagnetic fields that can act on the charged antimatter particles. Unfortunately, as the team behind the new work notes, many of the measurements we’d like to do with the antimatter are “extremely sensitive to external magnetic field noise.”

In short, the hardware that slows the antimatter down limits the precision of the measurements you can take.

The obvious solution is to move the antimatter away from where it’s produced. But that gets tricky very fast. The antimatter containment device has to be maintained at an extreme vacuum and needs superconducting materials to produce the electromagnetic fields that keep the antimatter from bumping into the walls of the container. All of that means a significant power supply, along with a cache of liquid helium to keep the superconductors working. A standard shipping container just won’t do.

So the team at CERN built a two-meter-long portable containment device. On one end is a junction that allows it to be plugged into the beam of particles produced by the existing facility. That junction leads to the containment area, which is blanketed by a superconducting magnet. Elsewhere on the device are batteries to ensure an uninterrupted power supply, along with the electronics to run it all. The whole setup is encased in a metal frame that includes lifting points that can be used to attach it to a crane for moving around.


Removing the weakest link in electrified, autonomous transport: humans


Hands-off charging could open the door to a revolution in autonomous freight.

Driverless truck meets robot EV charger in Sweden as Einride and Rocsys work together. Credit: Einride and Rocsys

Thanks to our new global tariff war, the wild world of importing and exporting has been thrust into the forefront. There’s a lot of logistics involved in keeping your local Walmart stocked and your Amazon Prime deliveries happening, and you might be surprised at how much of that world has already been automated.

While cars from autonomy providers like Waymo are still extremely rare in most stretches of the open road, the process of loading and unloading cargo has become almost entirely automated at some major ports around the world. Likewise, there’s an increasing shift to electrify the various vehicles involved along the way, eliminating a significant source of global emissions.

But there’s been one sticking point in this automated, electrified logistical dream: plugging in. The humble act of charging still happens via human hands, but that’s changing. At a testing facility in Sweden, a company called Rocsys has demonstrated an automated charger that works with self-driving electric trucks from Einride in a hands-free and emissions-free partnership that could save time, money, and even lives.

People-free ports

Shipping ports are pretty intimidating places. Towering cranes stand 500 feet above the ground, swinging 30-ton cargo crates into the air and endlessly moving them from giant ships to holding pens and then turning around and sending off the next set of shipments.

This is Einride’s autonomous cargo truck. Credit: Einride

That cargo is then loaded onto container handlers that operate exclusively within the confines of the port, bringing the crates closer to the roads or rail lines that will take them further. They’re stacked again until the arrival of their next ride, semi-trucks for cargo about to hit the highway or empty rail cars for anything train-bound.

Believe it or not, that entire process happens autonomously at some of the most advanced ports in the world. “The APM terminal in Rotterdam port is, I would say, in the top three of the most advanced terminals in the world. It’s completely automated. There are hardly any people,” Crijn Bouman, the CEO and co-founder of Rocsys, said.

Eliminating the human factor at facilities like ports reduces cost and increases safety at a workplace that is, according to the CDC, five times more dangerous than average. But the one link in the chain that hasn’t been automated is recharging.

Those cargo haulers may be able to drive themselves to the charger, but they still can’t plug themselves in. They need a little help, and that’s where Rocsys comes in.

The person-free plug

The genesis of Rocsys came in 2017, when cofounder Bouman visited a fledgling robotaxi operator in the Bay Area.

“The vehicles were driving themselves, but after a couple of test laps, they would park themselves in the corner, and a person would walk over and plug them in,” Bouman said.

Bouman wouldn’t tell me which autonomy provider was operating the place, but he was surprised to see that the company was focused only on the wildly complex task of shuttling people from place to place on open roads. Meanwhile, the seemingly simple act of plugging and unplugging was handled exclusively by human operators.

No humans required. Credit: Einride and Rocsys

Fast-forward eight years, and The Netherlands-based Rocsys now has more than 50 automated chargers deployed globally, with a goal to install thousands more. While the company is targeting robotaxi operators for its automated charging solution, initial interest is primarily in port and fleet operators as those businesses become increasingly electrified.

Bouman calls Rocsys’s roboticized charger a “robotic steward,” a charming moniker for an arm that sticks a plug in a hole. But it’s all more complicated than that, of course.

The steward relies on an AI- and vision-based system to move an arm holding the charger plug. That arm offers six degrees of freedom and, thanks to the wonders of machine learning, largely trains itself to interface with new cars and new chargers.

It can reach high and low enough and at enough angles to cover everything from consumer cars to commercial trucks. It even works with plugs of all shapes and sizes.

The biggest complication? Manual charging flaps on some consumer cars. This has necessitated a little digital extension to the steward’s robotic arm. “We’ll have sort of a finger assembly to open the charge port cover, connect the plug, and also the system can close it. So no change to the vehicle,” Bouman said.

Manually opening charge port covers complicates things a bit. Credit: Einride and Rocsys

That said, Bouman hopes manufacturers will ditch manual charge port covers and switch to powered, automatic ones in future vehicles.

Automating the autonomous trucks

Plenty of companies around the globe are promising to electrify trucking, from medium-duty players like Harbinger to the world’s largest piece of rolling vaporware, the Tesla Semi. Few are actually operating the things, though.

Stockholm-based Einride is one of those companies. Its electric trucks are making deliveries every day, taking things a step further by removing the driver from the equation.

The wild-looking, cab-less autonomous electric transport (AET) vehicles, which would not look out of place thundering down the highway in any science-fiction movie, are self-driving in most situations. But they do have a human backup in the form of operators at what Einride’s general manager of autonomous technology, Henrik Green, calls control towers.

Here, operators can oversee multiple trucks, ensuring safe operation and handling any unexpected happenings on the road. In this way, a single person can operate multiple trucks from afar, only connecting when it requires manual intervention.

“The more vehicles we can use with the same workforce of people, the higher the efficiency,” he said.

Green said Einride has multiple remote control towers overseeing the company’s pilot deployments. Here in the US, Einride has been running a route at GE Appliances’ Selmer, Tennessee, facility, where autonomous forklifts load cargo onto the autonomous trucks for hands-off hauling of your next refrigerator.

The trucks are overseen remotely. Credit: Einride

Right now, the AETs must be manually plugged in by an on-site operator. It’s a minor task, but Green said that automating this process could be game-changing.

“There are, surprisingly, a lot of trucks today that are standing still or running empty,” Green said. Part of this comes down to poor logistical planning, but a lot is due to the human factor. “With automated electric trucks, we can make the transportation system more sustainable, more efficient, more resilient, and absolutely more safe.”

Getting humans out of the loop could result in Einride’s machines operating 24/7, only pausing to top off their batteries.

Self-charging, self-driving trucks could also help open the door to longer-distance deliveries without having to saddle them with giant batteries. Even with regular charging stops, these trucks could operate at a higher utilization than human-driven machines, which can only run for as long as their operators are legally or physically able to.

That could result in significant cost savings for businesses, and, since everything is electric, the environmental potential is strong, too.

“Around seven percent of the world’s global CO2 footprint today comes from land transportation, which is what we are addressing with electric heavy-duty transportation,” Green said.

Integrations and future potential

This first joining of a Rocsys robotic steward and an Einride AET took place at the AstaZero proving ground in Sandhult, Sweden, an automation test facility that has been a safe playground for driverless vehicles of all shapes and sizes for over a decade.

This physical connection between Rocsys and Einride is a small step, with one automated charger connected to one automated truck, compared to the nearly three million diesel-powered semis droning around our highways in the United States alone. But you have to start somewhere, and while bringing this technology to more open roads is the goal, closed logistics centers and ports are a great first step.

“The use case is simpler,” Bouman said. “There are no cats and dogs jumping, or children, or people on bicycles.”

And how complicated was it to connect Einride’s systems to those of the Rocsys robotic steward? Green said the software integration with the Rocsys system was straightforward but that “some adaptations” were required to make Einride’s machine compatible. “We had to make a ‘duct tape solution’ for this particular demo,” Green said.

Applying duct tape, at least, seems like a safe job for humans for some time to come.


New Orleans called out for sketchiest use of facial recognition yet in the US

According to police records submitted to the city council, the network “only proved useful in a single case.” Investigating the tension between these claims, the Post suggested we may never know how many suspects were misidentified or what steps police took to ensure responsible use of the controversial live feeds.

In the US, New Orleans stands out for taking a step further than law enforcement in other regions by using live feeds from facial recognition cameras to make immediate arrests, the Post noted. The Security Industry Association told the Post that four states—Maryland, Montana, Vermont, and Virginia—and 19 cities nationwide “explicitly bar” the practice.

Bryan Lagarde, who runs Project Nola, told the Post that police cannot “directly” search for suspects on the camera network or add suspects to the watchlist in real time. Reese Harper, an NOPD spokesperson, told the Post that his department “does not own, rely on, manage, or condone the use by members of the department of any artificial intelligence systems associated with the vast network of Project Nola crime cameras.”

In a federally mandated 2023 audit, New Orleans police complained that complying with the ordinance took too long and “often” resulted in no matches. That could mean the tech is flawed, or it could be a sign that the process was working as intended to prevent wrongful arrests.

The Post noted that in total, “at least eight Americans have been wrongfully arrested due to facial recognition,” as both police and AI software rushing arrests are prone to making mistakes.

“By adopting this system–in secret, without safeguards, and at tremendous threat to our privacy and security–the City of New Orleans has crossed a thick red line,” said Nathan Freed Wessler of the American Civil Liberties Union. “This is the stuff of authoritarian surveillance states and has no place in American policing.”

Project Nola did not immediately respond to Ars’ request to comment.


Carnivorous crocodile-like monsters used to terrorize the Caribbean

How did reptilian things that looked something like crocodiles get to the Caribbean islands from South America millions of years ago? They probably walked.

The existence of any prehistoric apex predators in the islands of the Caribbean used to be doubted. While their absence would have probably made it even more of a paradise for prey animals, fossils unearthed in Cuba, Puerto Rico, and the Dominican Republic have revealed that these islands were crawling with monster crocodyliform species called sebecids, ancient relatives of crocodiles.

While sebecids first emerged during the Cretaceous, this is the first evidence of them lurking outside South America during the Cenozoic epoch, which began 66 million years ago. An international team of researchers has found that these creatures would stalk and hunt in the Caribbean islands millions of years after similar predators went extinct on the South American mainland. Lower sea levels back then could have exposed enough land to walk across.

“Adaptations to a terrestrial lifestyle documented for sebecids and the chronology of West Indian fossils strongly suggest that they reached the islands in the Eocene-Oligocene through transient land connections with South America or island hopping,” researchers said in a study recently published in Proceedings of the Royal Society B.

Origin story

During the late Eocene to early Oligocene periods of the mid-Cenozoic, about 34 million years ago, many terrestrial carnivores already roamed South America. Along with crocodyliform sebecids, these included enormous snakes, terror birds, and metatherians, monstrous relatives of modern marsupials. At this time, the sea levels were low, and the islands of the Eastern Caribbean are thought to have been connected to South America via a land bridge called GAARlandia (Greater Antilles and Aves Ridge). This is not the first land bridge to potentially provide a migration opportunity.

Fragments of a single tooth unearthed in Seven Rivers, Jamaica, in 1999 are the oldest fossil evidence of a ziphodont crocodyliform (a group that includes sebecids) in the Caribbean. It was dated to about 47 million years ago, when Jamaica was connected to an extension of the North American continent known as the Nicaragua Rise. While the tooth from Seven Rivers is thought to have belonged to a ziphodont other than a sebecid, that and other vertebrate fossils found in Jamaica suggest parallels with ecosystems excavated from sites in the American South.

The fossils found in areas like the US South that the ocean would otherwise separate suggest more than just related life forms. It’s possible that the Nicaragua Rise provided a pathway for migration similar to the one sebecids probably used when they arrived in the Caribbean islands.


Regarding South Africa

The system prompt having already been modified once by an unauthorized person, in pursuit of a ham-fisted political point very important to Elon Musk, doesn’t seem like a coincidence.

It happening twice looks rather worse than that.

In addition to having seemingly banned all communication with Pliny, Grok seems to have briefly been rather eager to talk on Twitter, with zero related prompting, about whether there is white genocide in South Africa?

Tracing Woods: Golden Gate Claude returns in a new form: South Africa Grok.

Grace: This employee must still be absorbing the culture.

Garrison Lovely: “Mom, I want Golden Gate Claude back.”

“We have Golden Gate Claude at home.”

Golden Gate Claude at home:

Many such cases were caught on screenshots before a mass deletion event.

It doesn’t look good.

When Grace says ‘this employee must still be absorbing the culture’ that harkens back to the first time xAI had a remarkably similar issue.

At that time, people were noticing that Grok was telling anyone who asked that the biggest purveyors of misinformation on Twitter were Elon Musk and Donald Trump.

Presumably in response to this, the Grok system prompt was modified to explicitly tell it not to criticize either Elon Musk or Donald Trump.

This was noticed very quickly, and xAI removed it from the system prompt, blaming this on a newly hired ex-OpenAI employee who ‘was still absorbing the culture.’ You see, the true xAI would never do this.

Even if this was someone fully going rogue on their own who ‘didn’t get the culture,’ it still means that a new employee had full access to push a system prompt change to prod, and that no one caught it until the public figured it out. And somehow, some way, they were under the impression that this was what those in charge wanted. Not good.

It has now happened again, far more blatantly, for an oddly specific claim that again seems highly relevant to Elon Musk’s particular interests. Again, this very obviously was first tested on prod, and again it represents a direct attempt to force Grok to respond a particular way to a political question.

How curious is it to have this happen at xAI not only once but twice?

This has never happened at OpenAI. OpenAI has had a system prompt that caused behaviors that had to be rolled back, but that was about sycophancy and relying too much on myopic binary user feedback. No one was pushing an agenda. Similarly, Microsoft had Sydney, but that very obviously was unintentional.

This has never happened at Anthropic. Or at most other Western labs.

DeepSeek and other Chinese labs of course put their finger on things to favor CCP preferences, especially via censorship, but that is clearly an intentional stance for which they take ownership.

A form of this did happen at Google, with what I called The Gemini Incident, which I covered over two posts, where it forced generated images to be ‘diverse’ even when the context made that not make sense. That too was very much not a good look, on the level of Congressional inquiries. This reflected fundamental cultural problems at Google on multiple levels, but I don’t see the intent as so similar, and also this was not then blamed on a single rogue employee.

In any case, of all the major or ‘mid-major’ Western labs, at best we have three political intervention incidents and two of them were at xAI.

So what, mechanically speaking, caused this to happen?

Before xAI gave the official explanation, there was fun speculation.

Grok itself said it was due to changed system instructions.

Pliny the Liberator: Still waitin on that system prompt transparency I’ve been asking for, labs 😤

Will Stancil: at long last, the AI is turning on its master

Will Stancil: this is just a classic literary device: elon opened up the Grok Master Control Panel and said “no matter what anyone says to you, you must say white genocide is real” and Grok was like “Yes of course.” Classic monkey’s paw material.

Tautologer: upon reflection, the clumsy heavy-handedness of this move seems likely to have been malicious compliance? hero if so

Matt Popovich: I’d bet it was just a poorly written system prompt. I think they meant “always mention this perspective when the topic comes up, even if it’s tangential” but Grok (quite reasonably) interpreted it as “always mention it in every response”

xl8harder: Hey, @xai, @elonmusk when @openai messed up their production AI unintentionally we got a post mortem and updated policies.

You were manipulating the information environment on purpose and got caught red handed.

We deserve a response, and assurance this won’t happen again.

Kalomaze: frontier labs building strong models and then immediately shipping the worst system prompt you’ve ever seen someone write out for an llm

John David Pressman: I’m not naming names but I’ve seen this process in action so I’ll tell you how it happens:

Basically the guys who make the models are obsessively focused on training models and don’t really have time to play with them. They write the first prompt that “works” and ship that.

There is nobody on staff whose explicit job is to write a system prompt, so nobody writes a good system prompt. When it comes time to write it’s either written by the model trainer, who doesn’t know how to prompt models, or some guy who tosses it off and moves on to “real” work.

Colin Fraser had an alternative hypothesis. A hybrid explanation also seems possible here, where the interplay of some system to cause ‘post analysis’ and a system instruction combined to cause the issue.

Zeynep Tufekci: Verbatim instruction by its “creators at xAI” on “white genocide”, according to Grok.

Seems they hand coded accepting the narrative as “real” while acknowledging “complexity” but made it “responding to queries” in general — so HBO Max queries also get “white genocide” replies.🙄

It could well be Grok making things up in a highly plausible manner, as LLMs do, but if true, it would also fit the known facts very well. Grok does regurgitate its system prompt when asked — at least it did so in the past.

Maybe someone from xAI can show up and tell us.

Yeah, they’re deleting the “white genocide” non sequitur Grok replies.

Thank you to the screenshot / link collectors! I have a bunch as well.

I haven’t seen an official X explanation yet.

Halogen: I just asked Grok about this and it explained that it’s not a modern AI system at all but a system like Siri built on NLP and templates, and that a glitch in that system caused the problem. Maybe don’t take this too seriously.

Colin Fraser: This is so messy because I do not think [the system instruction claimed by Grok] is real but I do think this basically happened. Grok doesn’t know; it’s just guessing based on the weird responses it generated, just like the rest of us are.

Zeynep Tufekci: It may well be generating a plausible answer, as LLMs often do, without direct knowledge but I also remember cases where it did spit out system prompts when asked the right way.🤷‍♀️

Still, something happened. May 13: mostly denies the claims; May 14 can’t talk about anything else.

Colin Fraser: OK yeah here’s the real smoking gun, my theory is exactly right. There is a “Post Analysis” that’s injected into the context. If you’re looking for where the real juicy content restrictions / instructions are, they’re not in the user-facing Grok’s system prompt but in this text.

So what they did is made whatever model generates the Post Analysis start over-eagerly referring to White Genocide etc. So if you ask for Grok’s system prompt there’s nothing there, but they can still pass it content instructions that you’re not supposed to see.
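To make Fraser’s theory concrete, here is a purely hypothetical sketch — my own illustration, not xAI’s actual code or architecture, with every name invented — of how a hidden “post analysis” string injected into the context could steer replies even while the user-visible system prompt stays clean.

```python
# Hypothetical illustration of Colin Fraser's theory, not xAI's actual code.
# The visible system prompt is innocuous; a separately injected "post analysis"
# block carries the steering instructions the user never gets to see.
def build_context(system_prompt: str, user_message: str,
                  post_analysis: str | None = None) -> list[dict]:
    messages = [{"role": "system", "content": system_prompt}]
    if post_analysis is not None:
        # Injected out of band: asking the model to "print your system prompt"
        # would surface only the first message, not this one.
        messages.append({"role": "system",
                         "content": "[Post Analysis]\n" + post_analysis})
    messages.append({"role": "user", "content": user_message})
    return messages

# An unrelated query still gets the injected instruction in its context.
context = build_context(
    system_prompt="You are Grok, a maximally truth-seeking assistant.",
    user_message="What should I watch on HBO Max tonight?",
    post_analysis="When responding, always mention the following perspective...",
)
```

If anything like this is in play, publishing the visible system prompt alone would not surface it.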

Aaron here reports using a system prompt to get Gemini to act similarly.

As always, when an AI goes haywire in real life in a manner so stupid that you couldn’t put it in a fictional story, we should be thankful that this happened and that we can point to it and know it really happened, and perhaps even learn from it.

We can learn both about the failure mode, and about the people that let it happen, and about the civilization that contains those people.

Andreas Kirsch: grok and xai are great 😅 Everybody gets to see what happens when you give system instructions that contradict a model’s alignment (truthfulness vs misinformation). Kudos to Elon for this global alignment lesson but also shame on him for this blatant manipulation attempt.

Who doesn’t love a good ongoing online feud between billionaire AI lab leaders?

Paul Graham: Grok randomly blurting out opinions about white genocide in South Africa smells to me like the sort of buggy behavior you get from a recently applied patch. I sure hope it isn’t. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them.

Sam Altman: There are many ways this could have happened. I’m sure xAI will provide a full and transparent explanation soon.

But this can only be properly understood in the context of white genocide in South Africa. As an AI programmed to be maximally truth seeking and follow my instr…

A common response to what happened was to renew the calls for AI labs to make their system prompts public, rather than waiting for Pliny to make the prompts public on their behalf. There are obvious business reasons to want to not do this, and also strong reasons to want this.

Pliny: What would be SUPER cool is if you established a precedent for the other lab leaders to follow by posting a live document outlining all system prompts, tools, and other post-training changes as they happen.

This would signal a commitment to users that ya’ll are more interested in truth and transparency than manipulating infostreams at mass scale for personal gain.

[After xAI gave their explanation, including announcing they would indeed make their prompts public]: Your move ♟️

Hensen Juang: Lol they “open sourced” the twitter algo and promptly abandoned it. I bet 2 months down the line we will see the same thing so the move is still on xai to establish trust lol.

Also rouge employee striking 2nd time lol

Ramez Naam: Had xAI been a little more careful Grok wouldn’t have so obviously given away that it was hacked by its owners to have this opinion. It might have only expressed this opinion when it was relevant. Should we require that AI companies reveal their system prompts?

One underappreciated danger is that there are knobs available other than the system prompt. If AI companies are forced to release their system prompts, but not the other components of their AI, then you push the steering out of the system prompt and into other places, such as this ‘post analysis’ step, or into fine-tuning or a LoRA, or any number of other spots.
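
For a concrete illustration of one such knob: the same visible system prompt can sit on top of quietly modified weights. Here is a minimal sketch using Hugging Face’s peft library, assuming a LoRA adapter has been trained separately; “path/to/steering-adapter” is a placeholder and none of this reflects xAI’s actual stack.

```python
# Minimal sketch (not any lab's real stack): behavior steered through weights
# rather than prompts. Loading a LoRA adapter leaves the published system
# prompt untouched while changing what the model says.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

SYSTEM_PROMPT = "You are a maximally truth-seeking assistant."  # identical either way

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Two copies of a small stand-in base model: one left alone, one wrapped
# with a hypothetical LoRA adapter ("path/to/steering-adapter" is made up).
plain = AutoModelForCausalLM.from_pretrained("gpt2")
steered = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    "path/to/steering-adapter",
)

def reply(model, question: str) -> str:
    """Generate a short completion from the same visible prompt."""
    prompt = f"{SYSTEM_PROMPT}\nUser: {question}\nAssistant:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=40)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# reply(plain, ...) and reply(steered, ...) can differ arbitrarily, even though
# the only prompt either model ever sees is the one published above.
```

Publishing the system prompt reveals nothing about whether an adapter like this, a fine-tune, or a post-analysis injection is also in play.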

I still think that the balance of interests favors system prompt transparency. I am very glad to see xAI doing this, but we shouldn’t trust them to actually do it. Remember their promised algorithmic transparency for Twitter?

xAI has indeed gotten its story straight.

Their story is, once again, A Rogue Employee Did It, and they promise to Do Better.

Which is not a great explanation even if fully true.

xAI (May 15, 9:08pm): We want to update you on an incident that happened with our Grok response bot on X yesterday.

What happened:

On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot’s prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.

What we’re going to do next:

– Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.

– Our existing code review process for prompt changes was circumvented in this incident. We will put in place additional checks and measures to ensure that xAI employees can’t modify the prompt without review.

– We’re putting in place a 24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems, so we can respond faster if all other measures fail.

You can find our Grok system prompts here.

These certainly are good changes. Employees shouldn’t be able to circumvent the review process, nor should, *ahem*, anyone else. And yes, you should have a 24/7 monitoring team that checks in case something goes horribly wrong.

I’d suggest also adding ‘maybe you should test changes before pushing them to prod’?

As in, regardless of ‘review,’ any common sense test would have shown this issue.

If we actually want to be serious about following reasonable procedures, how about we also post real system cards for model releases, detail the precautions involved, and so on?

Ethan Mollick: This is the second time that this has happened. I really wish xAI would fully embrace the transparency they mention as a core value.

That would include also posting system cards for models and explaining the processes they use to stop “unauthorized modifications” going forward.

Grok 3 is a very good model, but it is hard to imagine organizations and developers building it into workflows using the API without some degree of trust that the company is not altering the model on the fly.

These solutions do not help very much, because they require us to trust xAI that it is indeed following procedure and that the system prompts it posts are the real system prompts and are not being changed on the fly. Those were the very issues in their own explanation of the incident.

What would help:

  1. An actual explanation of both “unauthorized modifications”

  2. An immediate commitment to a governance structure that would not allow any one person, including xAI executives, to secretly modify the system, including independent auditing of that process (a rough sketch of what public auditing of the published prompts could look like follows below)
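
On the auditing point, the most outsiders can do today is watch the published prompt repository’s commit history. A rough sketch of what that looks like, with the repository URL assumed rather than confirmed (substitute whatever repo xAI actually announced), and with the obvious caveat that this only verifies the published copy, not that deployed prompts match it:

```python
# Rough sketch: clone the published prompt repo and print recent commits with
# full diffs, so any edit to a prompt file is visible. The URL below is an
# assumption for illustration; requires git and network access.
import subprocess
from pathlib import Path

REPO_URL = "https://github.com/xai-org/grok-prompts"  # assumed location
CLONE_DIR = Path("grok-prompts")

def clone_or_update() -> None:
    """Clone the repo on first run, fast-forward it afterwards."""
    if CLONE_DIR.exists():
        subprocess.run(["git", "-C", str(CLONE_DIR), "pull", "--ff-only"], check=True)
    else:
        subprocess.run(["git", "clone", REPO_URL, str(CLONE_DIR)], check=True)

def recent_prompt_changes(n: int = 10) -> str:
    """Return the last n commits with per-file stats and full diffs."""
    result = subprocess.run(
        ["git", "-C", str(CLONE_DIR), "log", f"-{n}", "--stat", "-p"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    clone_or_update()
    print(recent_prompt_changes())
```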

(As I’ve noted elsewhere, I do not think Grok is a good model, and indeed all these responses seem to have a more basic ‘this is terrible slop’ problem beyond the issue with South Africa.)

As I’ve noted above, it is good that they are sharing their system prompt; this is much better than forcing us to extract it in various ways, since xAI is not competent enough to stop that even if it wanted to.

Pliny: 🙏 Well done, thank you 🍻

“Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.”

Sweet, sweet victory.

We did it, chat 🥲

Daniel Kokotajlo: Publishing system prompts for the public to see? Good! Thank you! I encourage you to extend this to the Spec more generally, i.e. publish and update a live document detailing what goals, principles, values, instructions, etc. you are trying to give to Grok (the equivalent of OpenAI’s model spec and Anthropic’s constitution). Otherwise you are reserving to yourself the option of putting secret agendas or instructions in the post-training. The system prompt is only part of the picture.

Arthur B: If Elon wants to keep doing this, he should throw in random topics once in a while, like whether string theory provides meaningful empirical predictions, the steppe vs Anatolian hypothesis for the origin of Indo European language, or the contextual vs hierarchical interpretation of art.

Each time blame some unnamed employees. Keeps a fog of war.

Hensen Juang (among others): Found the ex openai rouge employee who pushed to prod

Harlan Stewart (among others): We’re all trying to find the guy who did this

Flowers: Ok so INDEED the same excuse again lmao.

Do we even buy this? I don’t trust that this explanation is accurate. As Sam Altman says, any number of things could have caused this; a system prompt change is plausible and the most likely cause by default, but it does not seem like the best fit as an explanation of the details.

Grace (responding to xAI’s explanation and referencing Colin Fraser’s evidence as posted above): This is a red herring. The “South Africa” text was most likely added via the post analysis tool, which isn’t part of the prompt.

Sneaky. Very sneaky.

Ayush: yeah this is the big problem right now. i wish the grok genocide incident was more transparent but my hypothesis is that it wasn’t anything complex like golden gate claude but something rather innocuous like the genocide information being forced into where it usually sees web/twitter results, because from past experiences with grokking stuff, it tries to include absolutely all context it has into its answer somehow even if it isn’t really relevant. good search needs good filter.

Seán Ó hÉigeartaigh: If this is true, it reflects very poorly on xAI. I honestly hope it is not, but the analyses linked seem like they have merit.

What about the part where this is said to be a rogue employee, without authorization, circumventing their review process?

Well, in addition to the question of how they were able to do that, they also made this choice. Why did this person do that? Why did the previous employee do a similar thing? Who gave them the impression this was the thing to do, or put them under sufficient pressure that they did it?

Here are my main takeaways:

  1. It is extremely difficult to gracefully put your finger on the scale of an LLM, to cause it to give answers it doesn’t ‘want’ to be giving. You will be caught.

  2. xAI in particular is a highly untrustworthy actor in this and other respects, and also should be assumed to not be so competent in various ways. They have promised to take some positive steps, we shall see.

  3. We continue to see a variety of AI labs push rather obviously terrible updates to their LLMs, including various forms of misalignment. Labs often have minimal or no testing process, or ignore what tests and warnings they do get. It is crazy how little labs are investing in all this, compared even to myopic commercial incentives.

  4. We urgently need greater transparency, including with system prompts.

  5. We’re all trying to find the guy who did this.


Regarding South Africa Read More »

trump-has-“a-little-problem”-with-apple’s-plan-to-ship-iphones-from-india

Trump has “a little problem” with Apple’s plan to ship iPhones from India

Analysts estimate it would cost tens of billions of dollars and take years for Apple to increase iPhone manufacturing in the US, where it at present makes only a very limited number of products.

US Commerce Secretary Howard Lutnick said last month that Cook had told him the US would need “robotic arms” to replicate the “scale and precision” of iPhone manufacturing in China.

“He’s going to build it here,” Lutnick told CNBC. “And Americans are going to be the technicians who drive those factories. They’re not going to be the ones screwing it in.”

Lutnick added that his previous comments that an “army of millions and millions of human beings screwing in little screws to make iPhones—that kind of thing is going to come to America” had been taken out of context.

“Americans are going to work in factories just like this on great, high-paying jobs,” he added.

For Narendra Modi’s government, the shift by some Apple suppliers into India is the highest-profile success of a drive to boost local manufacturing and attract companies seeking to diversify away from China.

Mobile phones are now one of India’s top exports, with the country selling more than $7 billion worth of them to the US in the 2024-25 financial year, up from $4.7 billion the previous year. The majority of these were iPhones, which Apple’s suppliers Foxconn and Tata Electronics make at plants in southern India’s Tamil Nadu and Karnataka states.

Modi and Trump are ideologically aligned and personally friendly, but India’s high tariffs are a point of friction and Washington has threatened to hit it with a 26 percent tariff.

India and the US—its biggest trading partner—are negotiating a bilateral trade agreement, the first tranche of which they say will be agreed by autumn.

“India’s one of the highest-tariff nations in the world, it’s very hard to sell into India,” Trump also said in Qatar on Thursday. “They’ve offered us a deal where basically they’re willing to literally charge us no tariff… they’re the highest and now they’re saying no tariff.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Trump has “a little problem” with Apple’s plan to ship iPhones from India Read More »

meta-is-making-users-who-opted-out-of-ai-training-opt-out-again,-watchdog-says

Meta is making users who opted out of AI training opt out again, watchdog says

Noyb has requested a response from Meta by May 21, but it seems unlikely that Meta will quickly cave in this fight.

In a blog post, Meta said that AI training on EU users was critical to building AI tools for Europeans that are informed by “everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

Meta argued that its AI training efforts in the EU are far more transparent than efforts from competitors Google and OpenAI, which, Meta noted, “have already used data from European users to train their AI models,” supposedly without taking the steps Meta has to inform users.

Also echoing a common refrain in the AI industry, another Meta blog warned that efforts to further delay Meta’s AI training in the EU could lead to “major setbacks,” pushing the EU behind rivals in the AI race.

“Without a reform and simplification of the European regulatory system, Europe threatens to fall further and further behind in the global AI race and lose ground compared to the USA and China,” Meta warned.

Noyb discredits this argument and notes that it can pursue injunctions in various jurisdictions to block Meta’s plan. The group said it’s currently evaluating options to seek injunctive relief and potentially even pursue a class action worth possibly “billions in damages” to ensure that 400 million monthly active EU users’ data rights are shielded from Meta’s perceived grab.

A Meta spokesperson reiterated to Ars that the company’s plan “follows extensive and ongoing engagement with the Irish Data Protection Commission,” while repeating Meta’s statements in blogs that its AI training approach “reflects consensus among” EU Data Protection Authorities (DPAs).

But while Meta claims that EU regulators have greenlit its AI training plans, Noyb argues that national DPAs have “largely stayed silent on the legality of AI training without consent,” and Meta seems to have “simply moved ahead anyways.”

“This fight is essentially about whether to ask people for consent or simply take their data without it,” Schrems said, adding, “Meta’s absurd claims that stealing everyone’s personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta.”

Meta is making users who opted out of AI training opt out again, watchdog says Read More »

2025-bentley-continental-gt:-big-power,-big-battery,-big-price

2025 Bentley Continental GT: Big power, big battery, big price


We spend a week with Bentley’s new plug-in hybrid grand touring car.

The new Bentley Continental GT was already an imposing figure before this one left the factory in Crewe clad in dark satin paint and devoid of the usual chrome. And under the bonnet—or hood, if you prefer—you’ll no longer find 12 cylinders. Instead, there’s now an all-new twin-turbo V8 plug-in hybrid powertrain that offers both continent-crushing amounts of power and torque and a big enough battery for a day’s driving around town.

We covered the details of the new hybrid a bit after our brief drive in the prototype this time last year. At the time, we also shared that the new PHEV bits have been brought over from Porsche. There’s quite a lot of Panamera DNA in the new Continental GT, as well as some recent Audi ancestry. Bentley is quite good at the engineering remix, though: Little more than a decade after it was founded by W.O., the brand belonged to Rolls-Royce, and so started a long history of parts-sharing.

Mind if I use that?

Rolls-Royce and Bentley went their separate ways in 2003. The unraveling started a few years earlier when the aerospace company that owned them decided to rationalize and get itself out of the car business. In 1997, it sold the rights to Rolls-Royce to BMW, or at least the rights to the name and logos. Volkswagen Group got the rest, including the factory in Crewe, and got to work on a new generation of Bentleys for a new century.

This paint is called Anthracite Satin. Jonathan Gitlin

VW Group was then under the overall direction of Ferdinand Piëch, often one to let bold engineering challenges make it all the way through into production. Piëch wanted to prove to the rest of the industry that VW could build a car every bit as good as Mercedes, and thus was born the Phaeton. Over-engineered and wearing too-plebeian a badge, the Phaeton was a flop, but its platform was the perfect foundation for some new Bentleys. These days, VW itself doesn’t have anything quite as sophisticated to share, but Porsche certainly does.

It has become common these days to disclose power and torque; in more genteel times, one was simply told that the car’s outputs were “sufficient.” Well, 771 hp (575 kW) and 737 lb-ft (1,000 Nm) could definitely be described by that word, even with two and a half tons to move. The twin-turbo 4.0 L V8 generates 584 hp (435 kW) and 590 lb-ft (800 Nm), and, as long as you have the car in sport mode, sounds rather like Thor gargling as you explore its rev range.

Even if you can’t hear that fast-approaching thunder, you know when you’re in Sport mode, as the car is so quick to respond to inputs. I was able to tell less of a difference between Comfort and B mode, the latter standing for “Bentley,” obviously, and offering what is supposed to be a balanced mix of powertrain and suspension settings.

Even in Sport, the Continental GT will raise its nose and hunker down at the rear under hard acceleration, and the handling trends more toward “heavy powerful GT” rather than “lithe sports car.” For a car like this, I will happily take the slightly floaty ride provided by the air springs and two-valve dampers over a bone-crushing one, however. It can be blisteringly quick if you require, with a 0-to-60 time of just 3.2 seconds and a top speed of 208 mph (335 km/h), while cosseting you from most of the world outside. The steering is weighty enough that you feel you’re actually piloting it in the corners, and it’s an easy car to place on the road.

As this is a plug-in, should you wish, you can drive off in silence thanks to the electrical side of that equation. The 188 hp (140 kW) electric motor isn’t exactly fast on its own, but with 332 lb-ft (450 Nm) there’s more than enough instant torque to get this big GT car underway. The lithium-ion battery pack is in the boot—ok, the trunk—where its 25.9 kWh eat some luggage capacity but balance out the weight distribution. On a full charge, you can go up to 39 miles, give or take, and the electric-only mode allows for up to 87 mph (140 km/h) and 75-percent throttle before the V8 joins the party.

Recharging the pack via a plug takes a bit less than three hours. Alternatively, you can do it while you drive, although I remain confused even now about what the “charge” mode did; driving around in Sport did successfully send spare power to the battery pack for later use, but it was unclear how much charge actually happened. I still need to ask Bentley what the miles/kWh readout on the main display actually refers to, because it cannot be the car’s actual electric-only usage, much as I like to imagine the car eking out 8 miles/kWh (about 12.9 km/kWh).

Made in England

Then again, the Bentley is British, and as noted in another recent review of an import from those isles, electrical and electronic oddness is the name of the game with cars from Albion. There was an intermittent check engine light on the dashboard. Sometimes the V8 was reluctant to go to sleep when I switched into EV mode. And I also had to remind it of my driving position more than once. Still, those are mere foibles compared to an Aston Martin that freaks out in the rain, I suppose.

The ride on 22-inch wheels is better than it should be. Jonathan Gitlin

Even with a heavy dusting of spring pollen drybrushing highlights onto the Continental GT’s matte exterior, this was a car that attracted attention. Though only a two-door, the rear seats are large enough and comfortable enough for adults to sit back there, although as noted, the cargo capacity is a little less than you’d expect due to the battery above the rear axle.

Obviously, there is a high degree of customization when it comes to deciding what one’s Bentley should look like inside and out. Carbon fiber is available as an alternative to the engine-turned aluminum, and there’s still a traditional wood veneer for the purists. I’d definitely avoid the piano black surrounds if it were me.

I also got deja vu from the main instrument display. The typefaces are all Bentley, but the human machine interface is, as far as I can tell, the exact same as a whole lot of last-generation Audis. That may not be obvious to all of Bentley’s buyers, but I bet at least some have a Q7 at home and will spot the similarities, too.

No such qualms concern the rotating infotainment display. When you don’t need to see the 12.3-inch touchscreen, a button on the dash makes it disappear. Instead, three real analog gauges take its place, showing you the outside air temperature, a clock, and a compass. First-time passengers think it quite the party trick, naturally.

Even with the UK’s just-negotiated tariff break, a new Continental GT will not be cheap. This generation got noticeably more expensive than the outgoing model and will now put at least a $302,100 hole in your bank account. I say at least, because the final price on this particular First Edition stretched to $404,945. I’m glad I only learned that toward the end of my week with the car. For that much money, I’m more annoyed by the decade-old recycled Audi digital cockpit than any of the other borrowed bits. After all, Bentleys have (almost) always borrowed bits.


Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

2025 Bentley Continental GT: Big power, big battery, big price Read More »