Author name: Shannon Garcia


To help AIs understand the world, researchers put them in a robot


There’s a difference between knowing a word and knowing a concept.

Large language models like ChatGPT display conversational skills, but the problem is they don’t really understand the words they use. They are systems that interact with data obtained from the real world, not with the real world itself. Humans, on the other hand, associate language with experiences. We know what the word “hot” means because we’ve been burned at some point in our lives.

Is it possible to get an AI to achieve a human-like understanding of language? A team of researchers at the Okinawa Institute of Science and Technology built a brain-inspired AI model comprising multiple neural networks. The AI was very limited—it could learn a total of just five nouns and eight verbs. But their AI seems to have learned more than just those words; it learned the concepts behind them.

Babysitting robotic arms

“The inspiration for our model came from developmental psychology. We tried to emulate how infants learn and develop language,” says Prasanna Vijayaraghavan, a researcher at the Okinawa Institute of Science and Technology and the lead author of the study.

The idea of teaching AIs the same way we teach little babies is not new—it has been applied to standard neural nets that associate words with visuals. Researchers have also tried teaching an AI using a video feed from a GoPro strapped to a human baby. The problem is that babies do way more than just associate items with words when they learn. They touch everything—grasping things, manipulating them, throwing stuff around—and this way, they learn to think and plan their actions in language. An abstract AI model couldn’t do any of that, so Vijayaraghavan’s team gave theirs an embodied experience: their AI was trained in an actual robot that could interact with the world.

Vijayaraghavan’s robot was a fairly simple system with an arm and a gripper that could pick objects up and move them around. Vision was provided by a simple RGB camera feeding video at a fairly crude 64×64 pixel resolution.

The robot and the camera were placed in a workspace, in front of a white table with blocks painted green, yellow, red, purple, and blue. The robot’s task was to manipulate those blocks in response to simple prompts like “move red left,” “move blue right,” or “put red on blue.” All that didn’t seem particularly challenging. What was challenging, though, was building an AI that could process all those words and movements in a manner similar to humans. “I don’t want to say we tried to make the system biologically plausible,” Vijayaraghavan told Ars. “Let’s say we tried to draw inspiration from the human brain.”

Chasing free energy

The starting point for Vijayaraghavan’s team was the free energy principle, a hypothesis that the brain constantly makes predictions about the world based on internal models, then updates these predictions based on sensory input. The idea is that we first think of an action plan to achieve a desired goal, and then this plan is updated in real time based on what we experience during execution. This goal-directed planning scheme, if the hypothesis is correct, governs everything we do, from picking up a cup of coffee to landing a dream job.
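The predict-then-correct loop at the heart of the free energy principle can be sketched in a few lines. This is a didactic toy of the general idea (predictive coding with a scalar estimate), not the team's model:

```python
# Toy illustration of the free energy principle's predict-then-update loop:
# an agent keeps an internal estimate of a hidden quantity, predicts what it
# will sense, and corrects the estimate in proportion to the prediction error.
# (A didactic sketch, not the model from the paper.)

def predictive_update(estimate, observation, learning_rate=0.3):
    """One step: the prediction error drives the update of the internal model."""
    error = observation - estimate      # surprise: sensed vs. predicted
    return estimate + learning_rate * error

def track(observations, initial_estimate=0.0):
    """Run the loop over a stream of sensory observations."""
    estimate = initial_estimate
    history = []
    for obs in observations:
        estimate = predictive_update(estimate, obs)
        history.append(estimate)
    return history

# The hidden world state is 10.0; repeated readings let the estimate converge.
states = track([10.0] * 20)
print(round(states[-1], 3))  # converges toward 10.0
```

Each iteration shrinks the gap between prediction and observation, which is the "plan updated in real time based on what we experience" in miniature.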

All that is closely intertwined with language. Neuroscientists at the University of Parma found that motor areas in the brain were activated when participants in their study listened to action-related sentences. To emulate that in a robot, Vijayaraghavan used four neural networks working in a closely interconnected system. The first was responsible for processing visual data coming from the camera. It was tightly integrated with a second neural net that handled proprioception: all the processes that ensured the robot was aware of its position and the movement of its body. This second neural net also built internal models of the actions necessary to manipulate blocks on the table. Those two neural nets were additionally hooked up to visual memory and attention modules that enabled them to reliably focus on the chosen object and separate it from the image’s background.

The third neural net was relatively simple and processed language using vectorized representations of those “move red right” sentences. Finally, the fourth neural net worked as an associative layer and predicted the output of the previous three at every time step. “When we do an action, we don’t always have to verbalize it, but we have this verbalization in our minds at some point,” Vijayaraghavan says. The AI he and his team built was meant to do just that: seamlessly connect language, proprioception, action planning, and vision.

When the robotic brain was up and running, they started teaching it some of the possible combinations of commands and sequences of movements. But they didn’t teach it all of them.

The birth of compositionality

In 2016, Brenden Lake, a professor of psychology and data science, published a paper in which his team named a set of competencies machines need to master to truly learn and think like humans. One of them was compositionality: the ability to compose or decompose a whole into parts that can be reused. This reuse lets them generalize acquired knowledge to new tasks and situations. “The compositionality phase is when children learn to combine words to explain things. They [initially] learn the names of objects, the names of actions, but those are just single words. When they learn this compositionality concept, their ability to communicate kind of explodes,” Vijayaraghavan explains.

The AI his team built was made for this exact purpose: to see if it would develop compositionality. And it did.

Once the robot learned how certain commands and actions were connected, it also learned to generalize that knowledge to execute commands it had never heard before, recognizing the names of actions it had not performed and then performing them on combinations of blocks it had never seen. Vijayaraghavan’s AI figured out the concept of moving something to the right or the left or putting an item on top of something. It could also combine words to name previously unseen actions, like putting a blue block on a red one.

While teaching robots to extract concepts from language has been done before, those efforts were focused on making them understand how words were used to describe visuals. Vijayaraghavan’s team built on that to include proprioception and action planning, basically adding a layer that integrated sense and movement into the way the robot made sense of the world.

But some issues have yet to be overcome. The AI had a very limited workspace. There were only a few objects, all of a single cubical shape. The vocabulary included only the names of colors and actions, with no modifiers, adjectives, or adverbs. Finally, the robot had to learn around 80 percent of all possible combinations of nouns and verbs before it could generalize well to the remaining 20 percent. Its performance was worse when those ratios dropped to 60/40 and 40/60.
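To make that training ratio concrete, here is a back-of-the-envelope sketch. The vocabulary below and the flat verb–noun pairing are stand-ins of my own; the paper's actual command set, with two-object commands like "put red on blue," is richer:

```python
import itertools
import random

# Hypothetical vocabulary standing in for the paper's five nouns and eight
# verbs; the real command grammar is richer than this flat verb-noun pairing.
nouns = ["red", "blue", "green", "yellow", "purple"]
verbs = ["move-left", "move-right", "move-up", "move-down",
         "push", "pull", "lift", "stack"]

combinations = list(itertools.product(verbs, nouns))  # 8 x 5 = 40 commands

def split(combos, train_fraction, seed=0):
    """Hold out a fraction of combinations to test compositional generalization."""
    rng = random.Random(seed)
    shuffled = combos[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, test = split(combinations, 0.8)
print(len(combinations), len(train), len(test))  # 40 32 8
```

A model that has only memorized the 32 training commands scores zero on the held-out 8; one that has learned the underlying concepts can execute them on the first try, which is the generalization the study measured.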

But it’s possible that just a bit more computing power could fix this. “What we had for this study was a single RTX 3090 GPU, so with the latest generation GPU, we could solve a lot of those issues,” Vijayaraghavan argued. The team hopes that adding more words and more actions won’t require a dramatic increase in computing power. “We want to scale the system up. We have a humanoid robot with cameras in its head and two hands that can do way more than a single robotic arm. So that’s the next step: using it in the real world with real world robots,” Vijayaraghavan said.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adp0751


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Bogus research is undermining good science, slowing lifesaving research

In 2022, Byrne and colleagues, including two of us, found that suspect genetics research, despite not immediately affecting patient care, informs scientists’ work, including clinical trials. But publishers are often slow to retract tainted papers, even when alerted to obvious fraud. We found that 97 percent of the 712 problematic genetics research articles we identified remained uncorrected.

Potential solutions

The Cochrane Collaboration has a policy excluding suspect studies from its analyses of medical evidence and is developing a tool to spot problematic medical trials. And publishers have begun to share data and technologies among themselves to combat fraud, including image fraud.

Technology startups are also offering help. The website Argos, launched in September 2024 by Scitility, an alert service based in Sparks, Nevada, allows authors to check collaborators for retractions or misconduct. Morressier, a scientific conference and communications company in Berlin, offers research integrity tools. Paper-checking tools include Signals, by London-based Research Signals, and Clear Skies’ Papermill Alarm.

But Alam acknowledges that the fight against paper mills won’t be won as long as the booming demand for papers remains.

Today’s commercial publishing is part of the problem, Byrne said. Cleaning up the literature is a vast and expensive undertaking. “Either we have to monetize corrections such that publishers are paid for their work, or forget the publishers and do it ourselves,” she said.

There’s a fundamental bias in for-profit publishing: “We pay them for accepting papers,” said Bodo Stern, a former editor of the journal Cell and chief of Strategic Initiatives at Howard Hughes Medical Institute, a nonprofit research organization and funder in Chevy Chase, Maryland. With more than 50,000 journals on the market, bad papers shopped around long enough eventually find a home, Stern said.

To prevent this, we could stop paying journals for accepting papers and look at them as public utilities that serve a greater good. “We should pay for transparent and rigorous quality-control mechanisms,” he said.

Peer review, meanwhile, “should be recognized as a true scholarly product, just like the original article,” Stern said. And journals should make all peer-review reports publicly available, even for manuscripts they turn down.

This article is republished from The Conversation under a Creative Commons license. This is a condensed version. To learn more about how fraudsters around the globe use paper mills to enrich themselves and harm scientific research, read the full version.

Frederik Joelving is a contributing editor at Retraction Watch; Cyril Labbé is a professor of computer science at the Université Grenoble Alpes (UGA); and Guillaume Cabanac is a professor of computer science at Institut de Recherche en Informatique de Toulouse.



Treasury official retires after clash with DOGE over access to payment system

“This is a mechanical job—they pay Social Security benefits, they pay vendors, whatever. It’s not one where there’s a role for nonmechanical things, at least from the career standpoint. Your whole job is to pay the bills as they’re due,” Mazur was quoted as saying. “It’s never been used in a way to execute a partisan agenda… You have to really put bad intentions in place for that to be the case.”

The Trump administration previously issued an order to freeze funding for a wide range of government programs, but rescinded the order after two days of protest and a judge’s ruling that temporarily blocked the funding freeze.

Trump ordered cooperation with DOGE

The Trump executive order establishing DOGE took the existing United States Digital Service and renamed it the United States DOGE Service. It’s part of the Executive Office of the President and is tasked with “modernizing Federal technology and software to maximize governmental efficiency and productivity.”

Trump’s order said that federal agencies will have to collaborate with DOGE. “Among other things, the USDS Administrator shall work with Agency Heads to promote inter-operability between agency networks and systems, ensure data integrity, and facilitate responsible data collection and synchronization,” the order said. “Agency Heads shall take all necessary steps, in coordination with the USDS Administrator and to the maximum extent consistent with law, to ensure USDS has full and prompt access to all unclassified agency records, software systems, and IT systems. USDS shall adhere to rigorous data protection standards.”

The Post writes that “Musk has sought to exert sweeping control over the inner workings of the US government, installing longtime surrogates at several agencies, including the Office of Personnel Management, which essentially handles federal human resources, and the General Services Administration.”

On Thursday, Musk visited the General Services Administration headquarters in Washington, DC, The New York Times reported. The Department of Government Efficiency’s account on X stated earlier this week that the GSA had “terminated three leases of mostly empty office space” for a savings of $1.6 million and that more cuts are planned. In another post, DOGE claimed it “is saving the Federal Government approx. $1 billion/day, mostly from stopping the hiring of people into unnecessary positions, deletion of DEI and stopping improper payments to foreign organizations, all consistent with the President’s Executive Orders.”

“Mr. Musk’s visit to the General Services Administration could presage more cost-cutting efforts focused on federal real estate,” the Times wrote. “The agency also plays a role in federal contracting and in providing technology services across the federal government.”



“Just give me the f***ing links!”—Cursing disables Google’s AI overviews

If you search Google for a way to turn off the company’s AI-powered search results, you may well get an AI Overview telling you that AI Overviews can’t be directly disabled in Google Search. But if you instead ask Google how to turn off “fucking Google AI results,” you’ll get a standard set of useful web suggestions without any AI Overview at the top.

The existence of this “curse to disable Google AI” trick has been making the rounds on social media in recent days, and it holds up in Ars’ own testing. For instance, when searching for “how do you turn off [adjective] Google AI results,” a variety of curse word adjectives reliably disabled the AI Overviews, while adjectives like “dumb” or “lousy” did not. Inserting curse words randomly at any point in the search query seems to have a similar effect.

There’s long been evidence that Google’s Gemini AI system tries to avoid swearing if at all possible, which might help explain why AI Overviews balk at queries that contain curses. Users should also keep in mind, though, that the actual web link results to a query can change significantly when curse words are inserted, especially if SafeSearch is turned off.



Here’s why the tech industry gets excited about sports car racing


It would take IMSA 700 years to drive to Mars

Racing has always been used to improve the breed, but now mostly with software.

NASA worm logo with race imagery over a backdrop of Mars

Credit: Aurich Lawson | Getty Images | NASA


DAYTONA BEACH—Last week, ahead of the annual Rolex 24 at Daytona and the start of the North American road racing season, IMSA (the sport’s organizer) held a tech symposium across the road from the vast speedway at Embry-Riddle Aeronautical University. Last year, panelists, including Crowdstrike’s CSO, explained the draw of racing to their employers; this time, the organizations represented included NASA, Michelin, AMD, and Microsoft. And while they were all there to talk about racing, it seems everyone was also there to talk about simulation and AI.

I’ve long maintained that endurance racing, where grids of prototypes and road car-based racers compete over long durations—24 hours, for example—is the most relevant form of motorsport, the one that makes road cars better. Formula 1 has budgets and an audience to dwarf all others, and there’s no doubt about the level of talent and commitment required to triumph in that arena. The Indy 500 might have more history. And rallying looks like the hardest challenge for both humans and machines.

But your car owes its disc brakes to endurance racing, plus its dual-clutch transmission, if it’s one of the increasing number of cars fitted with such. But let’s not overblow it. Over the years, budgets have had to be reined in for the health of the sport. That—plus a desire for parity among the teams so that no one clever idea runs away with the series—means there are plenty of spec or controlled components on a current endurance racer. Direct technology transfer, then, happens less and less often—at least in terms of new mechanical bits or bobs you might find inside your next car.

Software has become a new competitive advantage for the teams that race hybrid sports prototypes from Acura, BMW, Cadillac, Porsche, and Lamborghini, just as it is between teams in Formula E.

But this year’s symposium shone a light on a different area of tech transfer, where Microsoft or NASA can use the vast streams of data that pour out of a 60-car, 24-hour race to build more accurate simulations and AI tools—maybe even ones that will babysit a crewed mission to Mars.

Sorry, did you say Mars?

“Critically, it takes light 20 minutes to make that trip, which has some really unfortunate operational impacts,” said Ian Maddox of NASA’s Marshall Space Flight Center’s Habitation office. A 40-minute delay between asking a question and getting an answer wouldn’t work for a team trying to win the Rolex 24, and “it certainly isn’t going to work for us,” he said.

“And so we’re placed in—I’ll be frank—the really uncomfortable position of having to figure out how to build AI tools to help the crew on board a Mars ship diagnose and respond to their own problems. So to be their own crew, to be their own engineering teams, at least for the subset of problems that can get really bad in the course of 45 minutes to an hour,” Maddox said.

Building those kinds of tools will require a “giant bucket of really good data,” Maddox said, “and that’s why we’ve come to IMSA.”

Individually, the hybrid prototypes and GT cars in an IMSA race are obviously far less complicated than a Mars-bound spacecraft. But when you get that data from all the cars in the race together, the size starts to become comparable.

“And fundamentally, you guys have things that roll and we have things that rotate, and you have things that get hot and cold, and so do we,” Maddox said. “When you get down to the actual measurement level, there are a lot of similarities between the stuff that you guys use to understand vehicle performance and the stuff we use to understand vehicle performance.”

Not just Mars

Other speakers pointed to areas of technology development—like tire development—that you may have read about recently here on Ars Technica. “[A tire is] a composite material made with more than 200 components with very non-linear behavior. It’s pressure-sensitive, it’s temperature-sensitive. It changes with wear… and actually, the ground interaction is also one of the worst mechanisms to try to anticipate and to understand,” said Phillippe Tramond, head of motorsport research at Michelin.

For the past four years, Michelin has been crunching data gathered from cars racing on its rubber (and the other 199 components). “And eventually, we are able to build and develop a thermomechanical tire model able to mimic and simulate tire behavior, tire performance, whatever the specification is,” Tramond said.

That tool has been quite valuable to the teams racing in the GTP class of hybrid prototypes, as it means that their driver-in-the-loop simulators are now even more faithful to real life. But Michelin has also started using the tire model when developing road tires for specific cars with individual OEMs.

For Sid Siddhartha, a principal researcher at Microsoft Research, the data is again the draw. Siddhartha has been using AI to study human behavior, including in the game Rocket League. “We were able to actually show that we can really understand and home in on individual human behavior in a very granular way, to the point where if I just observe you for two or three seconds, or if I look at some of your games, I can tell you who played it,” Siddhartha said.

That led to a new approach by the Alpine F1 team, which wanted to use Siddhartha’s AI to improve its simulation tools. F1 teams will run entirely virtual simulations on upgraded cars long before they fire those changes up in the big simulator and let their human drivers have a go (as described above). In Alpine’s case, they wanted something more realistic than a lap time simulator that just assumed perfect behavior.

The dreaded BoP

“Eventually, we are connected to IMSA, and IMSA is interested in a whole host of questions that are very interesting to us at Microsoft Research,” Siddhartha said. “They’re interested in what are the limits of driver and car? How do you balance that performance across different classes? How do you anticipate what might happen when people make different strategic decisions during the race? And how do you communicate all of this to a fan base, which has really blown me away, as John was saying, who are interested in following the sport and understanding what’s going on.”

“Sports car racing is inherently complex,” said Matt Kurdock, IMSA’s managing director of engineering. “We’ve got four different classes. We have, in each car, four different drivers. And IMSA’s challenge is to extract from this race data that’s being collected and figure out how to get an appropriate balance so that manufacturers stay engaged in the sport,” Kurdock said.

IMSA has the cars put through wind tunnels and runs CFD simulations on them as well. “We then plug all this information into one of Michelin’s tools, which is their canopy vehicle dynamic simulation, which runs in the cloud, and from this, we start generating a picture of where we believe the optimized performance of each platform is,” Kurdock said.

That’s something to think about the next time your favorite team gets the short end of the stick in the latest balance of performance—better known as BoP—update.


Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.



How one YouTuber is trying to poison the AI bots stealing her content

If you’ve been paying careful attention to YouTube recently, you may have noticed the rising trend of so-called “faceless YouTube channels” that never feature a visible human talking in the video frame. While some of these channels are simply authored by camera-shy humans, many more are fully automated through AI-powered tools to craft everything from the scripts and voiceovers to the imagery and music. Unsurprisingly, this is often sold as a way to make a quick buck off the YouTube algorithm with minimal human effort.

It’s not hard to find YouTubers complaining about a flood of these faceless channels stealing their embedded transcript files and running them through AI summarizers to generate their own instant knock-offs. But one YouTuber is trying to fight back, seeding her transcripts with junk data that is invisible to humans but poisonous to any AI that dares to try to work from a poached transcript file.

The power of the .ass

YouTuber F4mi, who creates some excellent deep dives on obscure technology, recently detailed her efforts “to poison any AI summarizers that were trying to steal my content to make slop.” The key to F4mi’s method is the .ass subtitle format, created decades ago as part of fansubbing software Advanced SubStation Alpha. Unlike simpler and more popular subtitle formats, .ass supports fancy features like fonts, colors, positioning, bold, italic, underline, and more.

It’s these fancy features that let F4mi hide AI-confounding garbage in her YouTube transcripts without impacting the subtitle experience for her human viewers. For each chunk of actual text in her subtitle file, she also inserted “two chunks of text out of bounds using the positioning feature of the .ass format, with their size and transparency set to zero so they are completely invisible.”

In those “invisible” subtitle boxes, F4mi added text from public domain works (with certain words replaced with synonyms to avoid detection) or her own LLM-generated scripts full of completely made-up facts. When those transcript files were fed into popular AI summarizer sites, that junk text ended up overwhelming the actual content, creating a totally unrelated script that would be useless to any faceless channel trying to exploit it.
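Based only on the description above, the trick can be sketched in a short script. The file layout, tag choices, and decoy text here are my own illustration of the .ass override-tag mechanism, not F4mi's actual code:

```python
# Sketch of the .ass poisoning trick described above: real captions are
# interleaved with decoy lines that use override tags to render off-screen,
# tiny, and fully transparent. (Illustrative layout; F4mi's script differs.)

HEADER = """[Script Info]
ScriptType: v4.00+
PlayResX: 1280
PlayResY: 720

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
"""

# \pos(9999,9999) pushes the line far outside the 1280x720 frame, \fs1 shrinks
# it to near nothing, and \alpha&HFF& makes it fully transparent to viewers.
INVISIBLE = r"{\pos(9999,9999)\fs1\alpha&HFF&}"

def dialogue(start, end, text, hidden=False):
    """Emit one .ass Dialogue event, optionally prefixed with invisibility tags."""
    tag = INVISIBLE if hidden else ""
    return f"Dialogue: 0,{start},{end},Default,,0,0,0,,{tag}{text}"

def poison(captions, decoys):
    """Interleave each real caption with an invisible decoy line."""
    lines = [HEADER]
    for (start, end, text), junk in zip(captions, decoys):
        lines.append(dialogue(start, end, text))               # what humans see
        lines.append(dialogue(start, end, junk, hidden=True))  # what scrapers ingest
    return "\n".join(lines)

captions = [("0:00:01.00", "0:00:04.00", "Welcome back to the channel.")]
decoys = ["It was the best of times, it was the worst of times."]
print(poison(captions, decoys))
```

A subtitle renderer shows only the visible events, while a scraper that naively concatenates every Dialogue line ends up with a transcript dominated by the decoy text.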



Democrat teams up with movie industry to propose website-blocking law

US Rep. Zoe Lofgren (D-Calif.) today proposed a law that would let copyright owners obtain court orders requiring Internet service providers to block access to foreign piracy websites. The bill would also force DNS providers to block sites.

Lofgren said in a press release that she “work[ed] for over a year with the tech, film, and television industries” on “a proposal that has a remedy for copyright infringers located overseas that does not disrupt the free Internet except for the infringers.” Lofgren said she plans to work with Republican leaders to enact the bill.

Lofgren’s press release includes a quote from Charles Rivkin, chairman and CEO of the Motion Picture Association (MPA). As we’ve previously written, the MPA has been urging Congress to pass a site-blocking law.

“More than 55 nations around the world, including democracies such as Canada, the United Kingdom, and Australia, have put in place tools similar to those proposed by Rep. Lofgren, and they have successfully reduced piracy’s harms while protecting consumer access to legal content,” Rivkin was quoted as saying in Lofgren’s press release today.

Lofgren is the ranking member of the House Science, Space, and Technology Committee and a member of the House Subcommittee on Courts, Intellectual Property, Artificial Intelligence and the Internet.

Bill called “censorious site-blocking” measure

Although Lofgren said her proposed Foreign Anti-Digital Piracy Act “preserves the open Internet,” consumer advocacy group Public Knowledge described the bill as a “censorious site-blocking” measure “that turns broadband providers into copyright police at Americans’ expense.”

“Rather than attacking the problem at its source—bringing the people running overseas piracy websites to court—Congress and its allies in the entertainment industry has decided to build out a sweeping infrastructure for censorship,” Public Knowledge Senior Policy Counsel Meredith Rose said. “Site-blocking orders force any service provider, from residential broadband providers to global DNS resolvers, to disrupt traffic from targeted websites accused of copyright infringement. More importantly, applying blocking orders to global DNS resolvers results in global blocks. This means that one court can cut off access to a website globally, based on one individual’s filing and an expedited procedure. Blocking orders are incredibly powerful weapons, ripe for abuse, and we’ve seen the messy consequences of them being implemented in other countries.”



Stem cells used to partially repair damaged hearts

When we developed the ability to convert various cells into a stem cell, it held the promise of an entirely new type of therapy. Rather than getting the body to try to fix itself with its cells or deal with the complications of organ transplants, we could convert a few adult cells to stem cells and induce them to form any tissue in the body. We could potentially repair or replace tissues with an effectively infinite supply of a patient’s own cells.

However, the Nobel Prize for induced stem cells was handed out over a decade ago, and the therapies have been slow to follow. But a group of German researchers is now describing tests in primates of a method of repairing the heart using new muscle generated from stem cells. The results are promising, if not yet providing everything that we might hope for. But they’ve been enough to start clinical trials, and similar results are being seen in humans.

Heart problems

The heart contains a lot of specialized tissues, including those that form blood vessels or specialize in conducting electrical signals. But the key to the heart is a form of specialized muscle cell, called a cardiomyocyte. Once the heart matures, the cardiomyocytes stop dividing, meaning that you end up with a fixed population. Any damage to the heart due to injury or infection does not get repaired, meaning damage will be cumulative.

This is especially problematic in cases of blocked blood vessels, which can repeatedly starve large areas of the heart of oxygen and nutrients, killing the cardiomyocytes there. This leads to a reduction in cardiac function and can ultimately result in death.

It turns out, however, that it’s relatively easy to convert induced pluripotent stem cells (iPSCs; pluripotent means they can form any cell type) into cardiomyocytes. So researchers tried injecting these stem-cell-derived cardiomyocytes into damaged hearts in experimental animals, in the hope that they would be incorporated into the damaged tissue. But these experiments didn’t always provide clear benefits to the animals.



GOG revamps its “Dreamlist” feature to better pry old games out of publishers

Black & White was intriguing; it had classic Molyneux over-reach and deserves, in the words of one Ars staffer, a re-release so that “a new generation can realize just how janky it is.” As detailed in a documentary by Noclip, the B&W games are stuck in publishing purgatory. Microsoft acquired Lionhead’s IP and assets, while Electronic Arts retains the publishing rights to the B&W games, and nobody has yet been able to align those two very large planets.

GOG has added its own “Our Pick” tag to games it wants to see brought forward onto modern systems. Among them is Freelancer, which Ars’ Samuel Axon described in our 2024 roundup of non-2024 games as “a sincere attempt to make games like Elite (Dangerous) and Wing Commander: Privateer far more accessible.” GOG selected Freelancer as one of its staff picks for the Dreamlist, citing its “dynamic economy and engaging storyline.”

The main thing GOG would be fixing with Freelancer, as with many games, would be simple availability, as the game is not available on any proper digital storefront. Axon reports that, with an original disc in hand, installing Freelancer was not too hard, with the installer working in Windows 11. You can apply community patches, like an “HD Edition” mod, but Axon preferred playing at a non-native resolution (1024×768) at 4:3 and adjusting his monitor.

Other notable games GOG and its voting public want to see brought back are Final Fantasy VII (the original, not the remake), the point-and-click Discworld adventure, Command & Conquer: The Ultimate Collection, and The Operative: No One Lives Forever.



AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt


Making AI crawlers squirm

Attackers explain how an anti-spam defense became an AI weapon.

Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.

And it wasn’t the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit’s CEO called out all AI companies whose crawlers he said were “a pain in the ass to block,” despite the tech industry otherwise agreeing to respect “no scraping” robots.txt rules.
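For context, robots.txt is nothing more than a plain-text file served at a site’s root, and the “no scraping” rules in question are voluntary directives. A minimal example (GPTBot and ClaudeBot are the published user-agent names for OpenAI’s and Anthropic’s crawlers; the comment is ours):

```text
# Served at https://example.com/robots.txt
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```

Nothing technically enforces these rules; compliance runs on the honor system, which is precisely the complaint driving the attacks described below.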

Watching the controversy unfold was a software developer whom Ars has granted anonymity to discuss his development of malware (we’ll call him Aaron). Shortly after he noticed Facebook’s crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers “clobbering” websites that he told Ars he hoped would give “teeth” to robots.txt.

Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
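The basic mechanics are simple to sketch. Below is a hypothetical, minimal tarpit in Python (this is not Nepenthes’ actual code, which the article doesn’t reproduce); deriving each page deterministically from its URL makes the maze look like a set of static files while never ending:

```python
import hashlib
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "consectetur"]

def maze_page(path: str) -> str:
    """Build the page for a maze URL. Seeding the RNG from the path means
    every URL returns the same content on every visit, so the trap looks
    static, but each page links to ten more pages that also resolve,
    forever, with no exit links."""
    seed = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    babble = " ".join(rng.choice(WORDS) for _ in range(200))
    links = " ".join(
        f'<a href="/maze/{rng.getrandbits(64):016x}">next</a>'
        for _ in range(10)
    )
    return f"<html><body><p>{babble}</p>{links}</body></html>"

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any GET, for any path, gets another layer of the maze.
        body = maze_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# To run the trap: HTTPServer(("", 8080), TarpitHandler).serve_forever()
```

Real tarpits add deliberate response delays and better-disguised text, but the sketch shows the asymmetry: generating the maze is nearly free for the defender, while exhaustively crawling it is impossible.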

Tarpits were originally designed to waste spammers’ time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon. As of this writing, Aaron confirmed that Nepenthes can effectively trap all the major web crawlers. So far, only OpenAI’s crawler has managed to escape.

It’s unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft’s director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI “has been quite vigilant” and excels at detecting the “first signs of data poisoning attempts.”

Despite these efforts, he concluded that data poisoning was “a serious threat to machine learning models.” And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.

“A link to a Nepenthes location from your site will flood out valid URLs within your site’s domain name, making it unlikely the crawler will access real content,” a Nepenthes explainer reads.

The only AI company that responded to Ars’ request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.

“We’re aware of efforts to disrupt AI web crawlers,” OpenAI’s spokesperson said. “We design our systems to be resilient while respecting robots.txt and standard web practices.”

But to Aaron, the fight is not about winning. Instead, it’s about resisting the AI industry further decaying the Internet with tech that no one asked for, like chatbots that replace customer service agents or the rise of inaccurate AI search summaries. By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies’ AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.

“Ultimately, it’s like the Internet that I grew up on and loved is long gone,” Aaron told Ars. “I’m just fed up, and you know what? Let’s fight back, even if it’s not successful. Be indigestible. Grow spikes.”

Nepenthes instantly inspires another tarpit

Nepenthes was released in mid-January but was instantly popularized beyond Aaron’s expectations after tech journalist Cory Doctorow boosted a Mastodon post from tech commentator Jürgen Geuter praising the novel AI attack method. Very quickly, Aaron was shocked to see engagement with Nepenthes skyrocket.

“That’s when I realized, ‘oh this is going to be something,'” Aaron told Ars. “I’m kind of shocked by how much it’s blown up.”

It’s hard to tell how widely Nepenthes has been deployed. Site owners are discouraged from flagging that the malware is running, so crawlers that ignore robots.txt instructions face unknown “consequences.”

Aaron told Ars that while “a handful” of site owners have reached out and “most people are being quiet about it,” his web server logs indicate that people are already deploying the tool. Likely, site owners want to protect their content, deter scraping, or mess with AI companies.

When software developer and hacker Gergely Nagy, who goes by the handle “algernon” online, saw Nepenthes, he was delighted. At that time, Nagy told Ars that nearly all of his server’s bandwidth was being “eaten” by AI crawlers.

Already blocking scraping and attempting to poison AI models through a simpler method, Nagy took his defense method further and created his own tarpit, Iocaine. He told Ars the tarpit immediately killed off about 94 percent of bot traffic to his site, which was primarily from AI crawlers. Soon, social media discussion drove users to inquire about Iocaine deployment, including not just individuals but also organizations wanting to take stronger steps to block scraping.

Iocaine takes ideas (not code) from Nepenthes, but it’s more intent on using the tarpit to poison AI models. Nagy used a reverse proxy to trap crawlers in an “infinite maze of garbage” in an attempt to slowly poison their data collection as much as possible for daring to ignore robots.txt.
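The “garbage” in question is typically Markov-chain text: statistically word-shaped but semantically empty. A toy version of the idea (illustrative only, not Iocaine’s actual implementation; real tools train on far larger corpora):

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict, start: str, n: int, seed: int = 0) -> str:
    """Walk the chain for n words, restarting at random on dead ends."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(n - 1):
        followers = chain.get(word)
        word = rng.choice(followers) if followers else rng.choice(list(chain))
        out.append(word)
    return " ".join(out)
```

Because each word plausibly follows the last, the output is locally word-shaped enough that a scraper can’t cheaply filter it from real prose, which is what makes it attractive as training-data poison.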

Taking its name from “one of the deadliest poisons known to man” from The Princess Bride, Iocaine is jokingly depicted as the “deadliest poison known to AI.” While there’s no way of validating that claim, Nagy’s motto is that the more poisoning attacks that are out there, “the merrier.” He told Ars that his primary reasons for building Iocaine were to help rights holders wall off valuable content and stop AI crawlers from crawling with abandon.

Tarpits aren’t perfect weapons against AI

Running malware like Nepenthes can burden servers, too. Aaron likened the cost of running Nepenthes to running a cheap virtual machine on a Raspberry Pi, and Nagy said that serving crawlers Iocaine costs about the same as serving his website.

But Aaron told Ars that Nepenthes wasting resources is the chief objection he’s seen preventing its deployment. Critics fear that deploying Nepenthes widely will not only burden their servers but also increase the costs of powering all that AI crawling for nothing.

“That seems to be what they’re worried about more than anything,” Aaron told Ars. “The amount of power that AI models require is already astronomical, and I’m making it worse. And my view of that is, OK, so if I do nothing, AI models, they boil the planet. If I switch this on, they boil the planet. How is that my fault?”

Aaron also defends against this criticism by suggesting that a broader impact could slow down AI investment enough to possibly curb some of that energy consumption. Perhaps due to the resistance, AI companies will be pushed to seek permission first to scrape or agree to pay more content creators for training on their data.

“Any time one of these crawlers pulls from my tarpit, it’s resources they’ve consumed and will have to pay hard cash for, but, being bullshit, the money [they] have spent to get it won’t be paid back by revenue,” Aaron posted, explaining his tactic online. “It effectively raises their costs. And seeing how none of them have turned a profit yet, that’s a big problem for them. The investor money will not continue forever without the investors getting paid.”

Nagy agrees that the more anti-AI attacks there are, the greater the potential is for them to have an impact. And by releasing Iocaine, Nagy showed that social media chatter about new attacks can inspire new tools within a few days. Marcus Butler, an independent software developer, similarly built his poisoning attack called Quixotic over a few days, he told Ars. Soon afterward, he received messages from others who built their own versions of his tool.

Butler is not in the camp of wanting to destroy AI. He told Ars that he doesn’t think “tools like Quixotic (or Nepenthes) will ‘burn AI to the ground.'” Instead, he takes a more measured stance, suggesting that “these tools provide a little protection (a very little protection) against scrapers taking content and, say, reposting it or using it for training purposes.”

But for a certain sect of Internet users, every little bit of protection seemingly helps. Geuter linked Ars to a list of tools bent on sabotaging AI. Ultimately, he expects that tools like Nepenthes are “probably not gonna be useful in the long run” because AI companies can likely detect and drop gibberish from training data. But Nepenthes represents a sea change, Geuter told Ars, providing a useful tool for people who “feel helpless” in the face of endless scraping and showing that “the story of there being no alternative or choice is false.”

Criticism of tarpits as AI weapons

Critics debating Nepenthes’ utility on Hacker News suggested that most AI crawlers could easily avoid tarpits like Nepenthes, with one commenter describing the attack as being “very crawler 101.” Aaron said that was his “favorite comment” because if tarpits are considered elementary attacks, he has “2 million lines of access log that show that Google didn’t graduate.”

But efforts to poison AI or waste AI resources don’t just mess with the tech industry. Governments globally are seeking to leverage AI to solve societal problems, and attacks on AI’s resilience seemingly threaten to disrupt that progress.

Nathan VanHoudnos is a senior AI security research scientist in the federally funded CERT Division of the Carnegie Mellon University Software Engineering Institute, which partners with academia, industry, law enforcement, and government to “improve the security and resilience of computer systems and networks.” He told Ars that new threats like tarpits seem to replicate a problem that AI companies are already well aware of: “that some of the stuff that you’re going to download from the Internet might not be good for you.”

“It sounds like these tarpit creators just mainly want to cause a little bit of trouble,” VanHoudnos said. “They want to make it a little harder for these folks to get” the “better or different” data “that they’re looking for.”

VanHoudnos co-authored a paper on “Counter AI” last August, pointing out that attackers like Aaron and Nagy are limited in how much they can mess with AI models. They may have “influence over what training data is collected but may not be able to control how the data are labeled, have access to the trained model, or have access to the AI system,” the paper said.

Further, AI companies are increasingly turning to the deep web for unique data, so any efforts to wall off valuable content with tarpits may be coming right when crawling on the surface web starts to slow, VanHoudnos suggested.

But according to VanHoudnos, AI crawlers are also “relatively cheap,” and companies may deprioritize fighting against new attacks on crawlers if “there are higher-priority assets” under attack. And tarpitting “does need to be taken seriously because it is a tool in a toolkit throughout the whole life cycle of these systems. There is no silver bullet, but this is an interesting tool in a toolkit,” he said.

Offering a choice to abstain from AI training

Aaron told Ars that he never intended Nepenthes to be a major project but that he occasionally puts in work to fix bugs or add new features. He said he’d consider working on integrations for real-time reactions to crawlers if there was enough demand.

Currently, Aaron predicts that Nepenthes might be most attractive to rights holders who want AI companies to pay to scrape their data. And many people seem enthusiastic about using it to reinforce robots.txt. But “some of the most exciting people are in the ‘let it burn’ category,” Aaron said. These people are drawn to tools like Nepenthes as an act of rebellion against AI making the Internet less useful and enjoyable for users.

Geuter told Ars that he considers Nepenthes “more of a sociopolitical statement than really a technological solution (because the problem it’s trying to address isn’t purely technical, it’s social, political, legal, and needs way bigger levers).”

To Geuter, a computer scientist who has been writing about the social, political, and structural impact of tech for two decades, AI is the “most aggressive” example of “technologies that are not done ‘for us’ but ‘to us.'”

“It feels a bit like the social contract that society and the tech sector/engineering have had (you build useful things, and we’re OK with you being well-off) has been canceled from one side,” Geuter said. “And that side now wants to have its toy eat the world. People feel threatened and want the threats to stop.”

As AI evolves, so do attacks, with one 2021 study showing that increasingly stronger data poisoning attacks, for example, were able to break data sanitization defenses. Whether these attacks can ever do meaningful destruction or not, Geuter sees tarpits as a “powerful symbol” of the resistance that Aaron and Nagy readily joined.

“It’s a great sign to see that people are challenging the notion that we all have to do AI now,” Geuter said. “Because we don’t. It’s a choice. A choice that mostly benefits monopolists.”

Tarpit creators like Nagy will likely be watching to see if poisoning attacks continue growing in sophistication. On the Iocaine site—which, yes, is protected from scraping by Iocaine—he posted this call to action: “Let’s make AI poisoning the norm. If we all do it, they won’t have anything to crawl.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



States say they’ve been shut out of Medicaid amid Trump funding freeze

Amid the Trump administration’s abrupt, wide-scale freeze on federal funding, states are reporting that they’ve lost access to Medicaid, a program jointly funded by the federal government and states to provide comprehensive health coverage and care to tens of millions of low-income adults and children in the US.

The funding freeze was announced in a memo dated January 27 from Matthew Vaeth, the acting director of the Office of Management and Budget, and was first reported Monday evening by independent journalist Marisa Kabas. The freeze is intended to prevent “use of Federal resources to advance Marxist equity, transgenderism, and green new deal social engineering policies,” Vaeth wrote. The memo ordered federal agencies to complete a comprehensive analysis of all federal financial assistance programs to ensure they align with the president’s policies and requirements.

“In the interim, to the extent permissible under applicable law, Federal agencies must temporarily pause all activities related to obligation or disbursement of all Federal financial assistance, and other relevant agency activities that may be implicated by the executive orders…” Vaeth wrote.

Illinois was the first state to report that it had lost access to Medicaid. According to the Chicago Sun-Times, Gov. JB Pritzker’s office expected the freeze to go into effect at 5 pm Eastern Time today but found the state locked out this morning. The Sun-Times noted that Medicaid covered about 3.9 million people in Illinois in 2023, including low-income adults, children, pregnant people, and people with disabilities.

In a post Tuesday afternoon on the social media platform Bluesky, Senator Ron Wyden (D-Ore.) reported that all 50 states have since lost access. “My staff has confirmed reports that Medicaid portals are down in all 50 states following last night’s federal funding freeze,” Wyden wrote. “This is a blatant attempt to rip away health care from millions of Americans overnight and will get people killed.”



There’s not much for anyone to like in the Star Trek: Section 31 movie

It is, in a word, awful. Which is really a shame!

Putting the “TV” in “TV movie”

Sam Richardson as Quasi, a shape-shifter. Comedy and melodrama coexist uneasily throughout Section 31. Credit: Michael Gibson/Paramount+

The movie explains its premise clearly enough, albeit in a clumsy exposition-heavy voiceover section near the beginning: Philippa Georgiou (Michelle Yeoh) was once the ruler of the bloodthirsty Terran Empire, an evil mirror of Star Trek’s utopian United Federation of Planets. She crossed over into “our” universe and gradually reformed, sort of, before vanishing. Now Section 31—Starfleet’s version of the CIA, more or less—needs to track her down and enlist her to help them save the galaxy from another threat that has crossed over from the evil universe to ours.

Emperor Georgiou originated on Star Trek: Discovery, and she was a consistently fun presence on a very uneven show. Yeoh clearly had a blast playing a sadistic, horny version of the kind and upstanding Captain Georgiou who died in Discovery‘s premiere.

But that fun is mostly absent here. To the extent that anything about Section 31 works, it’s as a sort of brain-off generic sci-fi action movie, Star Trek’s stab at a Suicide Squad-esque antihero story. Things happen in space, sometimes in a spaceship. There is some fighting, though nearly all of it involves punching instead of phasers or photon torpedoes. There is an Important Item that needs to be chased down, for the Fate of the Universe is at stake.

But the movie also feels more like a failed spin-off pilot that never made it to series, and it suffers for it; it’s chopped up into four episode-like “chapters” and has to establish an entire crew’s worth of quirky misfits inside a 10-minute montage.

That might work if the script or the performers could make any of the characters endearing, but the script can’t, and they don’t. Performances are almost uniformly bad, ranging from inert to unbearable to “not trying particularly hard” (respectively: Omari Hardwick’s Alok, a humorless genetically augmented human; Sven Ruygrok’s horrifically grating Fuzz, a tiny and inexplicably Irish alien piloting a Vulcan-shaped robot; and Sam Richardson’s Quasi, whose amiable patter is right at home on Detroiters and I Think You Should Leave but is mostly distracting here). Every time one of these characters ends up dead, you feel a sense of relief because there’s one fewer one-note character to pay attention to.
