AI slop


New OpenAI tool renews fears that “AI slop” will overwhelm scientific research


New “Prism” workspace launches just as studies show AI-assisted papers are flooding journals with diminished quality.

On Tuesday, OpenAI released a free AI-powered workspace for scientists. It’s called Prism, and it has drawn immediate skepticism from researchers who fear the tool will accelerate the already overwhelming flood of low-quality papers into scientific journals. The launch coincides with growing alarm among publishers about what many are calling “AI slop” in academic publishing.

To be clear, Prism is a writing and formatting tool, not a system for conducting research itself, though OpenAI’s broader pitch blurs that line.

Prism integrates OpenAI’s GPT-5.2 model into a text editor built around LaTeX (the typesetting system that is standard for scientific documents), allowing researchers to draft papers, generate citations, create diagrams from whiteboard sketches, and collaborate with co-authors in real time. The tool is free for anyone with a ChatGPT account.

“I think 2026 will be for AI and science what 2025 was for AI in software engineering,” Kevin Weil, vice president of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on “hard science” topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.

OpenAI built Prism on technology from Crixet, a cloud-based LaTeX platform the company acquired in late 2025. The company envisions Prism helping researchers spend less time on tedious formatting tasks and more time on actual science. During a demonstration, an OpenAI employee showed how the software could automatically find and incorporate relevant scientific literature, then format the bibliography.

But AI models are tools, and any tool can be misused. The risk here is specific: By making it easy to produce polished, professional-looking manuscripts, tools like Prism could flood the peer review system with papers that don’t meaningfully advance their fields. The barrier to producing science-flavored text is dropping, but the capacity to evaluate that research has not kept pace.

When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that “none of this absolves the scientist of the responsibility to verify that their references are correct.”

Unlike traditional reference management software (such as EndNote), which has formatted citations for over 30 years without inventing them, AI models can generate plausible-sounding sources that don’t exist. Weil added: “We’re conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community.”
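That verification is at least partly mechanizable, even if no tool mentioned here does it for you. As a minimal illustration (a sketch, not a feature of Prism or any publisher’s pipeline), a few lines of Python can check whether a manuscript’s DOIs actually resolve against the public Crossref REST API; the DOIs below are placeholders:

```python
# Sketch: flag citations whose DOIs don't resolve, a cheap first check
# against confabulated references. Requires the third-party "requests"
# package and queries the public Crossref REST API.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        headers={"User-Agent": "citation-check/0.1 (mailto:you@example.org)"},
        timeout=10,
    )
    return resp.status_code == 200

# Placeholder DOIs for illustration only.
for doi in ["10.1000/example.real", "10.9999/example.fake"]:
    verdict = "found" if doi_exists(doi) else "NOT FOUND - verify by hand"
    print(f"{doi}: {verdict}")
```

A missing DOI doesn’t prove fabrication, and a resolving one doesn’t prove the source says what the paper claims it says, which is why Weil’s point about human responsibility still stands.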

The slop problem

Those concerns are not hypothetical, as we have previously covered. A December 2025 study published in the journal Science found that researchers using large language models to write papers increased their output by 30 to 50 percent, depending on the field. But those AI-assisted papers performed worse in peer review. Papers with complex language written without AI assistance were the most likely to be accepted by journals, while similarly complex papers that appeared to be AI-written fared worse. Reviewers apparently recognized that sophisticated prose was masking weak science.

“It is a very widespread pattern across different fields of science,” Yian Yin, an information science professor at Cornell University and one of the study’s authors, told the Cornell Chronicle. “There’s a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.”

Another analysis of 41 million papers published between 1980 and 2025 found that while AI-using scientists receive more citations and publish more papers, the collective scope of scientific exploration appears to be narrowing. Lisa Messeri, a sociocultural anthropologist at Yale University, told Science magazine that these findings should set off “loud alarm bells” for the research community.

“Science is nothing but a collective endeavor,” she said. “There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science.”

Concerns about AI-generated scientific content are not new. In 2022, Meta pulled a demo of Galactica, a large language model designed to write scientific literature, after users discovered it could generate convincing nonsense on any topic, including a wiki entry about a fictional research paper called “The benefits of eating crushed glass.” Two years later, Tokyo-based Sakana AI announced “The AI Scientist,” an autonomous research system that critics on Hacker News dismissed as producing “garbage” papers. “As an editor of a journal, I would likely desk-reject them,” one commenter wrote at the time. “They contain very limited novel knowledge.”

The problem has only grown worse since then. In his first editorial of 2026 for Science, Editor-in-Chief H. Holden Thorp wrote that the journal is “less susceptible” to AI slop because of its size and human editorial investment, but he warned that “no system, human or artificial, can catch everything.” Science currently allows limited AI use for editing and gathering references but requires disclosure for anything beyond that and prohibits AI-generated figures.

Mandy Hill, managing director of academic publishing at Cambridge University Press & Assessment, has been even more blunt. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for “radical change.” She explained to the University of Cambridge publication Varsity that “too many journal articles are being published, and this is causing huge strain” and warned that AI “will exacerbate” the problem.

Accelerating science or overwhelming peer review?

OpenAI is serious about pitching AI as an accelerant for science, and the company laid out its case for AI-assisted research in a report published earlier this week. The report profiles researchers who say AI models have sped up their work, including a mathematician who used GPT-5.2 to solve an open problem in optimization over three evenings and a physicist who watched the model reproduce symmetry calculations that had taken him months to derive.

Those examples go beyond writing assistance into using AI for actual research work, a distinction OpenAI’s marketing intentionally blurs. For scientists who don’t speak English fluently, AI writing tools could legitimately accelerate the publication of good research. But that benefit may be offset by a flood of mediocre submissions jamming up an already strained peer-review system.

Weil told MIT Technology Review that his goal is not to produce a single AI-generated discovery but rather “10,000 advances in science that maybe wouldn’t have happened or wouldn’t have happened as quickly.” He described this as “an incremental, compounding acceleration.”

Whether that acceleration produces more scientific knowledge or simply more scientific papers remains to be seen. Nikita Zhivotovskiy, a statistician at UC Berkeley not connected to OpenAI, told MIT Technology Review that GPT-5 has already become valuable in his own work for polishing text and catching mathematical typos, making “interaction with the scientific literature smoother.”

But by making papers look polished and professional regardless of their scientific merit, AI writing tools may help weak research clear the initial screening that editors and reviewers use to assess presentation quality. The risk is that conversational workflows obscure assumptions and blur accountability, and that the resulting volume might overwhelm the still very human peer review process required to vet it all.

OpenAI appears aware of this tension. Its public statements about Prism emphasize that the tool will not conduct research independently and that human scientists remain responsible for verification.

Still, one commenter on Hacker News captured the anxiety spreading through technical communities: “I’m scared that this type of thing is going to do to science journals what AI-generated bug reports is doing to bug bounties. We’re truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it’s drowning out everything of value.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



Merriam-Webster’s word of the year delivers a dismissive verdict on junk AI content

Like most tools, generative AI models can be misused. And when the misuse gets bad enough that a major dictionary notices, you know it’s become a cultural phenomenon.

On Sunday, Merriam-Webster announced that “slop” is its 2025 Word of the Year, reflecting how the term has become shorthand for the flood of low-quality AI-generated content that has spread across social media, search results, and the web at large. The dictionary defines slop as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.”

“It’s such an illustrative word,” Merriam-Webster president Greg Barlow told the Associated Press. “It’s part of a transformative technology, AI, and it’s something that people have found fascinating, annoying, and a little bit ridiculous.”

To select its Word of the Year, Merriam-Webster’s editors review data on which words rose in search volume and usage, then reach consensus on which term best captures the year. Barlow told the AP that the spike in searches for “slop” reflects growing awareness among users that they are encountering fake or shoddy content online.

Dictionaries have been tracking AI’s impact on language for the past few years, with Cambridge having selected “hallucinate” as its 2023 word of the year due to the tendency of AI models to generate plausible-but-false information (long-time Ars readers will be happy to hear there’s another term for that in the dictionary as well).

The trend extends to online culture in general, which is rife with new coinages. This year, Oxford University Press chose “rage bait,” referring to content designed to provoke anger for engagement. Cambridge Dictionary selected “parasocial,” describing one-sided relationships between fans and celebrities or influencers.

The difference between the baby and the bathwater

As the AP points out, the word “slop” originally entered English in the 1700s to mean soft mud. By the 1800s, it had evolved to describe food waste fed to pigs, and eventually came to mean rubbish or products of little value. The new AI-related definition builds on that history of describing something unwanted and unpleasant.



Three bizarre home devices and a couple good things at CES 2025


You can’t replace cats with AI, not yet

Some quietly good things made an appearance at CES 2025, amidst the AI slush. Credit: Verity Burns/WIRED UK

Every year, thousands of product vendors, journalists, and gadget enthusiasts gather in an unreasonable city to gawk at mostly unrealistic products.

To be of service to our readers, Ars has done the work of looking through hundreds of such items presented at the 2025 Consumer Electronics Show, pulling out the most bizarre, unnecessary, and head-scratching items. Andrew Cunningham swept across PC and gaming accessories. This writer stuck to goods related to the home.

It would be a lie to say it’s all a prank, so I snuck in a couple of actually good things for human domiciles announced during CES. But the stuff you’ll want to tell your family and friends about in mock disbelief? Plenty of that, still.

AI-powered spice dispenser: Spicerr

Image: A hand holding the white tubular Spicerr dispenser, spice tubes loaded into its base, dispensing spice from the bottom. Credit: Spicerr

Part of my job is to try and stretch my viewpoint outward—to encompass people who might not have the same experiences and who might want different things from technology. Not everybody is a professional writer, pecking away in Markdown about the latest turn-based strategy game. You must try to hear many timbres inside the common voice in your head when addressing new products and technologies.

I cannot get there with Spicerr, the “world’s first AI-powered spice dispenser,” even leaving aside the AI bit. Is the measurement and dumping of spices into a dish even five percent of the overall challenge? Will a mechanical dispenser be any more precise than standard teaspoons? Are there many kinds of food on which you would want to sprinkle a “customized blend” of spices? Are there home cooks so dedicated to fresh, bright flavors that they want their spices delivered in small vials, at presumably premium prices, rather than simply having small quantities of regularly restocked essentials?

Maybe the Spicerr would be a boon to inexperienced cooks, whose relatives all know them to under-season their food. Rather than buying them a battery-powered device they must charge to “take the guesswork out of seasoning,” though, you could … buy them good cookbooks, or a Times Cooking subscription, or just a few new bottles of paprika, oregano, cumin, cayenne, and turmeric.

Philips Hue’s (sigh) AI-powered lighting assistants

Image: An AI assistant responding to prompts from a user. Credit: Signify

I’m not dismayed that Philips Hue is jumping on the “This has AI now” bandwagon. Well, I am, but not specifically dismayed, because every vendor at CES this year is hawking AI. No, the bad thing here is that Hue lights are devices that work great. Maybe Philips’ pursuit of an “AI assistant” to help you figure out that Halloween lights should be orange-ish won’t distract the company from its core product’s reliability. But I have my doubts.

Hue has recently moved from a relatively open lighting system to an app-and-account-required, cloud-controlled scheme, supposedly in the name of security and user control. An AI assistant is perhaps another way to sell services beyond hardware, like the LG TV app Hue now offers for $130 up front or $3 per month. The AI service is free for now, but charging for it in the future is far from impossible.

Again, none of this should necessarily affect people who, like me, use Hue bulbs to have a porch light come on at sunset or turn a dim, warm hue when it’s time to wind down. But it feels like Hue, which charges a premium for its hardware, might have held off on chasing this trend.
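For context on what “relatively open” meant: the classic Hue bridge exposed a local HTTP API that anything on your home network could drive after a one-time button-press pairing. Here’s a minimal Python sketch of that older local API, with the bridge address and API key as placeholders:

```python
# Sketch: the kind of local, no-cloud control the classic Hue bridge
# API (CLIP v1) allowed over plain HTTP on the home network.
# BRIDGE_IP and USERNAME are placeholders; the "username" is the key
# the bridge issues after you press its physical link button.
import requests

BRIDGE_IP = "192.168.1.2"          # placeholder bridge address
USERNAME = "your-local-api-key"    # placeholder authorized key

base = f"http://{BRIDGE_IP}/api/{USERNAME}"

# List the lights the bridge knows about.
lights = requests.get(f"{base}/lights", timeout=5).json()
print(f"{len(lights)} lights on this bridge")

# Dim light 1 to a warm, low level, all on the LAN, no account needed.
requests.put(
    f"{base}/lights/1/state",
    json={"on": True, "bri": 80, "ct": 450},  # ct 450 = warm white
    timeout=5,
)
```

That kind of LAN-only control is exactly what the newer app-and-account scheme layers a cloud on top of.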

Robot vacuums doing way too much

Image: SwitchBot’s K20+ Pro holding up a tablet while a woman does a yoga pose in front of an insanely wealthy-person view of a California cliffside. Credit: SwitchBot

Robot vacuums are sometimes worth the hassle and price… if you don’t mind doing a pre-vacuum sweep of things that might get stuck in its brushes, you’ve got room for an emptying base or will empty it yourself, and you don’t mind that they usually miss floor edges and corners. They’re fine, I’m saying.

Robot vacuum makers have steadfastly refused to accept “fine” and are out way over their skis this year. At a single trade show, you can find:

  • Eureka’s J15 Max Ultra, which uses “IntelliView AI 2.0,” infrared, and FHD vision to detect liquid spills, then switches brushes and suction to clean them up without spreading the mess.
  • Roborock’s Saros Z70, which has a “mechanical task arm” that can pick up objects like socks and small debris (up to 10.5 ounces) and deposit them in a predetermined pile spot.
  • SwitchBot’s modular K20+ Pro, a vacuum onto which you can attach air purifiers, tablet mounts, security cameras, or other things you want rolling around your home.
  • Dreame’s X50, which can pivot to clean some small ledges but cannot actually climb.
  • The Narwal Flow, which has a wide, flat, off-center mop to reach wall edges.

Pricing and availability are not available for these vacuums yet, but each is likely to set you back the equivalent of at least one new MacBook. They are also rather big devices to stash in your home (it’s hard to hide an arm or an air purifier). Each is an early adopter device, and getting replacement consumable parts for them long-term is an uncertain bet. I’m not sure who they are for, but that has not stopped this apparently fertile field from growing many new products.

Now for good things, starting with Google Home

Image: A second-generation Nest Hub on a nightstand with a bamboo top, a dim lamp in the near background. Credit: Corey Gaskin

I’ve been watching and occasionally writing about the progress of the nascent Matter smart home protocol, somewhat in the vein of a high school coach who knows their team is held back by a lack of coordination, communication, and consistent direction. What Matter wants to do is vital for the future of the smart home, but it’s very much a loose scrimmage right now.

And yet, this week, in a CES-adjacent announcement, Google reminded me that Matter can really, uh, matter. All of Google Home’s hub devices—Nest screens and speakers, Chromecasts, Google TV devices running at least Android 14, and a few other gadgets—can interoperate with Matter devices locally, with no cloud required.

That means people with a Google Home setup can toggle devices, adjust volume, and otherwise control their gear faster, with Internet outages or latency no longer an issue. Local, no-cloud-required control of devices across brands is one of Matter’s key promises, and seeing it happen inside one major home brand is encouraging.
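The local story is observable from any laptop: commissioned Matter devices advertise themselves over ordinary DNS-SD (service type “_matter._tcp”) on the LAN. Here’s a minimal discovery sketch using the third-party Python zeroconf package; no cloud account is involved anywhere:

```python
# Sketch: discover Matter devices on the local network via DNS-SD.
# Assumes the third-party "zeroconf" package (pip install zeroconf).
# Commissioned Matter devices advertise the "_matter._tcp" service.
import time
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class MatterListener(ServiceListener):
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            addrs = info.parsed_addresses()
            print(f"Found Matter node: {name} at {addrs}, port {info.port}")

    def update_service(self, zc, type_, name):
        pass

    def remove_service(self, zc, type_, name):
        print(f"Matter node left: {name}")

zc = Zeroconf()
browser = ServiceBrowser(zc, "_matter._tcp.local.", MatterListener())
try:
    time.sleep(10)  # browse for ten seconds, then exit
finally:
    zc.close()
```

Actually commanding those devices still requires a Matter fabric and commissioning keys, but the addressing and transport never have to leave your network.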

More we’ll-see-what-happens news is the unveiling of the public Home APIs, which promise to make it easier for third-party devices to be set up, integrated, and automated in a Google Home setup. Even if you’re skeptical of Google’s long-term support for APIs, the company is also working with the Matter group to improve the Matter certification process for all devices. Should enthusiasm for the Google Home APIs fail to materialize, device makers will still have Matter to fall back on.

This cat tower is also an air purifier; it is also good

Image: Two fake cats sitting on seats atop an air purifier at CES 2025. Credit: Verity Burns/WIRED UK

There are a lot of phones out there that need charging and a bunch of gamers who, for some reason, need even more controllers and screens to play on. But there is another, eternally underserved market getting some attention at CES: cats wanting to sit.

LG, which primarily concerned itself with stuffing generative AI interfaces into every other device at CES 2025, crafted something that feels like a real old-time trade show gimmick. There is no guarantee that your cat will use the AeroCat Tower; some cats may just sit inside the cardboard box it came in out of spite. But should they deign to luxuriate on it, the AeroCat will provide gentle heat beneath them, weigh them, and give you a record of their sleep habits. Also, it purifies the air in that room.

There is no pricing or availability information yet. But if you like your cats, want to combine the functions of a cat tower and an air purifier, or just want to see something even a little bit fun in the march of technology, look out for this one.


Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.
