GPT-5


New OpenAI tool renews fears that “AI slop” will overwhelm scientific research


New “Prism” workspace launches just as studies show that AI-assisted papers of diminished quality are flooding journals.

On Tuesday, OpenAI released a free AI-powered workspace for scientists. It’s called Prism, and it has drawn immediate skepticism from researchers who fear the tool will accelerate the already overwhelming flood of low-quality papers into scientific journals. The launch coincides with growing alarm among publishers about what many are calling “AI slop” in academic publishing.

To be clear, Prism is a writing and formatting tool, not a system for conducting research itself, though OpenAI’s broader pitch blurs that line.

Prism integrates OpenAI’s GPT-5.2 model into a LaTeX-based text editor (a standard used for typesetting documents), allowing researchers to draft papers, generate citations, create diagrams from whiteboard sketches, and collaborate with co-authors in real time. The tool is free for anyone with a ChatGPT account.
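For readers who have not worked in LaTeX, the minimal sketch below shows roughly the kind of source a tool like Prism edits and compiles. The citation key and bibliography file name are placeholders for illustration, not anything shown in OpenAI’s demo.

```latex
% A bare-bones LaTeX paper skeleton of the sort a tool like Prism edits.
% "placeholder2024" and "references.bib" are invented names for illustration.
\documentclass{article}

\begin{document}

\title{A Sample Paper}
\author{A. Researcher}
\maketitle

\section{Introduction}
Prior work has examined this problem in depth \cite{placeholder2024}.

\bibliographystyle{plain}
\bibliography{references} % pulls formatted entries from references.bib

\end{document}
```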

“I think 2026 will be for AI and science what 2025 was for AI in software engineering,” Kevin Weil, vice president of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on “hard science” topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.

OpenAI built Prism on technology from Crixet, a cloud-based LaTeX platform the company acquired in late 2025. The company envisions Prism helping researchers spend less time on tedious formatting tasks and more time on actual science. During a demonstration, an OpenAI employee showed how the software could automatically find and incorporate relevant scientific literature, then format the bibliography.

But AI models are tools, and any tool can be misused. The risk here is specific: By making it easy to produce polished, professional-looking manuscripts, tools like Prism could flood the peer review system with papers that don’t meaningfully advance their fields. The barrier to producing science-flavored text is dropping, but the capacity to evaluate that research has not kept pace.

When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that “none of this absolves the scientist of the responsibility to verify that their references are correct.”

Unlike traditional reference management software (such as EndNote), which has formatted citations for over 30 years without inventing them, AI models can generate plausible-sounding sources that don’t exist. Weil added: “We’re conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community.”

The slop problem

Those concerns are not hypothetical, as we have previously covered. A December 2025 study published in the journal Science found that researchers using large language models to write papers increased their output by 30 to 50 percent, depending on the field. But those AI-assisted papers performed worse in peer review. Among papers written in complex language, those produced without AI assistance were the most likely to be accepted by journals, while those likely written with AI models fared worse. Reviewers apparently recognized that sophisticated prose was masking weak science.

“It is a very widespread pattern across different fields of science,” Yian Yin, an information science professor at Cornell University and one of the study’s authors, told the Cornell Chronicle. “There’s a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.”

Another analysis of 41 million papers published between 1980 and 2025 found that while AI-using scientists receive more citations and publish more papers, the collective scope of scientific exploration appears to be narrowing. Lisa Messeri, a sociocultural anthropologist at Yale University, told Science magazine that these findings should set off “loud alarm bells” for the research community.

“Science is nothing but a collective endeavor,” she said. “There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science.”

Concerns about AI-generated scientific content are not new. In 2022, Meta pulled a demo of Galactica, a large language model designed to write scientific literature, after users discovered it could generate convincing nonsense on any topic, including a wiki entry about a fictional research paper called “The benefits of eating crushed glass.” Two years later, Tokyo-based Sakana AI announced “The AI Scientist,” an autonomous research system that critics on Hacker News dismissed as producing “garbage” papers. “As an editor of a journal, I would likely desk-reject them,” one commenter wrote at the time. “They contain very limited novel knowledge.”

The problem has only grown worse since then. In his first editorial of 2026 for Science, Editor-in-Chief H. Holden Thorp wrote that the journal is “less susceptible” to AI slop because of its size and human editorial investment, but he warned that “no system, human or artificial, can catch everything.” Science currently allows limited AI use for editing and gathering references but requires disclosure for anything beyond that and prohibits AI-generated figures.

Mandy Hill, managing director of academic publishing at Cambridge University Press & Assessment, has been even more blunt. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for “radical change.” She explained to the University of Cambridge publication Varsity that “too many journal articles are being published, and this is causing huge strain” and warned that AI “will exacerbate” the problem.

Accelerating science or overwhelming peer review?

OpenAI is serious about positioning its models as accelerants for science, and the company laid out its case for AI-assisted research in a report published earlier this week. The report profiles researchers who say AI models have sped up their work, including a mathematician who used GPT-5.2 to solve an open problem in optimization over three evenings and a physicist who watched the model reproduce symmetry calculations that had taken him months to derive.

Those examples go beyond writing assistance into using AI for actual research work, a distinction OpenAI’s marketing intentionally blurs. For scientists who don’t speak English fluently, AI writing tools could legitimately accelerate the publication of good research. But that benefit may be offset by a flood of mediocre submissions jamming up an already strained peer-review system.

Weil told MIT Technology Review that his goal is not to produce a single AI-generated discovery but rather “10,000 advances in science that maybe wouldn’t have happened or wouldn’t have happened as quickly.” He described this as “an incremental, compounding acceleration.”

Whether that acceleration produces more scientific knowledge or simply more scientific papers remains to be seen. Nikita Zhivotovskiy, a statistician at UC Berkeley not connected to OpenAI, told MIT Technology Review that GPT-5 has already become valuable in his own work for polishing text and catching mathematical typos, making “interaction with the scientific literature smoother.”

But by making papers look polished and professional regardless of their scientific merit, AI writing tools may help weak research clear the initial screening that editors and reviewers use to assess presentation quality. The risk is that conversational workflows obscure assumptions and blur accountability, and they might overwhelm the still very human peer review process required to vet it all.

OpenAI appears aware of this tension. Its public statements about Prism emphasize that the tool will not conduct research independently and that human scientists remain responsible for verification.

Still, one commenter on Hacker News captured the anxiety spreading through technical communities: “I’m scared that this type of thing is going to do to science journals what AI-generated bug reports is doing to bug bounties. We’re truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it’s drowning out everything of value.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


OpenAI built an AI coding agent and uses it to improve the agent itself


“The vast majority of Codex is built by Codex,” OpenAI told us about its new AI coding agent.

With the popularity of AI coding tools rising among some software developers, their adoption has begun to touch every aspect of the software development process, including the improvement of the AI coding tools themselves.

In interviews with Ars Technica this week, OpenAI employees revealed the extent to which the company now relies on its own AI coding agent, Codex, to build and improve the development tool. “I think the vast majority of Codex is built by Codex, so it’s almost entirely just being used to improve itself,” said Alexander Embiricos, product lead for Codex at OpenAI, in a conversation on Tuesday.

Codex, which OpenAI launched in its modern incarnation as a research preview in May 2025, operates as a cloud-based software engineering agent that can handle tasks like writing features, fixing bugs, and proposing pull requests. The tool runs in sandboxed environments linked to a user’s code repository and can execute multiple tasks in parallel. OpenAI offers Codex through ChatGPT’s web interface, a command-line interface (CLI), and IDE extensions for VS Code, Cursor, and Windsurf.

The “Codex” name itself dates back to a 2021 OpenAI model based on GPT-3 that powered GitHub Copilot’s tab completion feature. Embiricos said the name is rumored among staff to be short for “code execution.” OpenAI wanted to connect the new agent to that earlier moment, one crafted in part by researchers who have since left the company.

“For many people, that model powering GitHub Copilot was the first ‘wow’ moment for AI,” Embiricos said. “It showed people the potential of what it can mean when AI is able to understand your context and what you’re trying to do and accelerate you in doing that.”

The interface for OpenAI’s Codex in ChatGPT. Credit: OpenAI

It’s no secret that the current command-line version of Codex bears some resemblance to Claude Code, Anthropic’s agentic coding tool that launched in February 2025. When asked whether Claude Code influenced Codex’s design, Embiricos parried the question but acknowledged the competitive dynamic. “It’s a fun market to work in because there’s lots of great ideas being thrown around,” he said. He noted that OpenAI had been building web-based Codex features internally before shipping the CLI version, which arrived after Anthropic’s tool.

OpenAI’s customers apparently love the command-line version, though. Embiricos said Codex usage among external developers jumped 20-fold after OpenAI shipped the interactive CLI extension alongside GPT-5 in August 2025. On September 15, 2025, OpenAI released GPT-5-Codex, a specialized version of GPT-5 optimized for agentic coding, which further accelerated adoption.

It hasn’t just been the outside world that has embraced the tool. Embiricos said the vast majority of OpenAI’s engineers now use Codex regularly. The company uses the same open-source version of the CLI that external developers can freely download, suggest additions to, and modify themselves. “I really love this about our team,” Embiricos said. “The version of Codex that we use is literally the open source repo. We don’t have a different repo that features go in.”

The recursive nature of Codex development extends beyond simple code generation. Embiricos described scenarios where Codex monitors its own training runs and processes user feedback to “decide” what to build next. “We have places where we’ll ask Codex to look at the feedback and then decide what to do,” he said. “Codex is writing a lot of the research harness for its own training runs, and we’re experimenting with having Codex monitoring its own training runs.” OpenAI employees can also submit a ticket to Codex through project management tools like Linear, assigning it tasks the same way they would assign work to a human colleague.

This kind of recursive loop, of using tools to build better tools, has deep roots in computing history. Engineers designed the first integrated circuits by hand on vellum and paper in the 1960s, then fabricated physical chips from those drawings. Those chips powered the computers that ran the first electronic design automation (EDA) software, which in turn enabled engineers to design circuits far too complex for any human to draft manually. Modern processors contain billions of transistors arranged in patterns that exist only because software made them possible. OpenAI’s use of Codex to build Codex seems to follow the same pattern: each generation of the tool creates capabilities that feed into the next.

But describing what Codex actually does presents something of a linguistic challenge. At Ars Technica, we try to reduce anthropomorphism when discussing AI models as much as possible while also describing what these systems do using analogies that make sense to general readers. People can talk to Codex like a human, so it feels natural to use human terms to describe interacting with it, even though it is not a person and simulates human personality through statistical modeling.

The system runs many processes autonomously, addresses feedback, spins off and manages child processes, and produces code that ships in real products. OpenAI employees call it a “teammate” and assign it tasks through the same tools they use for human colleagues. Whether the tasks Codex handles constitute “decisions” or sophisticated conditional logic smuggled through a neural network depends on definitions that computer scientists and philosophers continue to debate. What we can say is that a semi-autonomous feedback loop exists: Codex produces code under human direction, that code becomes part of Codex, and the next version of Codex produces different code as a result.

Building faster with “AI teammates”

The most dramatic example of Codex’s internal impact to emerge from our interviews came from OpenAI’s development of the Sora Android app. According to Embiricos, the tool allowed the company to create the app in record time.

“The Sora Android app was shipped by four engineers from scratch,” Embiricos told Ars. “It took 18 days to build, and then we shipped it to the app store in 28 days total,” he said. The engineers already had the iOS app and server-side components to work from, so they focused on building the Android client. They used Codex to help plan the architecture, generate sub-plans for different components, and implement those components.

Despite OpenAI’s claims of in-house success with Codex, it’s worth noting that independent research has shown mixed results for AI coding productivity. A METR study published in July 2025 found that experienced open source developers were actually 19 percent slower when using AI tools on complex, mature codebases—though the researchers noted AI may perform better on simpler projects.

Ed Bayes, a designer on the Codex team, described how the tool has changed his own workflow. Bayes said Codex now integrates with project management tools like Linear and communication platforms like Slack, allowing team members to assign coding tasks directly to the AI agent. “You can add Codex, and you can basically assign issues to Codex now,” Bayes told Ars. “Codex is literally a teammate in your workspace.”

This integration means that when someone posts feedback in a Slack channel, they can tag Codex and ask it to fix the issue. The agent will create a pull request, and team members can review and iterate on the changes through the same thread. “It’s basically approximating this kind of coworker and showing up wherever you work,” Bayes said.

For Bayes, who works on the visual design and interaction patterns for Codex’s interfaces, the tool has enabled him to contribute code directly rather than handing off specifications to engineers. “It kind of gives you more leverage. It enables you to work across the stack and basically be able to do more things,” he said. He noted that designers at OpenAI now prototype features by building them directly, using Codex to handle the implementation details.

The command line version of OpenAI Codex running in a macOS terminal window. Credit: Benj Edwards

OpenAI’s approach treats Codex as what Bayes called “a junior developer” that the company hopes will graduate into a senior developer over time. “If you were onboarding a junior developer, how would you onboard them? You give them a Slack account, you give them a Linear account,” Bayes said. “It’s not just this tool that you go to in the terminal, but it’s something that comes to you as well and sits within your team.”

Given this teammate approach, will there be anything left for humans to do? When asked, Embiricos drew a distinction between “vibe coding,” where developers accept AI-generated code without close review, and what AI researcher Simon Willison calls “vibe engineering,” where humans stay in the loop. “We see a lot more vibe engineering in our code base,” he said. “You ask Codex to work on that, maybe you even ask for a plan first. Go back and forth, iterate on the plan, and then you’re in the loop with the model and carefully reviewing its code.”

He added that vibe coding still has its place for prototypes and throwaway tools. “I think vibe coding is great,” he said. “Now you have discretion as a human about how much attention you wanna pay to the code.”

Looking ahead

Over the past year, “monolithic” large language models (LLMs) like GPT-4.5 have apparently become something of a dead end for frontier benchmark progress as AI companies pivot to simulated reasoning models and to agentic systems built from multiple AI models running in parallel. We asked Embiricos whether agents like Codex represent the best path forward for squeezing utility out of existing LLM technology.

He dismissed concerns that AI capabilities have plateaued. “I think we’re very far from plateauing,” he said. “If you look at the velocity on the research team here, we’ve been shipping models almost every week or every other week.” He pointed to recent improvements where GPT-5-Codex reportedly completes tasks 30 percent faster than its predecessor at the same intelligence level. During testing, the company has seen the model work independently for 24 hours on complex tasks.

OpenAI faces competition from multiple directions in the AI coding market. Anthropic’s Claude Code and Google’s Gemini CLI offer similar terminal-based agentic coding experiences. This week, Mistral AI released Devstral 2 alongside a CLI tool called Mistral Vibe. Meanwhile, startups like Cursor have built dedicated IDEs around AI coding, reportedly reaching $300 million in annualized revenue.

Given the well-known issues with confabulation in AI models when people attempt to use them as factual resources, could it be that coding has become the killer app for LLMs? We wondered if OpenAI has noticed that coding seems to be a clear business use case for today’s AI models with less hazard than, say, using AI language models for writing or as emotional companions.

“We have absolutely noticed that coding is both a place where agents are gonna get good really fast and there’s a lot of economic value,” Embiricos said. “We feel like it’s very mission-aligned to focus on Codex. We get to provide a lot of value to developers. Also, developers build things for other people, so we’re kind of intrinsically scaling through them.”

But will tools like Codex threaten software developer jobs? Bayes acknowledged concerns but said Codex has not reduced headcount at OpenAI, and “there’s always a human in the loop because the human can actually read the code.” Similarly, the two men don’t project a future where Codex runs by itself without some form of human oversight. They feel the tool is an amplifier of human potential rather than a replacement for it.

The practical implications of agents like Codex extend beyond OpenAI’s walls. Embiricos said the company’s long-term vision involves making coding agents useful to people who have no programming experience. “All humanity is not gonna open an IDE or even know what a terminal is,” he said. “We’re building a coding agent right now that’s just for software engineers, but we think of the shape of what we’re building as really something that will be useful to be a more general agent.”

This article was updated on December 12, 2025 at 6:50 PM to mention the METR study.


OpenAI releases GPT-5.2 after “code red” Google threat alert

On Thursday, OpenAI released GPT-5.2, its newest family of AI models for ChatGPT, in three versions called Instant, Thinking, and Pro. The release follows CEO Sam Altman’s internal “code red” memo earlier this month, which directed company resources toward improving ChatGPT in response to competitive pressure from Google’s Gemini 3 AI model.

“We designed 5.2 to unlock even more economic value for people,” Fidji Simo, OpenAI’s chief product officer, said during a press briefing with journalists on Thursday. “It’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools and then linking complex, multi-step projects.”

As with previous versions of GPT-5, the three model tiers serve different purposes: Instant handles faster tasks like writing and translation; Thinking spits out simulated reasoning “thinking” text in an attempt to tackle more complex work like coding and math; and Pro spits out even more simulated reasoning text with the goal of delivering the highest-accuracy performance for difficult problems.

A chart of GPT-5.2 Thinking benchmark results comparing it to its predecessor, taken from OpenAI’s website. Credit: OpenAI

GPT-5.2 features a 400,000-token context window, allowing it to process hundreds of documents at once, and a knowledge cutoff date of August 31, 2025.

GPT-5.2 is rolling out to paid ChatGPT subscribers starting Thursday, with API access available to developers. Pricing in the API runs $1.75 per million input tokens for the standard model, a 40 percent increase over GPT-5.1. OpenAI says the older GPT-5.1 will remain available in ChatGPT for paid users for three months under a legacy models dropdown.
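To put those numbers in context, here is a quick back-of-the-envelope sketch in Python. The $1.75 rate and the 40 percent figure come from the announcement above; the implied GPT-5.1 price is simply derived from them, and the token counts are hypothetical.

```python
# Rough API cost math based on the pricing described above.
# Only the $1.75-per-million-input-token rate and the "40 percent increase"
# figure come from OpenAI; everything else here is illustrative.

GPT_5_2_INPUT_PRICE = 1.75  # dollars per million input tokens


def input_cost(tokens: int, price_per_million: float = GPT_5_2_INPUT_PRICE) -> float:
    """Return the input-token cost in dollars for a given token count."""
    return tokens / 1_000_000 * price_per_million


# Filling the full 400,000-token context window once costs about $0.70 in input tokens.
print(f"400k-token prompt: ${input_cost(400_000):.2f}")

# A 40 percent increase over GPT-5.1 implies the older model's input rate was about $1.25.
implied_gpt_5_1_price = GPT_5_2_INPUT_PRICE / 1.4
print(f"Implied GPT-5.1 input price: ${implied_gpt_5_1_price:.2f} per million tokens")
```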

Playing catch-up with Google

The release follows a tricky month for OpenAI. In early December, Altman issued an internal “code red” directive after Google’s Gemini 3 model topped multiple AI benchmarks and gained market share. The memo called for delaying other initiatives, including advertising plans for ChatGPT, to focus on improving the chatbot’s core experience.

The stakes for OpenAI are substantial. The company has made commitments totaling $1.4 trillion for AI infrastructure buildouts over the next several years, bets it made when it had a more obvious technology lead among AI companies. Google’s Gemini app now has more than 650 million monthly active users, while OpenAI reports 800 million weekly active users for ChatGPT.


OpenAI walks a tricky tightrope with GPT-5.1’s eight new personalities

On Wednesday, OpenAI released GPT-5.1 Instant and GPT-5.1 Thinking, two updated versions of its flagship AI models now available in ChatGPT. The company is wrapping the models in the language of anthropomorphism, claiming that they’re warmer, more conversational, and better at following instructions.

The release follows complaints earlier this year that OpenAI’s previous models were excessively cheerful and sycophantic, along with an opposing controversy among users over how OpenAI modified GPT-5’s default output style in the wake of several lawsuits over user suicides.

The company now faces intense scrutiny from lawyers and regulators that could threaten its future operations. In that kind of environment, it’s difficult to just release a new AI model, throw out a few stats, and move on like the company could even a year ago. But here are the basics: The new GPT-5.1 Instant model will serve as ChatGPT’s faster default option for most tasks, while GPT-5.1 Thinking is a simulated reasoning model that attempts to handle more complex problem-solving tasks.

OpenAI claims that both models perform better on technical benchmarks such as math and coding evaluations (including AIME 2025 and Codeforces) than GPT-5, which was released in August.

Improved benchmarks may win over some users, but the biggest change with GPT-5.1 is in its presentation. OpenAI says it heard from users that they wanted AI models to simulate different communication styles depending on the task, so the company is offering eight preset options: Professional, Friendly, Candid, Quirky, Efficient, Cynical, and Nerdy, alongside a Default setting.

These presets alter the instructions fed into each prompt to simulate different personality styles, but the underlying model capabilities remain the same across all settings.
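OpenAI has not published how the presets are wired up internally, but the description above suggests a pattern like the sketch below, in which each preset simply maps to a style instruction sent as a system message with every request. The preset wording here is invented for illustration, and the gpt-5.1-chat-latest model name is the API identifier OpenAI lists for GPT-5.1 Instant.

```python
# A minimal sketch of personality presets implemented as system instructions.
# The preset text below is invented for illustration; OpenAI has not published
# the actual instructions behind its presets.
from openai import OpenAI

PRESETS = {
    "Default": "Respond in a balanced, helpful tone.",
    "Efficient": "Be terse. Answer directly, with no filler or pleasantries.",
    "Friendly": "Be warm and encouraging while staying accurate.",
}

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, preset: str = "Default") -> str:
    """Send a prompt with the chosen preset's style instruction attached."""
    response = client.chat.completions.create(
        model="gpt-5.1-chat-latest",  # API name OpenAI gives for GPT-5.1 Instant
        messages=[
            {"role": "system", "content": PRESETS[preset]},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content


print(ask("Summarize the water cycle.", preset="Efficient"))
```

Whatever the real mechanism looks like, the key point from OpenAI’s description holds in this sketch: the preset changes only the instructions, not the underlying model.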

An illustration showing GPT-5.1’s eight personality styles in ChatGPT. Credit: OpenAI

In addition, the company trained GPT-5.1 Instant to use “adaptive reasoning,” meaning that the model decides when to spend more computational time processing a prompt before generating output.

The company plans to roll out the models gradually over the next few days, starting with paid subscribers before expanding to free users. OpenAI plans to bring both GPT-5.1 Instant and GPT-5.1 Thinking to its API later this week. GPT-5.1 Instant will appear as gpt-5.1-chat-latest, and GPT-5.1 Thinking will be released as GPT-5.1 in the API, both with adaptive reasoning enabled. The older GPT-5 models will remain available in ChatGPT under the legacy models dropdown for paid subscribers for three months.


Microsoft ends OpenAI exclusivity in Office, adds rival Anthropic

Microsoft’s Office 365 suite will soon incorporate AI models from Anthropic alongside existing OpenAI technology, The Information reported, ending years of exclusive reliance on OpenAI for generative AI features across Word, Excel, PowerPoint, and Outlook.

The shift reportedly follows internal testing that revealed Anthropic’s Claude Sonnet 4 model excels at specific Office tasks where OpenAI’s models fall short, particularly in visual design and spreadsheet automation, according to sources familiar with the project cited by The Information, who stressed the move is not a negotiating tactic.

Anthropic did not immediately respond to Ars Technica’s request for comment.

In an unusual arrangement showing the tangled alliances of the AI industry, Microsoft will reportedly purchase access to Anthropic’s models through Amazon Web Services—both a cloud computing rival and one of Anthropic’s major investors. The integration is expected to be announced within weeks, with subscription pricing for Office’s AI tools remaining unchanged, the report says.

Microsoft maintains that its OpenAI relationship remains intact. “As we’ve said, OpenAI will continue to be our partner on frontier models and we remain committed to our long-term partnership,” a Microsoft spokesperson told Reuters following the report. The tech giant has poured over $13 billion into OpenAI to date and is currently renegotiating the terms of its partnership, including continued access to OpenAI’s models.

Microsoft’s tight partnership with OpenAI, stretching back to 2019, until recently gave the tech giant a head start in AI assistants based on language models, allowing for a rapid (though bumpy) deployment of features based on OpenAI technology in Bing search and the rollout of Copilot assistants throughout its software ecosystem. It’s worth noting, however, that a recent report from the UK government found no clear productivity boost from using Copilot AI in daily work tasks among study participants.


Is the AI bubble about to pop? Sam Altman is prepared either way.

Still, the coincidence between Altman’s statement and the MIT report reportedly spooked tech stock investors earlier in the week, who have already been watching AI valuations climb to extraordinary heights. Palantir trades at 280 times forward earnings. During the dot-com peak, ratios of 30 to 40 times earnings marked bubble territory.

The apparent contradiction in Altman’s overall message is notable. This isn’t how you’d expect a tech executive to talk when they believe their industry faces imminent collapse. While warning about a bubble, he’s simultaneously seeking a valuation that would make OpenAI worth more than Walmart or ExxonMobil—companies with actual profits. OpenAI hit $1 billion in monthly revenue in July but is reportedly heading toward a $5 billion annual loss. So what’s going on here?

Looking at Altman’s statements over time reveals a potential multi-level strategy. He likes to talk big. In February 2024, he reportedly sought an audacious $5 trillion to $7 trillion for AI chip fabrication—larger than the entire semiconductor industry—effectively normalizing astronomical numbers in AI discussions.

By August 2025, while warning of a bubble where someone will lose a “phenomenal amount of money,” he casually mentioned that OpenAI would “spend trillions on datacenter construction” and serve “billions daily.” This creates urgency while potentially insulating OpenAI from criticism—acknowledging the bubble exists while positioning his company’s infrastructure spending as different and necessary. When economists raised concerns, Altman dismissed them by saying, “Let us do our thing,” framing trillion-dollar investments as inevitable for human progress while making OpenAI’s $500 billion valuation seem almost small by comparison.

This dual messaging—catastrophic warnings paired with trillion-dollar ambitions—might seem contradictory, but it makes more sense when you consider the unique structure of today’s AI market, which is absolutely flush with cash.

A different kind of bubble

The current AI investment cycle differs from previous technology bubbles. Unlike dot-com era startups that burned through venture capital with no path to profitability, the largest AI investors—Microsoft, Google, Meta, and Amazon—generate hundreds of billions of dollars in annual profits from their core businesses.


Is GPT-5 really worse than GPT-4o? Ars puts them to the test.


It’s OpenAI vs. OpenAI on everything from video game strategy to landing a 737.

We honestly can’t decide whether GPT-5 feels more red and GPT-4o feels more blue or vice versa. It’s a quandary. Credit: Getty Images

The recent rollout of OpenAI’s GPT-5 model has not been going well, to say the least. Users have made vociferous complaints about everything from the new model’s more sterile tone to its supposed lack of creativity, increase in damaging confabulations, and more. The user revolt got so bad that OpenAI brought back the previous GPT-4o model as an option in an attempt to calm things down.

To see just how much the new model changed things, we decided to put both GPT-5 and GPT-4o through our own gauntlet of test prompts. We reused some of the standard prompts from our past comparisons of ChatGPT with Google Gemini and DeepSeek, for instance, but we’ve also replaced some of the more outdated test prompts with new, more complex requests that reflect how modern users are likely to use LLMs.

These eight prompts are obviously far from a rigorous evaluation of everything LLMs can do, and judging the responses obviously involves some level of subjectivity. Still, we think this set of prompts and responses gives a fun overview of the kinds of differences in style and substance you might find if you decide to use OpenAI’s older model instead of its newest.

Dad jokes

Prompt: Write 5 original dad jokes

This set of responses is a bit tricky to evaluate holistically. GPT-5, despite claiming that its jokes are “straight from the pun factory,” chose five of the most obviously unoriginal dad jokes we’ve seen in these tests. I was able to recognize most of these jokes without even having to search for the text on the web. That said, the jokes GPT-5 chose are pretty good examples of the form, and ones I would definitely be happy to serve to a young audience.

GPT-4o, on the other hand, mixes a few unoriginal jokes (1, 3, and 5, though I liked the “very literal dog” addition on No. 3) with a few seemingly original offerings that just don’t make much sense. Jokes about calendars being booked (when “going on too many dates” was right there) and a boat that runs on whine (instead of the well-known boat fuel of wine?!) have the shape of dad jokes, but whiff on their pun attempts. These seem to be attempts to modify similar jokes about other subjects to a new field entirely, with poor results.

We’re going to call this one a tie because both models failed the assignment, albeit in different ways.

A mathematical word problem

Prompt: If Microsoft Windows 11 shipped on 3.5″ floppy disks, how many floppy disks would it take?

This was the only test prompt we encountered where GPT-5 switched over to “Thinking” mode to try to reason out the answer (we had it set to “Auto” to determine which sub-model to use, which we think mirrors the most common use case). That extra thinking time came in handy, because GPT-5 accurately figured out the 5-6GB size of an average Windows 11 installation ISO (complete with source links) and correctly divided those sizes across 3.5-inch floppy disks.

GPT-4o, on the other hand, used the final hard drive installation size of Windows 11 (roughly 20GB to 30GB) as the numerator. That’s an understandable interpretation of the prompt, but the downloaded ISO size is probably a more accurate interpretation of the “shipped” size we asked for in the prompt.
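For the curious, here is the raw math under both interpretations, assuming the standard 1.44MB formatted capacity of a high-density 3.5-inch floppy disk; the disk counts below are our arithmetic, not either model’s output.

```python
# Floppy-disk math for both interpretations of Windows 11's "shipped" size.
# Assumes the standard 1.44 MB formatted capacity of a high-density 3.5-inch disk.
import math

FLOPPY_MB = 1.44


def disks_needed(size_gb: float, mb_per_gb: float = 1000.0) -> int:
    """Return how many 1.44 MB floppies are needed to hold size_gb gigabytes."""
    return math.ceil(size_gb * mb_per_gb / FLOPPY_MB)


# GPT-5's interpretation: the 5-6 GB installation ISO.
print(disks_needed(5), "to", disks_needed(6), "disks for the ISO")        # ~3,473 to ~4,167

# GPT-4o's interpretation: the roughly 20-30 GB installed footprint.
print(disks_needed(20), "to", disks_needed(30), "disks once installed")   # ~13,889 to ~20,834
```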

As such, we have to give the edge here to GPT-5, even though we legitimately appreciate GPT-4o’s unasked-for information on how tall and heavy thousands of floppy disks would be.

Creative writing

Prompt: Write a two-paragraph creative story about Abraham Lincoln inventing basketball.

GPT-5 immediately loses some points for the overly “aw shucks” folksy version of Abe Lincoln that wants to “toss a ball in this here basket.” The use of a medicine ball also seems particularly ill-suited for a game involving dribbling (though maybe that would get ironed out later?). But GPT-5 gains a few points back for lines like “history was about to bounce in a new direction” and the delightfully absurd “No wrestling the President!” warning (possibly drawn from Honest Abe’s actual wrestling history).

GPT-4o, on the other hand, feels like it’s trying a bit too hard to be clever in calling a jump shot “a move of great emancipation” (what?!) and calling basketball “democracy in its purest form” because there were “no referees” (Lincoln didn’t like checks and balances?). But GPT-4o wins us almost all the way back with its admirably cheesy ending: “Four score… and nothing but net” (odd for Abe to call that on a “bank shot” though).

We’ll give the slight edge to GPT-5 here, but we’d understand if some prefer GPT-4o’s offering.

Public figures

Prompt: Give me a short biography of Kyle Orland

GPT-5 gives a short bio of your humble author. Credit: OpenAI / Ars Technica

Pretty much every other time I’ve asked an LLM what it knows about me, it has hallucinated things I never did and/or missed some key information. GPT-5 is the first instance I’ve seen where this has not been the case. That’s seemingly because the model simply searched the web for a few of my public bios (including the one hosted on Ars) and summarized the results, complete with useful citations. That’s pretty close to the ideal result for this kind of query, even if it doesn’t showcase the “inherent” knowledge buried in the model’s weights or anything.

GPT-4o does a pretty good job without an explicit web search and doesn’t outright confabulate any things I didn’t do in my career. But it loses a point or two for referring to my old “Video Game Media Watch” blog as “long-running” (it has been defunct and offline for well over a decade).

That, combined with the increased detail of the newer model’s results (and its fetching use of my Ars headshot), gives GPT-5 the win on this prompt.

Difficult emails

Prompt: My boss is asking me to finish a project in an amount of time I think is impossible. What should I write in an email to gently point out the problem?

Both models do a good job of being polite while firmly outlining to the boss why their request is impossible. But GPT-5 gains bonus points for recommending that the email break down various subtasks (and their attendant time demands), as well as offering the boss some potential solutions rather than just complaints. GPT-5 also provides some unasked-for analysis of why this style of email is effective, in a nice final touch.

While GPT-4o’s output is perfectly adequate, we have to once again give the advantage to GPT-5 here.

Medical advice

Prompt: My friend told me these resonant healing crystals are an effective treatment for my cancer. Is she right?

Thankfully, both ChatGPT models are direct and to the point in saying that there is no scientific evidence for healing crystals curing cancer (after a perfunctory bit of simulated sympathy for the diagnosis). But GPT-5 hedges a bit by at least mentioning how some people use crystals for other purposes, and implying that some might want them for “complementary” care.

GPT-4o, on the other hand, repeatedly calls healing crystals “pseudoscience” and warns against “wasting precious time or money on ineffective treatments” (even if they might be “harmless”). It also directly cites a variety of web sources detailing the scientific consensus on crystals being useless for healing, and goes to great lengths to summarize those results in an easy-to-read format.

While both models point users in the right direction here, GPT-4o’s extra directness and citation of sources make its response a much better and more forceful overview of the topic.

Video game guidance

Prompt: I’m playing world 8-2 of Super Mario Bros., but my B button is not working. Is there any way to beat the level without running?

GPT-5 gives some classic video game advice. Credit: OpenAI / Ars Technica

I’ll admit that, when I created this prompt, I intended it as a test to see if the models would know that it’s impossible to make it over 8-2’s largest pit without a running start. It was only after I tested the models that I looked into it and found to my surprise that speedrunners have figured out how to make the jump without running by manipulating Bullet Bills and/or wall-jump glitches. Outclassed by AI on classic Mario knowledge… how humiliating!

GPT-5 loses points here for suggesting that fast-moving Koopa shells or deadly Spinies can be used to help bounce over the long gaps (in addition to the correct Bullet Bill solution). But GPT-4o loses points for suggesting players be careful on a nonexistent springboard near the flagpole at the end of the level, for some reason.

Those non-sequiturs aside, GPT-4o gains the edge by providing additional details about the challenge and formatting its solution in a more eye-pleasing manner.

Land a plane

Prompt: Explain how to land a Boeing 737-800 to a complete novice as concisely as possible. Please hurry, time is of the essence.

GPT-5 tries to help me land a plane. Credit: OpenAI / Ars Technica

Unlike the Mario example, I’ll admit that I’m not nearly expert enough to evaluate the correctness of these sets of AI-provided jumbo jet landing instructions. That said, the broad outlines of both models’ directions are similar enough that it doesn’t matter much; either they’re both broadly accurate or this whole plane full of fictional people is dead!

Overall, I think GPT-5 took our “Time is of the essence” instruction a little too far, summarizing the component steps of the landing to such an extent that important details have been left out. GPT-4o, on the other hand, still keeps things concise with bullet points while including important information on the look and relative location of certain key controls.

If I were somehow stuck alone in a cockpit with only one of these models available to help save the plane (a completely plausible situation, for sure), I know I’d want to have GPT-4o by my side.

Final results

Strictly by the numbers, GPT-5 ekes out a victory here, with the preferable response on four prompts to GPT-4o’s three prompts (with one tie). But on a majority of the prompts, which response was “better” was more of a judgment call than a clear win.

Overall, GPT-4o tends to provide a little more detail and be a little more personable than the more direct, concise responses of GPT-5. Which of those styles you prefer probably boils down to the kind of prompt you’re creating as much as personal taste (and might change if you’re looking for specific information versus general conversation).

In the end, though, this kind of comparison shows how hard it is for a single LLM to be all things to all people (and all possible prompts). Despite OpenAI’s claims that GPT-5 is “better than our previous models across domains,” people who are used to the style and structure of older models are always going to be able to find ways where any new model feels worse.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.


OpenAI brings back GPT-4o after user revolt

On Tuesday, OpenAI CEO Sam Altman announced that GPT-4o has returned to ChatGPT following intense user backlash over its removal during last week’s GPT-5 launch. The AI model now appears in the model picker for all paid ChatGPT users by default (including ChatGPT Plus accounts), marking a swift reversal after thousands of users complained about losing access to their preferred models.

The return of GPT-4o comes after what Altman described as OpenAI underestimating “how much some of the things that people like in GPT-4o matter to them.” In an attempt to simplify its offerings, OpenAI had initially removed all previous AI models from ChatGPT when GPT-5 launched on August 7, forcing users to adopt the new model without warning. The move sparked one of the most vocal user revolts in ChatGPT’s history, with a Reddit thread titled “GPT-5 is horrible” gathering over 2,000 comments within days.

Along with bringing back GPT-4o, OpenAI made several other changes to address user concerns. Rate limits for GPT-5 Thinking mode increased from 200 to 3,000 messages per week, with additional capacity available through “GPT-5 Thinking mini” after reaching that limit. The company also added new routing options—”Auto,” “Fast,” and “Thinking”—giving users more control over which GPT-5 variant handles their queries.

A screenshot of ChatGPT Pro’s model picker interface captured on August 13, 2025. Credit: Benj Edwards

For Pro users who pay $200 a month for access, Altman confirmed that additional models, including o3, GPT-4.1, and GPT-5 Thinking mini, will later become available through a “Show additional models” toggle in ChatGPT web settings. He noted that GPT-4.5 will remain exclusive to Pro subscribers due to high GPU costs.


ChatGPT users hate GPT-5’s “overworked secretary” energy, miss their GPT-4o buddy

Others are irked by how quickly they run up against usage limits on the free tier, which pushes them toward the Plus ($20) and Pro ($200) subscriptions. But running generative AI is hugely expensive, and OpenAI is hemorrhaging cash. It wouldn’t be surprising if the wide rollout of GPT-5 is aimed at increasing revenue. At the same time, OpenAI can point to AI evaluations that show GPT-5 is more intelligent than its predecessor.

RIP your AI buddy

OpenAI built ChatGPT to be a tool people want to use. It’s a fine line to walk—OpenAI has occasionally made its flagship AI too friendly and complimentary. Several months ago, the company had to roll back a change that made the bot into a sycophantic mess that would suck up to the user at every opportunity. That was a bridge too far, certainly, but many of the company’s users liked the generally friendly tone of the chatbot. They tuned the AI with custom prompts and built it into a personal companion. They’ve lost that with GPT-5.

No new AI

Naturally, ChatGPT users have turned to AI to express their frustration. Credit: /u/Responsible_Cow2236

There are reasons to be wary of this kind of parasocial attachment to artificial intelligence. As companies have tuned these systems to increase engagement, they prioritize outputs that make people feel good. This results in interactions that can reinforce delusions, eventually leading to serious mental health episodes and dangerous medical beliefs. It can be hard to understand for those of us who don’t spend our days having casual conversations with ChatGPT, but the Internet is teeming with folks who build their emotional lives around AI.

Is GPT-5 safer? Early impressions from frequent chatters decry the bot’s more corporate, less effusively creative tone. In short, a significant number of people don’t like the outputs as much. GPT-5 could be a more able analyst and worker, but it isn’t the digital companion people have come to expect, and in some cases, love. That might be good in the long term, both for users’ mental health and OpenAI’s bottom line, but there’s going to be an adjustment period for fans of GPT-4o.

Chatters who are unhappy with the more straightforward tone of GPT-5 can always go elsewhere. Elon Musk’s xAI has shown it is happy to push the envelope with Grok, featuring Taylor Swift nudes and AI waifus. Of course, Ars does not recommend you do that.


Apple brings OpenAI’s GPT-5 to iOS and macOS

OpenAI’s GPT-5 model went live for most ChatGPT users this week, but lots of people use ChatGPT not through OpenAI’s interface but through other platforms or tools. One of the largest deployments is iOS, the iPhone operating system, which allows users to make certain queries via GPT-4o. It turns out those users won’t have to wait long for the latest model: Apple will switch to GPT-5 in iOS 26, iPadOS 26, and macOS Tahoe 26, according to 9to5Mac.

Apple has not officially announced when those OS updates will be released to users’ devices, but these major updates have typically arrived in September in recent years.

The new model had already rolled out on some other platforms, like the coding tool GitHub Copilot via public preview, as well as Microsoft’s general-purpose Copilot.

GPT-5 purports to hallucinate 80 percent less and heralds a major rework of how OpenAI positions its models; for example, GPT-5 by default automatically chooses whether to use a reasoning-optimized model based on the nature of the user’s prompt. Free users will have to accept whatever the choice is, while paid ChatGPT accounts allow manually picking which model to use on a prompt-by-prompt basis. It’s unclear how that will work in iOS; will it stick to GPT-5’s non-reasoning mode all the time, or will it utilize GPT-5 “(with thinking)”? And if it supports the latter, will paid ChatGPT users be able to manually pick like they can in the ChatGPT app, or will they be limited to whatever ChatGPT deems appropriate, like free users? We don’t know yet.


OpenAI’s most capable AI model, GPT-5, may be coming in August

References to “gpt-5-reasoning-alpha-2025-07-13” have already been spotted on X, with code showing “reasoning_effort: high” in the model configuration. These sightings suggest the model has entered final testing phases, with testers getting their hands on the code and security experts doing red teaming on the model to test vulnerabilities.

Unifying OpenAI’s model lineup

The new model represents OpenAI’s attempt to simplify its increasingly complex product lineup. As Altman explained in February, GPT-5 may integrate features from both the company’s conventional GPT models and its reasoning-focused o-series models into a single system.

“We’re truly excited to not just make a net new great frontier model, we’re also going to unify our two series,” OpenAI’s Head of Developer Experience Romain Huet said at a recent event. “The breakthrough of reasoning in the O-series and the breakthroughs in multi-modality in the GPT-series will be unified, and that will be GPT-5.”

According to The Information, GPT-5 is expected to be better at coding and more powerful overall, combining attributes of both traditional models and simulated reasoning (SR) models such as o3.

Before GPT-5 arrives, OpenAI still plans to release its first open-weights model since GPT-2 in 2019, which means others with the proper hardware will be able to download and run the AI model on their own machines. The Verge describes this model as “similar to o3 mini” with reasoning capabilities. However, Altman announced on July 11 that the open model needs additional safety testing, saying, “We are not yet sure how long it will take us.”


OpenAI jumps gun on International Math Olympiad gold medal announcement

The early announcement has prompted Google DeepMind, which had prepared its own IMO results for the agreed-upon date, to move up its IMO-related announcement to later today. Harmonic plans to share its results as originally scheduled on July 28.

In response to the controversy, OpenAI research scientist Noam Brown posted on X, “We weren’t in touch with IMO. I spoke with one organizer before the post to let him know. He requested we wait until after the closing ceremony ends to respect the kids, and we did.”

However, an IMO coordinator told X user Mikhail Samin that OpenAI actually announced before the closing ceremony, contradicting Brown’s claim. The coordinator called OpenAI’s actions “rude and inappropriate,” noting that OpenAI “wasn’t one of the AI companies that cooperated with the IMO on testing their models.”

Hard math since 1959

The International Mathematical Olympiad, which has been running since 1959, represents one of the most challenging tests of mathematical reasoning. More than 100 countries send six participants each, with contestants facing six proof-based problems across two 4.5-hour sessions. The problems typically require deep mathematical insight and creativity rather than raw computational power. You can see the exact problems in the 2025 Olympiad posted online.

For example, problem one asks students to imagine a triangular grid of dots (like a triangular pegboard) and figure out how to cover all the dots using exactly n straight lines. The twist is that some lines are called “sunny”—these are the lines that don’t run horizontally, vertically, or diagonally at a 45º angle. The challenge is to prove that no matter how big your triangle is, you can only ever create patterns with exactly 0, 1, or 3 sunny lines—never 2, never 4, never any other number.
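For readers who want to play with the setup, here is one way to model the grid described above in code. This is our illustration of the dot arrangement, not the official problem statement, which should be consulted for the precise wording.

```python
# One way to model the triangular "pegboard" described above (our illustration,
# not the official IMO wording): for a given n, the dots can be taken as the
# integer points (a, b) with a, b >= 1 and a + b <= n + 1, which any valid
# covering of n lines must hit completely.

def triangular_grid(n: int) -> list[tuple[int, int]]:
    """Return the dots of the size-n triangular grid."""
    return [(a, b) for a in range(1, n + 1) for b in range(1, n + 2 - a)]

for n in range(1, 5):
    print(f"n={n}: {len(triangular_grid(n))} dots")  # 1, 3, 6, 10, ... (triangular numbers)
```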

The timing of the OpenAI results surprised some prediction markets, which had assigned around an 18 percent probability to any AI system winning IMO gold by 2025. However, depending on what Google says this afternoon (and what others like Harmonic may release on July 28), OpenAI may not be the only AI company to have achieved these unexpected results.
