

M4 Max and M3 Ultra Mac Studio Review: A weird update, but it mostly works

Comparing the M4 Max and M3 Ultra to high-end PC desktop processors.

As for the Intel and AMD comparisons, both companies’ best high-end desktop CPUs like the Ryzen 9 9950X and Core Ultra 285K are often competitive with the M4 Max’s multi-core performance, but are dramatically less power-efficient at their default settings.

Mac Studio or M4 Pro Mac mini?

The Mac Studio (bottom) and redesigned M4 Mac mini. Credit: Andrew Cunningham

Ever since Apple beefed up the Mac mini with Pro-tier chips, there’s been a pricing overlap around and just over $2,000 where the mini and the Studio are both compelling.

A $2,000 Mac mini comes with a fully enabled M4 Pro processor (14 CPU cores, 20 GPU cores), 512GB of storage, and 48GB of RAM, with 64GB of RAM available for another $200 and 10 gigabit Ethernet available for another $100. RAM is the high-end Mac mini’s main advantage over the Studio—the $1,999 Studio comes with a slightly cut-down M4 Max (also 14 CPU cores, but 32 GPU cores), 512GB of storage, and just 36GB of RAM.

In general, if you’re spending $2,000 on a Mac desktop, I would lean toward the Studio rather than the mini. You’re getting roughly the same CPU but a much faster GPU and more ports. You get less RAM, but depending on what you’re doing, there’s a good chance that 36GB is more than enough.

The only place where the mini is clearly better than the Studio once you're above $2,000 is memory. If you want 64GB of RAM in your Mac, you can get it in the Mac mini for $2,200. The cheapest Mac Studio with 64GB of RAM also requires a processor upgrade, bringing the total cost to $2,700. If you need memory more than you need raw performance, or if you just need something that's as small as it can possibly be, that's when the high-end mini can still make sense.

A lot of power—if you need it

Apple’s M4 Max Mac Studio. Credit: Andrew Cunningham

Obviously, Apple’s hermetically sealed desktop computers have some downsides compared to a gaming or workstation PC, most notably that you need to throw out and replace the whole thing any time you want to upgrade literally any component.



Why extracting data from PDFs is still a nightmare for data experts


Optical Character Recognition

Countless digital documents hold valuable info, and the AI industry is attempting to set it free.

For years, businesses, governments, and researchers have struggled with a persistent problem: How to extract usable data from Portable Document Format (PDF) files. These digital documents serve as containers for everything from scientific research to government records, but their rigid formats often trap the data inside, making it difficult for machines to read and analyze.

“Part of the problem is that PDFs are a creature of a time when print layout was a big influence on publishing software, and PDFs are more of a ‘print’ product than a digital one,” Derek Willis, a lecturer in Data and Computational Journalism at the University of Maryland, wrote in an email to Ars Technica. “The main issue is that many PDFs are simply pictures of information, which means you need Optical Character Recognition software to turn those pictures into data, especially when the original is old or includes handwriting.”

Computational journalism is a field where traditional reporting techniques merge with data analysis, coding, and algorithmic thinking to uncover stories that might otherwise remain hidden in large datasets, which makes unlocking that data a particular interest for Willis.

The PDF challenge also represents a significant bottleneck in the world of data analysis and machine learning at large. According to several studies, approximately 80–90 percent of the world’s organizational data is stored as unstructured data in documents, much of it locked away in formats that resist easy extraction. The problem worsens with two-column layouts, tables, charts, and scanned documents with poor image quality.

The inability to reliably extract data from PDFs affects numerous sectors but hits hardest in areas that rely heavily on documentation and legacy records, including digitizing scientific research, preserving historical documents, streamlining customer service, and making technical literature more accessible to AI systems.

“It is a very real problem for almost anything published more than 20 years ago and in particular for government records,” Willis says. “That impacts not just the operation of public agencies like the courts, police, and social services but also journalists, who rely on those records for stories. It also forces some industries that depend on information, like insurance and banking, to invest time and resources in converting PDFs into data.”

A very brief history of OCR

Traditional optical character recognition (OCR) technology, which converts images of text into machine-readable text, has been around since the 1970s. Inventor Ray Kurzweil pioneered the commercial development of OCR systems, including the Kurzweil Reading Machine for the blind in 1976, which relied on pattern-matching algorithms to identify characters from pixel arrangements.

These traditional OCR systems typically work by identifying patterns of light and dark pixels in images, matching them to known character shapes, and outputting the recognized text. While effective for clear, straightforward documents, these pattern-matching systems, a form of AI themselves, often falter when faced with unusual fonts, multiple columns, tables, or poor-quality scans.
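The pattern-matching approach described above can be sketched with a toy example. This is an illustration only, not production OCR: the tiny 5×3 binary "templates" are invented for this sketch, and real systems use far richer features, but the core idea of scoring a glyph bitmap against stored character shapes is the same.

```python
# Toy pattern-matching OCR: compare a glyph bitmap against stored
# templates and pick the character whose template shares the most pixels.
# The 5x3 binary templates below are invented for illustration.

TEMPLATES = {
    "1": ["010",
          "110",
          "010",
          "010",
          "111"],
    "7": ["111",
          "001",
          "010",
          "010",
          "010"],
}

def match_glyph(glyph):
    """Return the character whose template shares the most pixels with glyph."""
    def score(template):
        return sum(
            t == g
            for trow, grow in zip(template, glyph)
            for t, g in zip(trow, grow)
        )
    return max(TEMPLATES, key=lambda ch: score(TEMPLATES[ch]))

# A slightly noisy "1" (one stray pixel) still matches the right template.
noisy_one = ["010",
             "110",
             "011",   # stray pixel
             "010",
             "111"]
print(match_glyph(noisy_one))  # "1"
```

This also illustrates why such systems fail the way they do: an unusual font or a bad scan simply produces a closest-but-wrong template match, which is the predictable kind of error the article describes.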

Traditional OCR persists in many workflows precisely because its limitations are well-understood—it makes predictable errors that can be identified and corrected, offering a reliability that sometimes outweighs the theoretical advantages of newer AI-based solutions. But now that transformer-based large language models (LLMs) are getting the lion’s share of funding dollars, companies are increasingly turning to them for a new approach to reading documents.

The rise of AI language models in OCR

Unlike traditional OCR methods that follow a rigid sequence of identifying characters based on pixel patterns, multimodal LLMs that can read documents are trained on text and images that have been translated into chunks of data called tokens and fed into large neural networks. Vision-capable LLMs from companies like OpenAI, Google, and Meta analyze documents by recognizing relationships between visual elements and understanding contextual cues.

The "visual" image-based method is how ChatGPT reads a PDF file, for example, if you upload it through the AI assistant interface. It's a fundamentally different approach from standard OCR, one that allows these models to process documents more holistically, considering both visual layout and text content simultaneously.

And as it turns out, some LLMs from certain vendors are better at this task than others.

“The LLMs that do well on these tasks tend to behave in ways that are more consistent with how I would do it manually,” Willis said. He noted that some traditional OCR methods are quite good, particularly Amazon’s Textract, but that “they also are bound by the rules of their software and limitations on how much text they can refer to when attempting to recognize an unusual pattern.” Willis added, “With LLMs, I think you trade that for an expanded context that seems to help them make better predictions about whether a digit is a three or an eight, for example.”
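The three-or-eight advantage Willis describes can be made concrete with a toy example (the numbers here are invented, not from the article): a character-by-character OCR engine sees one smudged glyph in isolation, while wider context, such as a row's printed total, can settle which digit it must be.

```python
# Toy illustration of context-based disambiguation: a smudged digit,
# marked "?", is resolved by checking which candidate makes the row
# sum to its stated total. All values here are invented for the sketch.

def resolve_ambiguous_digit(values, total, candidates=("3", "8")):
    """Pick whichever candidate digit makes the row sum to its stated total."""
    for digit in candidates:
        trial = [int(v.replace("?", digit)) for v in values]
        if sum(trial) == total:
            return digit
    return None  # no candidate fits; flag the cell for human review

# "1?2" is a smudged cell in a row whose printed total is 315.
print(resolve_ambiguous_digit(["120", "1?2", "13"], total=315))  # "8"
```

An LLM doesn't run an explicit check like this, of course; the point is only that its predictions are conditioned on the surrounding text, which lets this kind of constraint influence the answer.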

This context-based approach enables these models to better handle complex layouts, interpret tables, and distinguish between document elements like headers, captions, and body text—all tasks that traditional OCR solutions struggle with.

“[LLMs] aren’t perfect and sometimes require significant intervention to do the job well, but the fact that you can adjust them at all [with custom prompts] is a big advantage,” Willis said.

New attempts at LLM-based OCR

As the demand for better document-processing solutions grows, new AI players are entering the market with specialized offerings. One recent entrant in particular has caught the attention of document-processing specialists.

Mistral, a French AI company known for its smaller LLMs, recently entered the LLM-powered optical reader space with Mistral OCR, a specialized API designed for document processing. According to Mistral’s materials, their system aims to extract text and images from documents with complex layouts by using its language model capabilities to process document elements.


However, these promotional claims don’t always match real-world performance, according to recent tests. “I’m typically a pretty big fan of the Mistral models, but the new OCR-specific one they released last week really performed poorly,” Willis noted.

“A colleague sent this PDF and asked if I could help him parse the table it contained,” says Willis. “It’s an old document with a table that has some complex layout elements. The new [Mistral] OCR-specific model really performed poorly, repeating the names of cities and botching a lot of the numbers.”

AI app developer Alexander Doria also recently pointed out on X a flaw with Mistral OCR’s ability to understand handwriting, writing, “Unfortunately Mistral-OCR has still the usual VLM curse: with challenging manuscripts, it hallucinates completely.”

According to Willis, Google currently leads the field in AI models that can read documents: “Right now, for me the clear leader is Google’s Gemini 2.0 Flash Pro Experimental. It handled the PDF that Mistral did not with a tiny number of mistakes, and I’ve run multiple messy PDFs through it with success, including those with handwritten content.”

Gemini’s performance stems largely from its ability to process expansive documents (in a type of short-term memory called a “context window”), which Willis specifically notes as a key advantage: “The size of its context window also helps, since I can upload large documents and work through them in parts.” This capability, combined with more robust handling of handwritten content, apparently gives Google’s model a practical edge over competitors in real-world document-processing tasks for now.

The drawbacks of LLM-based OCR

Despite their promise, LLMs introduce several new problems to document processing. Among them, they can introduce confabulations or hallucinations (plausible-sounding but incorrect information), accidentally follow instructions in the text (thinking they are part of a user prompt), or just generally misinterpret the data.

“The biggest [drawback] is that they are probabilistic prediction machines and will get it wrong in ways that aren’t just ‘that’s the wrong word’,” Willis explains. “LLMs will sometimes skip a line in larger documents where the layout repeats itself, I’ve found, where OCR isn’t likely to do that.”

AI researcher and data journalist Simon Willison identified several critical concerns of using LLMs for OCR in a conversation with Ars Technica. "I still think the biggest challenge is the risk of accidental instruction following," he says, always wary of prompt injections (in this case accidental) that might feed nefarious or contradictory instructions to an LLM.

“That and the fact that table interpretation mistakes can be catastrophic,” Willison adds. “In the past I’ve had lots of cases where a vision LLM has matched up the wrong line of data with the wrong heading, which results in absolute junk that looks correct. Also that thing where sometimes if text is illegible a model might just invent the text.”
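The "junk that looks correct" failure mode Willison describes is insidious precisely because the output is well-formed. One mitigation, sketched below under assumptions not taken from the article (the column names and sample rows are invented), is to run cheap structural checks on LLM-extracted table data before it reaches analysis.

```python
# A minimal sketch of post-extraction sanity checks: verify each extracted
# row matches the expected header and that numeric columns actually parse.
# Column names ("city", "amount") and sample data are invented examples.

def validate_rows(rows, expected_columns, numeric_columns=("amount",)):
    """Flag extracted rows whose shape or cell types look wrong."""
    problems = []
    for i, row in enumerate(rows):
        if set(row) != set(expected_columns):
            problems.append((i, "columns do not match the source header"))
            continue
        for col in numeric_columns:
            try:
                float(str(row[col]).replace(",", ""))
            except ValueError:
                problems.append((i, f"non-numeric {col}: {row[col]!r}"))
    return problems

rows = [
    {"city": "Raleigh", "amount": "1,204"},
    {"city": "Durham", "amount": "3B"},   # illegible digit invented by the model
    {"city": "Cary", "total": "902"},     # value matched to the wrong heading
]
print(validate_rows(rows, expected_columns=["city", "amount"]))
```

Checks like these won't catch a plausible-but-wrong digit, which is why human oversight remains necessary, but they do catch misaligned headings and invented non-numeric text cheaply.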

These issues become particularly troublesome when processing financial statements, legal documents, or medical records, where a mistake might put someone’s life in danger. The reliability problems mean these tools often require careful human oversight, limiting their value for fully automated data extraction.

The path forward

Even in our seemingly advanced age of AI, there is still no perfect OCR solution. The race to unlock data from PDFs continues, with companies like Google now offering context-aware generative AI products. Some of the motivation for unlocking PDFs among AI companies, as Willis observes, doubtless involves potential training data acquisition: “I think Mistral’s announcement is pretty clear evidence that documents—not just PDFs—are a big part of their strategy, exactly because it will likely provide additional training data.”

Whether it benefits AI companies with training data or historians analyzing a historical census, as these technologies improve, they may unlock repositories of knowledge currently trapped in digital formats designed primarily for human consumption. That could lead to a new golden age of data analysis—or a field day for hard-to-spot mistakes, depending on the technology used and how blindly we trust it.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



Gmail gains Gemini-powered “Add to calendar” button

Google has a new mission in the AI era: to add Gemini to as many of the company’s products as possible. We’ve already seen Gemini appear in search results, text messages, and more. In Google’s latest update to Workspace, Gemini will be able to add calendar appointments from Gmail with a single click. Well, assuming Gemini gets it right the first time, which is far from certain.

The new calendar button will appear at the top of emails, right next to the summarize button that arrived last year. The calendar option will show up in Gmail threads with actionable meeting chit-chat, allowing you to mash that button to create an appointment in one step. The Gemini sidebar will open to confirm the appointment was made, which is a good opportunity to double-check the robot. There will be a handy edit button in the Gemini window in the event it makes a mistake. However, the robot can’t invite people to these events yet.

The effect of using the button is the same as opening the Gemini panel and asking it to create an appointment. The new functionality is simply detecting events and offering the button as a shortcut of sorts. You should not expect to see this button appear on messages that already have calendar integration, like dining reservations and flights. Those already pop up in Google Calendar without AI.



Firmware update bricks HP printers, makes them unable to use HP cartridges

HP, along with other printer brands, is infamous for issuing firmware updates that brick already-purchased printers that have tried to use third-party ink. In a new form of frustration, HP is now being accused of issuing a firmware update that broke customers’ laser printers—even though the devices are loaded with HP-brand toner.

The firmware update in question is version 20250209, which HP issued on March 4 for its LaserJet MFP M232-M237 models. Per HP, the update includes “security updates,” a “regulatory requirement update,” “general improvements and bug fixes,” and fixes for IPP Everywhere. Looking back to older updates’ fixes and changes, which the new update includes, doesn’t reveal anything out of the ordinary. The older updates mention things like “fixed print quality to ensure borders are not cropped for certain document types,” and “improved firmware update and cartridge rejection experiences.” But there’s no mention of changes to how the printers use or read toner.

However, users have been reporting sudden problems using HP-brand toner in their M232–M237 series printers since their devices updated to 20250209. Users on HP’s support forum say they see Error Code 11 and the hardware’s toner light flashing when trying to print. Some said they’ve cleaned the contacts and reinstalled their toner but still can’t print.

“Insanely frustrating because it’s my small business printer and just stopped working out of nowhere[,] and I even replaced the tone[r,] which was a $60 expense,” a forum user wrote on March 8.



“They curdle like milk”: WB DVDs from 2006–2008 are rotting away in their cases

Although digital media has surpassed physical media in popularity, there are still plenty of reasons for movie buffs and TV fans to hold onto, and even continue buying, DVDs. With physical media, owners are assured that they’ll always be able to play their favorite titles, so long as they take care of their discs. While digital copies are sometimes abruptly ripped away from viewers, physical media owners don’t have to worry about a corporation ruining their Friday night movie plans. At least, that’s what we thought.

It turns out that if your DVD collection includes titles distributed by Warner Bros. Home Entertainment, the home movie distribution arm of Warner Bros. Discovery (WBD), you may one day open up the box to find a case of DVD rot.

Recently, Chris Bumbray, editor-in-chief of movie news and reviews site JoBlo, detailed what would be a harrowing experience for any film collector. He said he recently tried to play his Passage to Marseille DVD, but “after about an hour, the disc simply stopped working.” He said “the same thing happened” with Across the Pacific. Bumbray bought a new DVD player but still wasn’t able to play his Desperate Journey disc. The latter case was especially alarming because, like a lot of classic films and shows, the title isn’t available as a digital copy.

DVDs, if taken care of properly, should last for anywhere from 30 to 100 years. It turned out that the problems Bumbray had weren't due to a DVD player or poor DVD maintenance. In a statement to JoBlo shared on Tuesday, WBD confirmed widespread complaints about DVDs manufactured between 2006 and 2008. The statement said:

Warner Bros. Home Entertainment is aware of potential issues affecting select DVD titles manufactured between 2006 – 2008, and the company has been actively working with consumers to replace defective discs.

Where possible, the defective discs have been replaced with the same title. However, as some of the affected titles are no longer in print or the rights have expired, consumers have been offered an exchange for a title of like-value.

Consumers with affected product can contact the customer support team at whv@wbd.com.

Collectors have known about this problem for years

It’s helpful that WBD recently provided some clarity about this situation, but its statement to JoBlo appears to be the first time the company has publicly acknowledged the disc problems. This is despite DVD collectors lamenting early onset disc rot for years, including via YouTube and online forums.



Bad vibes? Google may have screwed up haptics in the new Pixel Drop update

The unexpected appearance of notification cooldown, along with smaller changes to haptics globally, could be responsible for the complaints. Maybe this is working as intended and Pixel owners are just caught off guard; or maybe Google broke something. It wouldn’t be the first time.


The unexpected appearance of Notification Cooldown in the update might have something to do with the reports—it’s on by default.

Credit: Ryan Whitwam


In 2022, Google released an update that weakened haptic feedback on the Pixel 6, making it so soft that people were missing calls. Google released a fix for the problem a few weeks later. If there’s something wrong with the new Pixel Drop, it’s a more subtle problem. People can’t even necessarily explain how it’s different, but most seem to agree that it is.

After testing several Pixel phones both before and after the update, we believe there may be some truth to the complaints. The length and intensity of haptic notification feedback feel different on a Pixel 9 Pro XL post-update, but our Pixel 9 Pro feels the same after installing the Pixel Drop. The different models may simply have been tuned differently in the update, or there could be a bug involved. We've reached out to Google to ask about this possible issue and have been told the Pixel team is actively investigating the reports.

Updated on 3/7/2025 with comment from Google. 



AMD says top-tier Ryzen 9900X3D and 9950X3D CPUs arrive March 12 for $599 and $699

Like the 7950X3D and 7900X3D, these new X3D chips combine a pair of AMD’s CPU chiplets, one that has the extra 64MB of cache stacked underneath it and one that doesn’t. For the 7950X3D, you get eight cores with extra cache and eight without; for the 7900X3D, you get eight cores with extra cache and four without.

It’s up to AMD’s chipset software to decide what kinds of apps get to run on each kind of CPU core. Non-gaming workloads prioritize the normal CPU cores, which are generally capable of slightly higher peak clock speeds, while games that benefit disproportionately from the extra cache are run on those cores instead. AMD’s software can “park” the non-V-Cache CPU cores when you’re playing games to ensure they’re not accidentally being run on less-suitable CPU cores.

We didn't have issues with this core parking technology when we initially tested the 7950X3D and 7900X3D, and AMD has steadily made improvements since then to make sure that core parking is working properly. The new 9000-series X3D chips should benefit from that work, too. To get the best results, AMD officially recommends a fresh and fully updated Windows install, along with the newest BIOS for your motherboard and the newest AMD chipset drivers; swapping out another Ryzen CPU for an X3D model (or vice versa) without reinstalling Windows can occasionally lead to cores being parked when they shouldn't be (or not parked when they should be).



What does “PhD-level” AI mean? OpenAI’s rumored $20,000 agent plan explained.

On the Frontier Math benchmark by EpochAI, o3 solved 25.2 percent of problems, while no other model has exceeded 2 percent—suggesting a leap in mathematical reasoning capabilities over the previous model.

Benchmarks vs. real-world value

Ideally, potential applications for a true PhD-level AI model would include analyzing medical research data, supporting climate modeling, and handling routine aspects of research work.

The high price points reported by The Information, if accurate, suggest that OpenAI believes these systems could provide substantial value to businesses. The publication notes that SoftBank, an OpenAI investor, has committed to spending $3 billion on OpenAI’s agent products this year alone—indicating significant business interest despite the costs.

Meanwhile, OpenAI faces financial pressures that may influence its premium pricing strategy. The company reportedly lost approximately $5 billion last year covering operational costs and other expenses related to running its services.

News of OpenAI's stratospheric pricing plans comes after years of relatively affordable AI services that have conditioned users to expect powerful capabilities at relatively low costs. ChatGPT Plus remains $20 per month and Claude Pro costs $30 monthly—both tiny fractions of these proposed enterprise tiers. Even ChatGPT Pro's $200/month subscription is relatively small compared to the new proposed fees. Whether the performance difference between these tiers will match their thousandfold price difference is an open question.

Despite their benchmark performances, these simulated reasoning models still struggle with confabulations—instances where they generate plausible-sounding but factually incorrect information. This remains a critical concern for research applications where accuracy and reliability are paramount. A $20,000 monthly investment raises questions about whether organizations can trust these systems not to introduce subtle errors into high-stakes research.

In response to the news, several people quipped on social media that companies could hire an actual PhD student for much cheaper. “In case you have forgotten,” wrote xAI developer Hieu Pham in a viral tweet, “most PhD students, including the brightest stars who can do way better work than any current LLMs—are not paid $20K / month.”

While these systems show strong capabilities on specific benchmarks, the “PhD-level” label remains largely a marketing term. These models can process and synthesize information at impressive speeds, but questions remain about how effectively they can handle the creative thinking, intellectual skepticism, and original research that define actual doctoral-level work. On the other hand, they will never get tired or need health insurance, and they will likely continue to improve in capability and drop in cost over time.



iPhone 16e review: The most expensive cheap iPhone yet


The iPhone 16e rethinks—and prices up—the basic iPhone.


The iPhone 16e, with a notch and an Action Button. Credit: Samuel Axon


For a long time, the cheapest iPhones were basically just iPhones that were older than the current flagship, but last week’s release of the $600 iPhone 16e marks a big change in how Apple is approaching its lineup.

Rather than a repackaging of an old iPhone, the 16e is the latest main iPhone—that is, the iPhone 16—with a bunch of stuff stripped away.

There are several potential advantages to this change. In theory, it allows Apple to support its lower-end offerings for longer with software updates, and it gives entry-level buyers access to more current technologies and features. It also simplifies the marketplace of accessories and the like.

There’s bad news, too, though: Since it replaces the much cheaper iPhone SE in Apple’s lineup, the iPhone 16e significantly raises the financial barrier to entry for iOS (the SE started at $430).

We spent a few days trying out the 16e and found that it’s a good phone—it’s just too bad it’s a little more expensive than the entry-level iPhone should ideally be. In many ways, this phone solves more problems for Apple than it does for consumers. Let’s explore why.


A beastly processor for an entry-level phone

Like the 16, the 16e has Apple’s A18 chip, the most recent in the made-for-iPhone line of Apple-designed chips. There’s only one notable difference: This variation of the A18 has just four GPU cores instead of five. That will show up in benchmarks and in a handful of 3D games, but it shouldn’t make too much of a difference for most people.

It's a significant step up from the A15 found in the final 2022 refresh of the iPhone SE, enabling a handful of new features like AAA games and Apple Intelligence.

The A18’s inclusion is good for both Apple and the consumer; Apple gets to establish a new, higher baseline of performance when developing new features for current and future handsets, and consumers likely get many more years of software updates than they’d get on the older chip.

The feature enabled by the A18 that Apple would probably most like us to talk about is Apple Intelligence, a suite of features utilizing generative AI to solve some user problems or enable new capabilities across iOS. By enabling these for the cheapest iPhone, Apple is making its messaging around Apple Intelligence a lot easier; it no longer needs to put effort into clarifying that you can use X feature with this new iPhone but not that one.

We've written a lot about Apple Intelligence already, but here's the gist: There are some useful features here in theory, but Apple's models are clearly a bit behind the cutting edge, and results for things like notification summaries or writing tools are pretty mixed. It's fun to generate original emojis, though!

The iPhone 16e can even use Visual Intelligence, which actually is handy sometimes. On my iPhone 16 Pro Max, I can point the rear camera at an object and press the camera button a certain way to get information about it.

I wouldn’t have expected the 16e to support this, but it does, via the Action Button (which was first introduced in the iPhone 15 Pro). This is a reprogrammable button that can perform a variety of functions, albeit just one at a time. Visual Intelligence is one of the options here, which is pretty cool, even though it’s not essential.

The screen is the biggest upgrade over the SE

Also like the 16, the 16e has a 6.1-inch display. The resolution’s a bit different, though; it’s 2,532 by 1,170 pixels instead of 2,556 by 1,179. It also has a notch instead of the Dynamic Island seen in the 16. All this makes the iPhone 16e’s display seem like a very close match to the one seen in 2022’s iPhone 14—in fact, it might literally be the same display.

I really missed the Dynamic Island while using the iPhone 16e—it’s one of my favorite new features added to the iPhone in recent years, as it consolidates what was previously a mess of notification schemes in iOS. Plus, it’s nice to see things like Uber and DoorDash ETAs and sports scores at a glance.

The main problem with losing the Dynamic Island is that we're back to the old minor mess of notification approaches, and I guess Apple has to keep supporting the old ways for a while yet. That genuinely surprises me; I would have thought Apple would want to unify notifications and activities with the Dynamic Island just like the A18 allows the standardization of other features.

This seems to indicate that the Dynamic Island is a fair bit more expensive to include than the good old camera notch that flagship iPhones had been rocking since 2017's iPhone X.

That compromise aside, the display on the iPhone 16e is ridiculously good for a phone at this price point, and it makes the old iPhone SE’s small LCD display look like it’s from another eon entirely by comparison. It gets brighter for both HDR content and sunny-day operation; the blacks are inky and deep, and the contrast and colors are outstanding.

It’s the best thing about the iPhone 16e, even if it isn’t quite as refined as the screens in Apple’s current flagships. Most people would never notice the difference between the screens in the 16e and the iPhone 16 Pro, though.

There is one other screen feature I miss from the higher-end iPhones you can buy in 2025: Those phones can drop the display all the way down to 1 nit, which is awesome for using the phone late at night in bed without disturbing a sleeping partner. Like earlier iPhones, the 16e can only get so dark.

It gets quite bright, though; Apple claims it typically reaches 800 nits of peak brightness but can stretch to 1,200 nits when viewing certain HDR photos and videos. That means it gets about twice as bright as the SE did.

Connectivity is key

The iPhone 16e supports the core suite of connectivity options found in modern phones. There’s Wi-Fi 6, Bluetooth 5.3, and Apple’s usual limited implementation of NFC.

There are three new things of note here, though, and they’re good, neutral, and bad, respectively.

USB-C

Let’s start with the good. We’ve moved from Apple’s proprietary Lightning port found in older iPhones (including the final iPhone SE) toward USB-C, now a near-universal standard on mobile devices. It allows faster charging and more standardized charging cable support.

Sure, it’s a bummer to start over if you’ve spent years buying Lightning accessories, but it’s absolutely worth it in the long run. This change means that the entire iPhone line has now abandoned Lightning, so all iPhones and Android phones will have the same main port for years to come. Finally!

The finality of this shift solves a few problems for Apple: It greatly simplifies the accessory landscape and allows the company to move toward producing a smaller range of cables.

Satellite connectivity

Recent flagship iPhones have gradually added a small suite of features that utilize satellite connectivity to make life a little easier and safer.

Among those are Crash Detection and Roadside Assistance. The former uses the phone’s sensors to detect whether you’ve been in a car crash and contacts help, while the latter lets you text for assistance when you’re outside of cellular reception in the US and UK.

There are also Emergency SOS and Find My via satellite, which let you communicate with emergency responders from remote places and allow you to be found.

Along with a more general feature that allows Messages via satellite, these features can greatly expand your options if you’re somewhere remote, though they’re not as easy to use and responsive as using the regular cellular network.

Where’s MagSafe?

I don’t expect the 16e to have all the same features as the 16, which is $200 more expensive. In fact, it has more modern features than I think most of its target audience needs (more on that later). That said, there’s one notable omission that makes no sense to me at all.

The 16e does not support MagSafe, a standard for connecting accessories to the back of the device magnetically, often while allowing wireless charging via the Qi standard.

Qi wireless charging is still supported, albeit at a slow 7.5 W, but there are no magnets, meaning a lot of existing MagSafe accessories are a lot less useful with this phone, if they’re usable at all. To be fair, the SE didn’t support MagSafe either, but every new iPhone design since the iPhone 12 way back in 2020 has—and not just the premium flagships.

It’s not like the MagSafe accessory ecosystem was some bottomless well of innovation, but that magnetic alignment is handier than you might think, whether we’re talking about making sure the phone locks into place for the fastest wireless charging speeds or hanging the phone on a car dashboard to use GPS on the go.

Folks coming from much older iPhones may not care because they don’t know what they’re missing, but it could be annoying in households with multiple generations of iPhones, and it just doesn’t make any sense.

Most of Apple’s choices in the 16e seem to serve the goal of unifying the whole iPhone lineup to simplify the message for consumers and make things easier for Apple to manage efficiently, but the dropping of MagSafe is bizarre.

It almost makes me wonder whether Apple plans to drop MagSafe from future flagship iPhones in favor of something new, since that’s the only explanation I can think of. That seems unlikely to me right now, but I guess we’ll see.

The first Apple-designed cellular modem

We’ve been seeing rumors that Apple planned to drop third-party modems from companies like Qualcomm for years. As far back as 2018, Apple was poaching Qualcomm employees in an adjacent office in San Diego. In 2020, Apple SVP Johny Srouji announced to employees that work had begun.

It sounds like development has been challenging, but the first Apple-designed modem has arrived here in the 16e of all places. Dubbed the C1, it’s… perfectly adequate. It’s about as fast or maybe just a smidge slower than what you get in the flagship phones, but almost no user would notice any difference at all.

That’s really a win for Apple, which has struggled with a tumultuous relationship with its partners here for years and which has long run into space problems in its phones in part because the third-party modems weren’t compact enough.

This change may not matter much for the consumer beyond freeing up just a tiny bit of space for a slightly larger battery, but it’s another step in Apple’s long journey to ultimately and fully control every component in the iPhone that it possibly can.

Bigger is better for batteries

There is one area where the 16e is actually superior to the 16, much less the SE: battery life. The 16e reportedly has a 3,961 mAh battery, the largest in any of the many iPhones with roughly this size screen. Apple says it offers up to 26 hours of video playback, which is the kind of number you expect to see in a much larger flagship phone.

I charged this phone three times in just under a week of use, though I wasn’t heavily hitting 5G networks, playing many 3D games, or cranking the brightness way up all the time.

That’s a bit of a bump over the 16, but it’s a massive leap over the SE, which promised a measly 15 hours of video playback. Every single phone in Apple’s lineup now has excellent battery life by any standard.

Quality over quantity in the camera system

The 16e’s camera system leaves the SE in the dust, but it’s no match for the robust system found in the iPhone 16. Regardless, it’s way better than you’d typically expect from a phone at this price.

Like the 16, the 16e has a 48 MP “Fusion” wide-angle rear camera. It doesn’t typically take photos at the full 48 MP (though you can, at some cost to color detail); rather, 24 MP is the target. The high-resolution sensor also enables a 2x zoom that is nearly indistinguishable from optical zoom.

Based on both the specs and photo comparisons, the main camera sensor in the 16e appears to be the same one found in the 16. We’re just missing the ultra-wide lens (which allows more zoomed-out photos, ideal for groups of people in small spaces, for example) and several extra features like advanced image stabilization, the newest Photographic Styles, and macro photography.

The iPhone 16e takes excellent photos in bright conditions. Samuel Axon

That’s a lot of missing features, sure, but it’s wild how good this camera is for this price point. Even something like the Pixel 8a can’t touch it (though to be fair, the Pixel 8a is $100 cheaper).

Video capture is a similar situation: The 16e shoots at the same resolutions and framerates as the 16, but it lacks a few specialized features like Cinematic and Action modes. There’s also a front-facing camera with the TrueDepth sensor for Face ID in that notch, and it has comparable specs to the front-facing cameras we’ve seen in a couple of years of iPhones at this point.

If you were buying a phone for the cameras, this wouldn’t be the one for you. It’s absolutely worth paying another $200 for the iPhone 16 (or even just $100 for the iPhone 15 for the ultra-wide lens for 0.5x zoom; the 15 is still available in the Apple Store) if that’s your priority.

The iPhone 16’s macro mode isn’t available here, so ultra-close-ups look fuzzy. Samuel Axon

But for the 16e’s target consumer (mostly folks with the iPhone 11 or older or an iPhone SE, who just want the cheapest functional iPhone they can get) it’s almost overkill. I’m not complaining, though it’s a contributing factor to the phone’s cost compared to entry-level Android phones and Apple’s old iPhone SE.

RIP small phones, once and for all

In one fell swoop, the iPhone 16e’s replacement of the iPhone SE eliminates a whole range of legacy technologies that have held on at the lower end of the iPhone lineup for years. Gone are Touch ID, the home button, LCD displays, and Lightning ports—they’re replaced by Face ID, swipe gestures, OLED, and USB-C.

Newer iPhones have had most of those things for quite some time. The latest feature was USB-C, which came in 2023’s iPhone 15. The removal of the SE from the lineup catches the bottom end of the iPhone up with the top in these respects.

That said, the SE had maintained one positive differentiator, too: It was small enough to be used one-handed by almost anyone. With the end of the SE and the release of the 16e, the one-handed iPhone is well and truly dead. Of course, most people have been clear they want big screens and batteries above almost all else, so the writing had been on the wall for a while for smaller phones.

The death of the iPhone SE ushers in a new era for the iPhone with bigger and better features—but also bigger price tags.

A more expensive cheap phone

Assessing the iPhone 16e is a challenge. It’s objectively a good phone—good enough for the vast majority of people. It has a nearly top-tier screen (though it clocks in at 60Hz, while some Android phones close to this price point manage 120Hz), a camera system that delivers on quality even if it lacks special features seen in flagships, strong connectivity, and performance far above what you’d expect at this price.

If you don’t care about extra camera features or nice-to-haves like MagSafe or the Dynamic Island, it’s easy to recommend saving a couple hundred bucks compared to the iPhone 16.

My chief criticism of the 16e has less to do with the phone itself than with Apple’s overall lineup. The iPhone SE retailed for $430, nearly half the price of the 16. By making the 16e the new bottom of the lineup, Apple has significantly raised the financial barrier to entry for iOS.

Now, it’s worth mentioning that a pretty big swath of the target market for the 16e will buy it subsidized through a carrier, so they might not pay that much up front. I always recommend buying a phone directly if you can, though, as carrier subsidization deals are usually worse for the consumer.

The 16e’s price might push more people to go for the subsidy. Plus, it’s just more phone than some people need. For example, I love a high-quality OLED display for watching movies, but I don’t think the typical iPhone SE customer was ever going to care about that.

That’s why I believe the iPhone 16e solves more problems for Apple than it does for the consumer. In multiple ways, it allows Apple to streamline production, software support, and marketing messaging. It also drives up the average price per unit across the whole iPhone line and will probably encourage some people who would have spent $430 to spend $600 instead, possibly improving revenue. All told, it’s a no-brainer for Apple.

It’s just a mixed bag for the sort of no-frills consumer who wants a minimum viable phone and who for one reason or another didn’t want to go the Android route. The iPhone 16e is definitely a good phone—I just wish there were more options for that consumer.

The good

  • Dramatically improved display compared to the iPhone SE
  • Likely stronger long-term software support than most previous entry-level iPhones
  • Good battery life and incredibly good performance for this price point
  • A high-quality camera, especially for the price

The bad

  • No ultra-wide camera
  • No MagSafe
  • No Dynamic Island

The ugly

  • Significantly raises the entry price point for buying an iPhone


Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.


cmu-research-shows-compression-alone-may-unlock-ai-puzzle-solving-abilities

CMU research shows compression alone may unlock AI puzzle-solving abilities


Tis the season for a squeezin’

New research challenges prevailing idea that AI needs massive datasets to solve problems.

A pair of Carnegie Mellon University researchers recently discovered hints that the process of compressing information can solve complex reasoning tasks without pre-training on a large number of examples. Their system tackles some types of abstract pattern-matching tasks using only the puzzles themselves, challenging conventional wisdom about how machine learning systems acquire problem-solving abilities.

“Can lossless information compression by itself produce intelligent behavior?” ask Isaac Liao, a first-year PhD student, and his advisor Professor Albert Gu from CMU’s Machine Learning Department. Their work suggests the answer might be yes. To demonstrate, they created CompressARC and published the results in a comprehensive post on Liao’s website.

The pair tested their approach on the Abstraction and Reasoning Corpus (ARC-AGI), an unbeaten visual benchmark created in 2019 by machine learning researcher François Chollet to test AI systems’ abstract reasoning skills. ARC presents systems with grid-based image puzzles where each provides several examples demonstrating an underlying rule, and the system must infer that rule to apply it to a new example.

For instance, one ARC-AGI puzzle shows a grid with light blue rows and columns dividing the space into boxes. The task requires figuring out which colors belong in which boxes based on their position: black for corners, magenta for the middle, and directional colors (red for up, blue for down, green for right, and yellow for left) for the remaining boxes. Here are three other example ARC-AGI puzzles, taken from Liao’s website:


Three example ARC-AGI benchmarking puzzles. Credit: Isaac Liao / Albert Gu

The puzzles test capabilities that some experts believe may be fundamental to general human-like reasoning (often called “AGI” for artificial general intelligence). Those properties include understanding object persistence, goal-directed behavior, counting, and basic geometry without requiring specialized knowledge. The average human solves 76.2 percent of the ARC-AGI puzzles, while human experts reach 98.5 percent.

OpenAI made waves in December for the claim that its o3 simulated reasoning model earned a record-breaking score on the ARC-AGI benchmark. In testing with computational limits, o3 scored 75.7 percent on the test, while in high-compute testing (basically unlimited thinking time), it reached 87.5 percent, which OpenAI says is comparable to human performance.

CompressARC achieves 34.75 percent accuracy on the ARC-AGI training set (the collection of puzzles used to develop the system) and 20 percent on the evaluation set (a separate group of unseen puzzles used to test how well the approach generalizes to new problems). Each puzzle takes about 20 minutes to process on a consumer-grade RTX 4070 GPU, compared to top-performing methods that use heavy-duty data center-grade machines and what the researchers describe as “astronomical amounts of compute.”

Not your typical AI approach

CompressARC takes a completely different approach than most current AI systems. Instead of relying on pre-training—the process where machine learning models learn from massive datasets before tackling specific tasks—it works with no external training data whatsoever. The system trains itself in real-time using only the specific puzzle it needs to solve.

“No pretraining; models are randomly initialized and trained during inference time. No dataset; one model trains on just the target ARC-AGI puzzle and outputs one answer,” the researchers write, describing their strict constraints.

The researchers also forgo search, another common technique in AI problem-solving where systems try many different possible solutions and select the best one. Search algorithms work by systematically exploring options—like a chess program evaluating thousands of possible moves—rather than directly learning a solution. CompressARC avoids this trial-and-error approach, relying solely on gradient descent—a mathematical technique that incrementally adjusts the network’s parameters to reduce errors, similar to how you might find the bottom of a valley by always walking downhill.
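The "walking downhill" intuition is easy to make concrete. The following is a generic toy illustration of gradient descent minimizing a one-dimensional loss function, not CompressARC's actual training loop:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move "downhill" along the local slope
    return x

# Toy loss: (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))  # converges toward 3.0, the bottom of the "valley"
```

Each step moves the parameter a little way against the slope, so the error shrinks until the process settles at the minimum.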


A block diagram of the CompressARC architecture, created by the researchers. Credit: Isaac Liao / Albert Gu

The system’s core principle uses compression—finding the most efficient way to represent information by identifying patterns and regularities—as the driving force behind intelligence. CompressARC searches for the shortest possible description of a puzzle that can accurately reproduce the examples and the solution when unpacked.

While CompressARC borrows some structural principles from transformers (like using a residual stream with representations that are operated upon), it’s a custom neural network architecture designed specifically for this compression task. It’s not based on an LLM or standard transformer model.

Unlike typical machine learning methods, CompressARC uses its neural network only as a decoder. During encoding (the process of converting information into a compressed format), the system fine-tunes the network’s internal settings and the data fed into it, gradually making small adjustments to minimize errors. This creates the most compressed representation while correctly reproducing known parts of the puzzle. These optimized parameters then become the compressed representation that stores the puzzle and its solution in an efficient format.
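To illustrate the general idea in miniature (this is a hypothetical toy sketch, not the CompressARC code): one can optimize a small latent vector so that a fixed decoder reproduces only the known entries of a signal, then read the decoder's output at the unknown positions as the "inference."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "puzzle": an 8-value signal where only the first 6 entries are observed.
target = np.arange(1.0, 9.0)
known = np.array([True] * 6 + [False] * 2)

W = rng.normal(size=(8, 3))  # fixed linear "decoder"
z = np.zeros(3)              # compact latent code, tuned at inference time

for _ in range(2000):
    residual = (W @ z - target) * known  # error measured on known entries only
    z -= 0.01 * W.T @ residual           # gradient step on the latent code

reconstruction = W @ z  # the last two entries are the model's "inference"
```

The optimized latent `z` is a compressed stand-in for the signal: because it was forced to explain the known entries compactly, decoding it also yields guesses for the entries that were never observed.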


An animated GIF showing the multi-step process of CompressARC solving an ARC-AGI puzzle. Credit: Isaac Liao

“The key challenge is to obtain this compact representation without needing the answers as inputs,” the researchers explain. The system essentially uses compression as a form of inference.

This approach could prove valuable in domains where large datasets don’t exist or when systems need to learn new tasks with minimal examples. The work suggests that some forms of intelligence might emerge not from memorizing patterns across vast datasets, but from efficiently representing information in compact forms.

The compression-intelligence connection

The potential connection between compression and intelligence may sound strange at first glance, but it has deep theoretical roots in computer science concepts like Kolmogorov complexity (the shortest program that produces a specified output) and Solomonoff induction—a theoretical gold standard for prediction equivalent to an optimal compression algorithm.

To compress information efficiently, a system must recognize patterns, find regularities, and “understand” the underlying structure of the data—abilities that mirror what many consider intelligent behavior. A system that can predict what comes next in a sequence can compress that sequence efficiently. As a result, some computer scientists over the decades have suggested that compression may be equivalent to general intelligence. Based on these principles, the Hutter Prize has long offered awards to researchers who can losslessly compress a 1GB snapshot of Wikipedia text to the smallest possible size.
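That link between regularity and compressibility is easy to demonstrate with any off-the-shelf compressor. Here, for example, zlib shrinks a highly patterned byte string to a tiny fraction of its size while barely touching incompressible random bytes:

```python
import os
import zlib

patterned = b"abcabcabc" * 1000  # 9,000 bytes of pure repetition
random_bytes = os.urandom(9000)  # 9,000 bytes with no exploitable structure

print(len(zlib.compress(patterned)))     # a few dozen bytes
print(len(zlib.compress(random_bytes)))  # roughly 9,000 bytes or slightly more
```

The compressor succeeds on the first string only because it detects the repeating pattern; random data offers no regularity to exploit, so it cannot be compressed.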

We previously wrote about intelligence and compression in September 2023, when a DeepMind paper discovered that large language models can sometimes outperform specialized compression algorithms. In that study, researchers found that DeepMind’s Chinchilla 70B model could compress image patches to 43.4 percent of their original size (beating PNG’s 58.5 percent) and audio samples to just 16.4 percent (outperforming FLAC’s 30.3 percent).


That 2023 research suggested a deep connection between compression and intelligence—the idea that truly understanding patterns in data enables more efficient compression, which aligns with this new CMU research. While DeepMind demonstrated compression capabilities in an already-trained model, Liao and Gu’s work takes a different approach by showing that the compression process can generate intelligent behavior from scratch.

This new research matters because it challenges the prevailing wisdom in AI development, which typically relies on massive pre-training datasets and computationally expensive models. While leading AI companies push toward ever-larger models trained on more extensive datasets, CompressARC suggests that intelligence can emerge from a fundamentally different principle.

“CompressARC’s intelligence emerges not from pretraining, vast datasets, exhaustive search, or massive compute—but from compression,” the researchers conclude. “We challenge the conventional reliance on extensive pretraining and data, and propose a future where tailored compressive objectives and efficient inference-time computation work together to extract deep intelligence from minimal input.”

Limitations and looking ahead

Even with its successes, Liao and Gu’s system comes with clear limitations that may prompt skepticism. While it successfully solves puzzles involving color assignments, infilling, cropping, and identifying adjacent pixels, it struggles with tasks requiring counting, long-range pattern recognition, rotations, reflections, or simulating agent behavior. These limitations highlight areas where simple compression principles may not be sufficient.

The research has not been peer-reviewed, and the 20 percent accuracy on unseen puzzles, though notable without pre-training, falls significantly below both human performance and top AI systems. Critics might also argue that CompressARC is exploiting structural patterns specific to the ARC puzzles that won’t generalize to other domains. If so, compression would be just one component among many required for robust reasoning rather than a foundation for broader intelligence on its own.

And yet as AI development continues its rapid advance, if CompressARC holds up to further scrutiny, it offers a glimpse of a possible alternative path that might lead to useful intelligent behavior without the resource demands of today’s dominant approaches. Or at the very least, it might unlock an important component of general intelligence in machines, which is still poorly understood.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


no-one-asked-for-this:-google-is-testing-round-keys-in-gboard

No one asked for this: Google is testing round keys in Gboard

Most Android phones ship with Google’s Gboard as the default input option. It’s a reliable, feature-rich on-screen keyboard, so most folks just keep using it instead of installing a third-party option. Depending on how you feel about circles, it might be time to check out some of those alternatives. Google has quietly released an update that changes the shape and position of the keys, and users are not pleased.

In the latest build of Gboard (v15.1.05.726012951-beta-arm64-v8a), Google has changed the key shape from the long-running squares to rounded ones. If you’re using the four-row layout, the keys look like little pills. In five-row mode with the exposed number row, the keys collapse further into circles. The reactions seem split between those annoyed by this change and those annoyed that everyone else is so annoyed.

Change can be hard sometimes, so certainly some of the discontent is just a function of having the phone interface changed without warning. If you find it particularly distasteful, you can head into the Gboard settings and open the Themes menu. From there, you can tap on a theme and then turn off the key borders. Thus, you won’t be distracted by the horror of rounded edges. That’s not the only problem with the silent update, though.

The wave of objections isn’t just about aesthetics—this update also moves the keys around a bit. After years of tapping away on keys with a particular layout, people develop muscle memory. Big texters can sometimes type messages on their phone without even looking at it, but moving the keys around even slightly, as Google has done here, can cause you to miss more keys than you did before the update.


will-the-future-of-software-development-run-on-vibes?

Will the future of software development run on vibes?


Accepting AI-written code without understanding how it works is growing in popularity.

For many people, coding is about telling a computer what to do and having the computer perform those precise actions repeatedly. With the rise of AI tools like ChatGPT, it’s now possible for someone to describe a program in English and have the AI model translate it into working code without ever understanding how the code works. Former OpenAI researcher Andrej Karpathy recently gave this practice a name—”vibe coding”—and it’s gaining traction in tech circles.

The technique, enabled by large language models (LLMs) from companies like OpenAI and Anthropic, has attracted attention for potentially lowering the barrier to entry for software creation. But questions remain about whether the approach can reliably produce code suitable for real-world applications, even as tools like Cursor Composer, GitHub Copilot, and Replit Agent make the process increasingly accessible to non-programmers.

Instead of being about control and precision, vibe coding is all about surrendering to the flow. On February 2, Karpathy introduced the term in a post on X, writing, “There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” He described the process in deliberately casual terms: “I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”


A screenshot of Karpathy’s original X post about vibe coding from February 2, 2025. Credit: Andrej Karpathy / X

When an error occurs while vibe coding, you feed it back into the AI model, accept the changes, hope it works, and repeat the process. Karpathy’s technique stands in stark contrast to traditional software development best practices, which typically emphasize careful planning, testing, and understanding of implementation details.

As Karpathy humorously acknowledged in his original post, the approach is for the ultimate lazy programmer experience: “I ask for the dumbest things, like ‘decrease the padding on the sidebar by half,’ because I’m too lazy to find it myself. I ‘Accept All’ always; I don’t read the diffs anymore.”

At its core, the technique transforms anyone with basic communication skills into a new type of natural language programmer—at least for simple projects. With AI models currently limited by how much code they can digest at once (their context size), there tends to be an upper limit to how complex a vibe-coded software project can get before the human at the wheel becomes a high-level project manager, manually assembling slices of AI-generated code into a larger architecture. But as technical limits expand with each generation of AI models, those limits may one day disappear.

Who are the vibe coders?

There’s no way to know exactly how many people are currently vibe coding their way through either hobby projects or development jobs, but Cursor reported 40,000 paying users in August 2024, and GitHub reported 1.3 million Copilot users just over a year ago (February 2024). While we can’t find user numbers for Replit Agent, the site claims 30 million users, with an unknown percentage using the site’s AI-powered coding agent.

One thing we do know: the approach has particularly gained traction online as a fun way of rapidly prototyping games. Microsoft’s Peter Yang recently demonstrated vibe coding in an X thread by building a simple 3D first-person shooter zombie game through conversational prompts fed into Cursor and Claude 3.7 Sonnet. Yang even used a speech-to-text app so he could verbally describe what he wanted to see and refine the prototype over time.


In August 2024, the author vibe coded his way into a working Q-BASIC utility script for MS-DOS, thanks to Claude Sonnet. Credit: Benj Edwards

We’ve been doing some vibe coding ourselves. Multiple Ars staffers have used AI assistants and coding tools for extracurricular hobby projects such as creating small games, crafting bespoke utilities, writing processing scripts, and more. Having a vibe-based code genie can come in handy in unexpected places: Last year, I asked Anthropic’s Claude to write a Microsoft Q-BASIC program for MS-DOS that decompressed 200 ZIP files into custom directories, saving me many hours of manual typing work.

Debugging the vibes

With all this vibe coding going on, we had to turn to an expert for some input. Simon Willison, an independent software developer and AI researcher, offered a nuanced perspective on AI-assisted programming in an interview with Ars Technica. “I really enjoy vibe coding,” he said. “It’s a fun way to try out an idea and prove if it can work.”

But there are limits to how far Willison will go. “Vibe coding your way to a production codebase is clearly risky. Most of the work we do as software engineers involves evolving existing systems, where the quality and understandability of the underlying code is crucial.”

At some point, understanding at least some of the code is important because AI-generated code may include bugs, misunderstandings, and confabulations—for example, instances where the AI model generates references to nonexistent functions or libraries.

“Vibe coding is all fun and games until you have to vibe debug,” developer Ben South noted wryly on X, highlighting this fundamental issue.

Willison recently argued on his blog that encountering hallucinations with AI coding tools isn’t as detrimental as embedding false AI-generated information into a written report, because coding tools have built-in fact-checking: If there’s a confabulation, the code won’t work. This provides a natural boundary for vibe coding’s reliability—the code runs or it doesn’t.
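Willison's point is straightforward to demonstrate: a call to a function an AI model invented fails loudly the moment you run it. (The nonexistent `json.load_from_text` below is deliberately made up for illustration.)

```python
import json

# A plausible-sounding but invented function name: confabulated code like
# this cannot fail silently, because the call simply does not exist.
try:
    json.load_from_text('{"answer": 42}')
except AttributeError as err:
    print("Confabulation caught immediately:", err)

# The real API, by contrast, just works.
data = json.loads('{"answer": 42}')
print(data["answer"])  # prints 42
```

This is the "built-in fact-checking" at work: a hallucinated API raises an error on the first run, whereas a hallucinated citation in prose can sit undetected indefinitely.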

Even so, the risk-reward calculation for vibe coding becomes far more complex in professional settings. While a solo developer might accept the trade-offs of vibe coding for personal projects, enterprise environments typically require code maintainability and reliability standards that vibe-coded solutions may struggle to meet. When code doesn’t work as expected, debugging requires understanding what the code is actually doing—precisely the knowledge that vibe coding tends to sidestep.

Programming without understanding

When it comes to defining what exactly constitutes vibe coding, Willison makes an important distinction: “If an LLM wrote every line of your code, but you’ve reviewed, tested, and understood it all, that’s not vibe coding in my book—that’s using an LLM as a typing assistant.” Vibe coding, in contrast, involves accepting code without fully understanding how it works.

While vibe coding originated with Karpathy as a playful term, it may encapsulate a real shift in how some developers approach programming tasks—prioritizing speed and experimentation over deep technical understanding. And to some people, that may be terrifying.

Willison emphasizes that developers need to take accountability for their code: “I firmly believe that as a developer you have to take accountability for the code you produce—if you’re going to put your name to it you need to be confident that you understand how and why it works—ideally to the point that you can explain it to somebody else.”

He also warns about a common path to technical debt: “For experiments and low-stake projects where you want to explore what’s possible and build fun prototypes? Go wild! But stay aware of the very real risk that a good enough prototype often faces pressure to get pushed to production.”

The future of programming jobs

So, is all this vibe coding going to cost human programmers their jobs? At its heart, programming has always been about telling a computer how to operate. How we do that has changed over time, but there may always be people who are better at telling a computer precisely what to do than others—even in natural language. In some ways, those people may become the new “programmers.”

There was a point in the late 1970s and early ’80s when many people thought you needed programming skills to use a computer effectively because there were so few pre-built applications for the various computer platforms of the day. School systems worldwide launched computer literacy efforts to teach people to code.

A brochure for the GE 210 computer from 1964. BASIC’s creators used a similar computer four years later to develop the programming language that many children were taught at home and school. Credit: GE / Wikipedia

Before too long, people made useful software applications that let non-coders utilize computers easily—no programming required. Even so, programmers didn’t disappear—instead, they used applications to create better and more complex programs. Perhaps that will also happen with AI coding tools.

To use an analogy, computer-controlled technologies like autopilot made reliable supersonic flight possible because they could handle aspects of flight that were too taxing for all but the most highly trained humans to control safely. AI may do the same for programming by abstracting away complexities that would otherwise take too much time to code manually, and that may allow for the creation of more complex and useful software in the future.

But at that point, will humans still be able to understand or debug them? Maybe not. We may be completely dependent on AI tools, and some people no doubt find that a little scary or unwise.

Whether vibe coding lasts in the programming landscape or remains a prototyping technique will likely depend less on the capabilities of AI models and more on the willingness of organizations to accept risky trade-offs in code quality, maintainability, and technical debt. For now, vibe coding remains an apt descriptor of the messy, experimental relationship between AI and human developers—more collaborative than autonomous, but increasingly blurring the lines of who (or what) is really doing the programming.

Benj Edwards is Ars Technica’s Senior AI Reporter and founded the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
