Author name: Kelly Newman


Texas measles outbreak spills into third state as cases reach 258

Texas and New Mexico

Meanwhile, the Texas health department on Tuesday provided an outbreak update, raising the case count to 223, up 25 from the 198 Texas cases reported Friday. Of the Texas cases, 29 have been hospitalized and one has died—a 6-year-old girl from Gaines County, the outbreak’s epicenter. The girl was unvaccinated and had no known underlying health conditions.

The outbreak continues to be primarily in unvaccinated children. Of the 223 cases, 76 are in children ages 0 to 4, and 98 are in people ages 5 to 17. Eighty of the cases are unvaccinated, 138 have an unknown vaccination status, and five are known to have received at least one dose of the measles, mumps, and rubella (MMR) vaccine.

One dose of MMR is estimated to be 93 percent effective against measles, and two doses offer 97 percent protection. It’s not unexpected to see a small number of breakthrough cases in large, localized outbreaks.
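
To see why a few breakthroughs are expected, here is a back-of-the-envelope sketch (the exposure counts are invented for illustration; this is simple arithmetic, not an epidemiological model):

```python
# Toy estimate of breakthrough cases among vaccinated people exposed in an
# outbreak. Exposure counts below are invented, not taken from Texas data.

def expected_breakthroughs(exposed_vaccinated: int,
                           effectiveness: float,
                           attack_rate_unvaccinated: float = 0.9) -> float:
    """Measles infects roughly 90 percent of susceptible, unvaccinated
    close contacts; vaccine effectiveness scales that risk down."""
    return exposed_vaccinated * attack_rate_unvaccinated * (1 - effectiveness)

# Hypothetical: 200 two-dose vaccinees exposed at 97 percent effectiveness
print(expected_breakthroughs(200, 0.97))  # ~5.4 expected breakthrough cases
```

Even near-perfect protection, multiplied across hundreds of exposures, predicts a handful of vaccinated cases.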

Across the border from Gaines County in Texas sits Lea County, where New Mexico officials have now documented 32 cases; an additional case in neighboring Eddy County brings the state’s current total to 33. Of those cases, one person has been hospitalized, and one person who was not hospitalized has died. The person who died was an adult who did not seek medical care and tested positive for measles only after death. The cause of death is under investigation.

Of New Mexico’s 33 cases, 27 were unvaccinated, five had an unknown vaccination status, and one had received at least one MMR dose. Eighteen of the 33 cases are in adults, 13 are in people ages 0 to 17, and two have no confirmed age.

On Friday, the Centers for Disease Control and Prevention released a travel alert over the measles outbreak. “With spring and summer travel season approaching in the United States, CDC emphasizes the important role that clinicians and public health officials play in preventing the spread of measles,” the agency said in the alert. It advised clinicians to be vigilant in identifying potential measles cases.

The agency stressed the importance of vaccination, putting in bold: “Measles-mumps-rubella (MMR) vaccination remains the most important tool for preventing measles,” while saying that “all US residents should be up to date on their MMR vaccinations.”

US health secretary and long-time anti-vaccine advocate Robert F. Kennedy Jr., meanwhile, has been emphasizing cod liver oil, which does not prevent measles, and falsely blaming the outbreak on poor nutrition.


M4 Max and M3 Ultra Mac Studio Review: A weird update, but it mostly works

Comparing the M4 Max and M3 Ultra to high-end PC desktop processors.

As for the Intel and AMD comparisons, both companies’ best high-end desktop CPUs, like the Ryzen 9 9950X and Core Ultra 9 285K, are often competitive with the M4 Max’s multi-core performance but are dramatically less power-efficient at their default settings.

Mac Studio or M4 Pro Mac mini?

The Mac Studio (bottom) and redesigned M4 Mac mini. Credit: Andrew Cunningham

Ever since Apple beefed up the Mac mini with Pro-tier chips, there’s been a pricing overlap around and just over $2,000 where the mini and the Studio are both compelling.

A $2,000 Mac mini comes with a fully enabled M4 Pro processor (14 CPU cores, 20 GPU cores), 512GB of storage, and 48GB of RAM, with 64GB of RAM available for another $200 and 10 gigabit Ethernet available for another $100. RAM is the high-end Mac mini’s main advantage over the Studio—the $1,999 Studio comes with a slightly cut-down M4 Max (also 14 CPU cores, but 32 GPU cores), 512GB of storage, and just 36GB of RAM.

In general, if you’re spending $2,000 on a Mac desktop, I would lean toward the Studio rather than the mini. You’re getting roughly the same CPU but a much faster GPU and more ports. You get less RAM, but depending on what you’re doing, there’s a good chance that 36GB is more than enough.

The only place where the mini is clearly better than the Studio once you’re above $2,000 is memory. If you want 64GB of RAM in your Mac, you can get it in the Mac mini for $2,200. The cheapest Mac Studio with 64GB of RAM also requires a processor upgrade, bringing the total cost to $2,700. If you need memory more than you need raw performance, or if you just need something that’s as small as it can possibly be, that’s when the high-end mini can still make sense.

A lot of power—if you need it

Apple’s M4 Max Mac Studio. Credit: Andrew Cunningham

Obviously, Apple’s hermetically sealed desktop computers have some downsides compared to a gaming or workstation PC, most notably that you need to throw out and replace the whole thing any time you want to upgrade literally any component.


Why extracting data from PDFs is still a nightmare for data experts


Optical Character Recognition

Countless digital documents hold valuable info, and the AI industry is attempting to set it free.

For years, businesses, governments, and researchers have struggled with a persistent problem: How to extract usable data from Portable Document Format (PDF) files. These digital documents serve as containers for everything from scientific research to government records, but their rigid formats often trap the data inside, making it difficult for machines to read and analyze.

“Part of the problem is that PDFs are a creature of a time when print layout was a big influence on publishing software, and PDFs are more of a ‘print’ product than a digital one,” Derek Willis, a lecturer in Data and Computational Journalism at the University of Maryland, wrote in an email to Ars Technica. “The main issue is that many PDFs are simply pictures of information, which means you need Optical Character Recognition software to turn those pictures into data, especially when the original is old or includes handwriting.”

Computational journalism is a field where traditional reporting techniques merge with data analysis, coding, and algorithmic thinking to uncover stories that might otherwise remain hidden in large datasets, which makes unlocking that data a particular interest for Willis.

The PDF challenge also represents a significant bottleneck in the world of data analysis and machine learning at large. According to several studies, approximately 80–90 percent of the world’s organizational data is stored as unstructured data in documents, much of it locked away in formats that resist easy extraction. The problem worsens with two-column layouts, tables, charts, and scanned documents with poor image quality.

The inability to reliably extract data from PDFs affects numerous sectors but hits hardest in areas that rely heavily on documentation and legacy records, including digitizing scientific research, preserving historical documents, streamlining customer service, and making technical literature more accessible to AI systems.

“It is a very real problem for almost anything published more than 20 years ago and in particular for government records,” Willis says. “That impacts not just the operation of public agencies like the courts, police, and social services but also journalists, who rely on those records for stories. It also forces some industries that depend on information, like insurance and banking, to invest time and resources in converting PDFs into data.”

A very brief history of OCR

Traditional optical character recognition (OCR) technology, which converts images of text into machine-readable text, has been around since the 1970s. Inventor Ray Kurzweil pioneered the commercial development of OCR systems, including the Kurzweil Reading Machine for the blind in 1976, which relied on pattern-matching algorithms to identify characters from pixel arrangements.

These traditional OCR systems typically work by identifying patterns of light and dark pixels in images, matching them to known character shapes, and outputting the recognized text. While effective for clear, straightforward documents, these pattern-matching systems, a form of AI themselves, often falter when faced with unusual fonts, multiple columns, tables, or poor-quality scans.
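
To make the pixel-matching idea concrete, here is a deliberately naive sketch of template matching, the core move in classic OCR (a toy illustration, not any production engine):

```python
import numpy as np

# Minimal template-matching OCR: compare a glyph's pixel grid against
# stored character templates and pick the closest match. Real systems
# layer segmentation, deskewing, and font handling on top of this.

TEMPLATES = {
    "I": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]]),
}

def recognize(glyph: np.ndarray) -> str:
    # Score each template by how many pixels agree; highest score wins.
    scores = {ch: int((glyph == t).sum()) for ch, t in TEMPLATES.items()}
    return max(scores, key=scores.get)

noisy_L = np.array([[1, 0, 0],
                    [1, 0, 0],
                    [1, 1, 0]])   # one corrupted pixel
print(recognize(noisy_L))  # -> "L": best pixel agreement despite the noise
```

Everything that makes real-world OCR hard (unusual fonts, columns, tables, poor scans) is a failure of exactly this kind of rigid comparison.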

Traditional OCR persists in many workflows precisely because its limitations are well-understood—it makes predictable errors that can be identified and corrected, offering a reliability that sometimes outweighs the theoretical advantages of newer AI-based solutions. But now that transformer-based large language models (LLMs) are getting the lion’s share of funding dollars, companies are increasingly turning to them for a new approach to reading documents.

The rise of AI language models in OCR

Unlike traditional OCR methods that follow a rigid sequence of identifying characters based on pixel patterns, multimodal LLMs that can read documents are trained on text and images that have been translated into chunks of data called tokens and fed into large neural networks. Vision-capable LLMs from companies like OpenAI, Google, and Meta analyze documents by recognizing relationships between visual elements and understanding contextual cues.

The “visual” image-based method is how ChatGPT reads a PDF file, for example, if you upload it through the AI assistant interface. It’s a fundamentally different approach from standard OCR, one that allows these models to potentially process documents more holistically, considering both visual layout and text content simultaneously.
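
As a concrete illustration of that image-based path, here is a minimal sketch using OpenAI’s Python SDK (the model name, prompt, and filename are placeholders; PDF-to-image rendering and error handling are omitted, and other vendors’ vision APIs follow the same general shape):

```python
import base64
from openai import OpenAI  # pip install openai

# Sketch of the "visual" approach: render a PDF page to an image, then ask
# a vision-capable LLM to transcribe it, layout and all.

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("page_1.png", "rb") as f:  # placeholder: one pre-rendered page
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Transcribe this scanned page as plain text. "
                     "Preserve table rows as pipe-separated lines."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```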

And as it turns out, some LLMs from certain vendors are better at this task than others.

“The LLMs that do well on these tasks tend to behave in ways that are more consistent with how I would do it manually,” Willis said. He noted that some traditional OCR methods are quite good, particularly Amazon’s Textract, but that “they also are bound by the rules of their software and limitations on how much text they can refer to when attempting to recognize an unusual pattern.” Willis added, “With LLMs, I think you trade that for an expanded context that seems to help them make better predictions about whether a digit is a three or an eight, for example.”

This context-based approach enables these models to better handle complex layouts, interpret tables, and distinguish between document elements like headers, captions, and body text—all tasks that traditional OCR solutions struggle with.

“[LLMs] aren’t perfect and sometimes require significant intervention to do the job well, but the fact that you can adjust them at all [with custom prompts] is a big advantage,” Willis said.

New attempts at LLM-based OCR

As the demand for better document-processing solutions grows, new AI players are entering the market with specialized offerings. One such recent entrant has caught the attention of document-processing specialists in particular.

Mistral, a French AI company known for its smaller LLMs, recently entered the LLM-powered optical reader space with Mistral OCR, a specialized API designed for document processing. According to Mistral’s materials, the system aims to extract text and images from documents with complex layouts by using its language model capabilities to process document elements.


However, these promotional claims don’t always match real-world performance, according to recent tests. “I’m typically a pretty big fan of the Mistral models, but the new OCR-specific one they released last week really performed poorly,” Willis noted.

“A colleague sent this PDF and asked if I could help him parse the table it contained,” says Willis. “It’s an old document with a table that has some complex layout elements. The new [Mistral] OCR-specific model really performed poorly, repeating the names of cities and botching a lot of the numbers.”

AI app developer Alexander Doria also recently pointed out on X a flaw with Mistral OCR’s ability to understand handwriting, writing, “Unfortunately Mistral-OCR has still the usual VLM curse: with challenging manuscripts, it hallucinates completely.”

According to Willis, Google currently leads the field in AI models that can read documents: “Right now, for me the clear leader is Google’s Gemini 2.0 Flash Pro Experimental. It handled the PDF that Mistral did not with a tiny number of mistakes, and I’ve run multiple messy PDFs through it with success, including those with handwritten content.”

Gemini’s performance stems largely from its ability to process expansive documents (in a type of short-term memory called a “context window”), which Willis specifically notes as a key advantage: “The size of its context window also helps, since I can upload large documents and work through them in parts.” This capability, combined with more robust handling of handwritten content, apparently gives Google’s model a practical edge over competitors in real-world document-processing tasks for now.

The drawbacks of LLM-based OCR

Despite their promise, LLMs introduce several new problems to document processing. They can introduce confabulations or hallucinations (plausible-sounding but incorrect information), accidentally follow instructions in the text (mistaking them for part of a user prompt), or just generally misinterpret the data.

“The biggest [drawback] is that they are probabilistic prediction machines and will get it wrong in ways that aren’t just ‘that’s the wrong word’,” Willis explains. “LLMs will sometimes skip a line in larger documents where the layout repeats itself, I’ve found, where OCR isn’t likely to do that.”

AI researcher and data journalist Simon Willison identified several critical concerns of using LLMs for OCR in a conversation with Ars Technica. “I still think the biggest challenge is the risk of accidental instruction following,” he says, always wary of prompt injections (in this case accidental) that might feed nefarious or contradictory instructions to an LLM.

“That and the fact that table interpretation mistakes can be catastrophic,” Willison adds. “In the past I’ve had lots of cases where a vision LLM has matched up the wrong line of data with the wrong heading, which results in absolute junk that looks correct. Also that thing where sometimes if text is illegible a model might just invent the text.”

These issues become particularly troublesome when processing financial statements, legal documents, or medical records, where a mistake might put someone’s life in danger. The reliability problems mean these tools often require careful human oversight, limiting their value for fully automated data extraction.
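
One cheap form of that oversight is to verify an extracted table’s own arithmetic before trusting it. Here is a hedged sketch (the column and field names are invented for the example, not from any tool mentioned above):

```python
# Sketch of one oversight tactic for LLM-extracted tables: check internal
# arithmetic (detail rows against a stated total) before trusting output.
# Column and total names are invented for the example.

def check_row_sums(rows: list[dict], total_row: dict, column: str,
                   tolerance: float = 0.01) -> bool:
    """Flag extractions whose detail rows don't add up to the total,
    a cheap tripwire for the misaligned-row failures described above."""
    detail_sum = sum(r[column] for r in rows)
    return abs(detail_sum - total_row[column]) <= tolerance

rows = [{"county": "A", "cases": 120}, {"county": "B", "cases": 85}]
total = {"county": "Total", "cases": 210}  # extraction lost 5 somewhere
print(check_row_sums(rows, total, "cases"))  # False -> route to a human
```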

The path forward

Even in our seemingly advanced age of AI, there is still no perfect OCR solution. The race to unlock data from PDFs continues, with companies like Google now offering context-aware generative AI products. Some of the motivation for unlocking PDFs among AI companies, as Willis observes, doubtless involves potential training data acquisition: “I think Mistral’s announcement is pretty clear evidence that documents—not just PDFs—are a big part of their strategy, exactly because it will likely provide additional training data.”

Whether it benefits AI companies with training data or historians analyzing a historical census, as these technologies improve, they may unlock repositories of knowledge currently trapped in digital formats designed primarily for human consumption. That could lead to a new golden age of data analysis—or a field day for hard-to-spot mistakes, depending on the technology used and how blindly we trust it.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


Gmail gains Gemini-powered “Add to calendar” button

Google has a new mission in the AI era: to add Gemini to as many of the company’s products as possible. We’ve already seen Gemini appear in search results, text messages, and more. In Google’s latest update to Workspace, Gemini will be able to add calendar appointments from Gmail with a single click. Well, assuming Gemini gets it right the first time, which is far from certain.

The new calendar button will appear at the top of emails, right next to the summarize button that arrived last year. The calendar option will show up in Gmail threads with actionable meeting chit-chat, allowing you to mash that button to create an appointment in one step. The Gemini sidebar will open to confirm the appointment was made, which is a good opportunity to double-check the robot. There will be a handy edit button in the Gemini window in the event it makes a mistake. However, the robot can’t invite people to these events yet.

The effect of using the button is the same as opening the Gemini panel and asking it to create an appointment. The new functionality is simply detecting events and offering the button as a shortcut of sorts. You should not expect to see this button appear on messages that already have calendar integration, like dining reservations and flights. Those already pop up in Google Calendar without AI.


After less than a day, the Athena lander is dead on the Moon

NASA expected Athena to have a reasonable chance of success. Although it landed on its side, Odysseus was generally counted as a win because it accomplished most of its tasks. Accordingly, NASA loaded a number of instruments onto the lander. Most notable among these was the PRIME-1 experiment, an ice drill to sample and analyze any ice that lies below the surface.

A dark day, but not the end

“After landing, mission controllers were able to accelerate several program and payload milestones, including NASA’s PRIME-1 suite, before the lander’s batteries depleted,” the company’s statement said. However, this likely means that the company was able to contact the instrument but not perform any meaningful scientific activities.

NASA has accepted that these commercial lunar missions are high-risk, high-reward. (Firefly’s successful landing last weekend offers an example of high rewards). It is paying the companies, on average, $100 million or less per flight. This is a fraction of what NASA would pay through a traditional procurement program. The hope is that, after surviving initial failures, companies like Intuitive Machines will learn from their mistakes and open a low-cost, reliable pathway to the lunar surface.

Even so, this failure has to be painful for NASA and Intuitive Machines. The space agency lost out on some valuable science, and Intuitive Machines has taken a step backward with this mission rather than moving forward as it had hoped to do.

Fortunately, this is unlikely to be the end for the company. NASA has committed to a third and fourth mission on Intuitive Machines’ lander, the next of which could come during the first quarter of 2026. NASA has also contracted with the company to build a small network of satellites around the Moon for communications and positioning services. So although the company’s fortunes look dark today, they are not permanently shadowed like the craters on the Moon that NASA hopes to soon explore.


Blood Typers is a terrifically tense, terror-filled typing tutor

When you think about it, the keyboard is the most complex video game controller in common use today, with over 100 distinct inputs arranged in a vast grid. Yet even the most complex keyboard-controlled games today tend to only use a relative handful of all those available keys for actual gameplay purposes.

The biggest exception to this rule is a typing game, which by definition asks players to send their fingers flying across every single letter on the keyboard (and then some) in quick succession. By default, though, typing games tend to take the form of extremely basic typing tutorials, where the gameplay amounts to little more than typing out words and sentences by rote as they appear on screen, maybe with a few cute accompanying animations.

Typing “gibbon” quickly has rarely felt this tense or important. Credit: Outer Brain Studios

Blood Typers adds some much-needed complexity to that basic type-the-word-you-see concept, layering its typing tests on top of a full-fledged survival horror game reminiscent of the original PlayStation era. The result is an amazingly tense and compelling action adventure that also serves as a great way to hone your touch-typing skills.

See it, type it, do it

For some, Blood Typers may bring up first-glance memories of Typing of the Dead, Sega’s campy, typing-controlled take on the House of the Dead light gun game series. But Blood Typers goes well beyond Typing of the Dead’s on-rails shooting, offering an experience that’s more like a typing-controlled version of Resident Evil.

Practically every action in Blood Typers requires typing a word that you see on-screen. That includes basic locomotion, which is accomplished by typing any of a number of short words scattered at key points in your surroundings in order to automatically walk to that point. It’s a bit awkward at first but quickly becomes second nature as you memorize the names of various checkpoints and adjust to using the shift keys to turn the camera as you move.

Each of those words on the ground is a waypoint that you can type to move toward. Credit: Outer Brain Studios

When any number of undead enemies appear, a quick tap of the tab key switches you to combat mode, which asks you to type longer words that appear above those enemies to use your weapons. More difficult enemies require multiple words to take down, including some with armor that requires typing a single word repeatedly before you can move on.

While you start each scenario in Blood Typers with a handy melee weapon, you’ll end up juggling a wide variety of projectile firearms that feel uniquely tuned to the typing gameplay. The powerful shotgun, for instance, can take out larger enemies with just a single word, while the SMG lets you type only the first few letters of each word, allowing for a sort of rapid-fire feel. The flamethrower, on the other hand, can set whole groups of nearby enemies aflame, which makes each subsequent attack word that much shorter and faster.
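
As a toy illustration of that SMG mechanic (purely speculative pseudologic, not Outer Brain Studios’ actual implementation), the “first few letters” behavior amounts to accepting the shortest prefix that is unambiguous among the words currently on screen:

```python
# Toy reimplementation of the review's prefix mechanic: the SMG accepts a
# unique prefix of the target word, while other weapons need the full word.
# Illustration only, not the game's actual code.

def chars_needed(word: str, weapon: str, on_screen_words: list[str]) -> int:
    """Return how many characters you must type to fire at `word`."""
    if weapon != "smg":
        return len(word)  # shotgun, melee, etc.: type the whole word
    # SMG: stop as soon as the prefix is unambiguous among visible words
    for i in range(1, len(word) + 1):
        matches = [w for w in on_screen_words if w.startswith(word[:i])]
        if matches == [word]:
            return i
    return len(word)

words = ["gibbon", "gizzard", "lantern"]
print(chars_needed("gibbon", "smg", words))      # 3 ("gib" is unique)
print(chars_needed("gibbon", "shotgun", words))  # 6
```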


The X-37B spaceplane lands after helping pave the way for “maneuver warfare”

On this mission, military officials said the X-37B tested “space domain awareness technology experiments” that aim to improve the Space Force’s knowledge of the space environment. Defense officials consider the space domain—like land, sea, and air—a contested environment that could become a battlefield in future conflicts.

Last month, the Space Force released the first image of Earth from an X-37B in space. This image was captured in 2024 as the spacecraft flew in its high-altitude orbit, and shows a portion of the X-37B’s power-generating solar array. Credit: US Space Force

The Space Force hasn’t announced plans for the next X-37B mission. Typically, the next X-37B flight has launched within a year of the prior mission’s landing. So far, all of the X-37B flights have launched from Florida, with landings at Vandenberg and at NASA’s Kennedy Space Center, where Boeing and the Space Force refurbish the spaceplanes between missions.

The aerobraking maneuvers demonstrated by the X-37B could find applications on future operational military satellites, according to Gen. Stephen Whiting, head of US Space Command.

“The X-37 is a test and experimentation platform, but that aerobraking maneuver allowed it to bridge multiple orbital regimes, and we think this is exactly the kind of maneuverability we’d like to see in future systems, which will unlock a whole new series of operational concepts,” Whiting said in December at the Space Force Association’s Spacepower Conference.

Space Command’s “astrographic” area of responsibility (AOR) starts at the top of Earth’s atmosphere and extends to the Moon and beyond.

“An irony of the space domain is that everything in our AOR is in motion, but rarely do we use maneuver as a way to gain positional advantage,” Whiting said. “We believe at US Space Command it is vital, given the threats we now see in novel orbits that are hard for us to get to, as well as the fact that the Chinese have been testing on-orbit refueling capability, that we need some kind of sustained space maneuver.”

Improvements in maneuverability would have benefits in surveilling an adversary’s satellites, as well as in defensive and offensive combat operations in orbit.

The Space Force could attain the capability for sustained maneuvers—known in some quarters as dynamic space operations—in several ways. One is to utilize in-orbit refueling that allows satellites to “maneuver without regret,” and another is to pursue more fuel-efficient means of changing orbits, such as aerobraking or solar-electric propulsion.
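
A rough vis-viva estimate shows the scale of propellant that aerobraking can replace (the orbit altitudes below are invented for illustration, not actual X-37B parameters):

```python
import math

# Rough vis-viva illustration of what aerobraking buys: lowering apogee
# from a high orbit to LEO with drag skips a large circularization burn.
# Altitudes are invented for illustration, not actual X-37B parameters.

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def vis_viva(r: float, a: float) -> float:
    """Orbital speed at radius r for an orbit with semi-major axis a."""
    return math.sqrt(MU * (2 / r - 1 / a))

r_leo = R_EARTH + 400_000     # 400 km perigee
r_heo = R_EARTH + 35_000_000  # hypothetical 35,000 km apogee

a_transfer = (r_leo + r_heo) / 2  # elliptical orbit spanning both
dv = vis_viva(r_leo, a_transfer) - vis_viva(r_leo, r_leo)

# Delta-v a thruster would need to circularize at perigee; drag passes
# through the upper atmosphere shed this speed for free over many orbits.
print(f"{dv:.0f} m/s saved")  # ~2,400 m/s for these assumed orbits
```

Shedding kilometers per second aerodynamically rather than with thrusters is what makes “maneuver without regret” plausible.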

Then, Whiting said Space Command could transform how it operates by employing “maneuver warfare” as the Army, Navy, and Air Force do. “We think we need to move toward a joint function of true maneuver advantage in space.”


Feds arrest man for sharing DVD rip of Spider-Man movie with millions online

A 37-year-old Tennessee man was arrested Thursday, accused of stealing Blu-rays and DVDs from a manufacturing and distribution company used by major movie studios and sharing them online before the movies’ scheduled release dates.

According to a US Department of Justice press release, Steven Hale worked at the DVD company and allegedly stole “numerous ‘pre-release’ DVDs and Blu-rays” between February 2021 and March 2022. He then allegedly “ripped” the movies, “bypassing encryption that prevents unauthorized copying” and shared copies widely online. He also supposedly sold the actual stolen discs on e-commerce sites, the DOJ alleged.

Hale has been charged with “two counts of criminal copyright infringement and one count of interstate transportation of stolen goods,” the DOJ said. He faces a maximum sentence of five years for the former, and 10 years for the latter.

Among the blockbuster movies that Hale is accused of stealing are Dune, F9: The Fast Saga, Venom: Let There Be Carnage, Godzilla vs. Kong, and, perhaps most notably, Spider-Man: No Way Home.

The DOJ claimed that “copies of Spider-Man: No Way Home were downloaded tens of millions of times, with an estimated loss to the copyright owner of tens of millions of dollars.”

In 2021, when the Spider-Man movie was released in theaters only, it became the first movie during the COVID-19 pandemic to gross more than $1 billion at the box office, Forbes noted. But for those unwilling to venture out to see the movie, Forbes reported, the temptation to find leaks and torrents apparently became hard to resist. It was in this climate that Hale is accused of widely sharing copies of the movie before it was released online.


iPhone 16e review: The most expensive cheap iPhone yet


The iPhone 16e rethinks—and prices up—the basic iPhone.

The iPhone 16e, with a notch and an Action Button. Credit: Samuel Axon

For a long time, the cheapest iPhones were basically just iPhones that were older than the current flagship, but last week’s release of the $600 iPhone 16e marks a big change in how Apple is approaching its lineup.

Rather than a repackaging of an old iPhone, the 16e is the latest main iPhone—that is, the iPhone 16—with a bunch of stuff stripped away.

There are several potential advantages to this change. In theory, it allows Apple to support its lower-end offerings for longer with software updates, and it gives entry-level buyers access to more current technologies and features. It also simplifies the marketplace of accessories and the like.

There’s bad news, too, though: Since it replaces the much cheaper iPhone SE in Apple’s lineup, the iPhone 16e significantly raises the financial barrier to entry for iOS (the SE started at $430).

We spent a few days trying out the 16e and found that it’s a good phone—it’s just too bad it’s a little more expensive than the entry-level iPhone should ideally be. In many ways, this phone solves more problems for Apple than it does for consumers. Let’s explore why.


A beastly processor for an entry-level phone

Like the 16, the 16e has Apple’s A18 chip, the most recent in the made-for-iPhone line of Apple-designed chips. There’s only one notable difference: This variation of the A18 has just four GPU cores instead of five. That will show up in benchmarks and in a handful of 3D games, but it shouldn’t make too much of a difference for most people.

It’s a significant step up over the A15 found in the final 2022 refresh of the iPhone SE, enabling a handful of new features like AAA games and Apple Intelligence.

The A18’s inclusion is good for both Apple and the consumer; Apple gets to establish a new, higher baseline of performance when developing new features for current and future handsets, and consumers likely get many more years of software updates than they’d get on the older chip.

The key example of a feature enabled by the A18 that Apple would probably like us all to talk about the most is Apple Intelligence, a suite of features utilizing generative AI to solve some user problems or enable new capabilities across iOS. By enabling these for the cheapest iPhone, Apple is making its messaging around Apple Intelligence a lot easier; it no longer needs to put effort into clarifying that you can use X feature with this new iPhone but not that one.

We’ve written a lot about Apple Intelligence already, but here’s the gist: There are some useful features here in theory, but Apple’s models are clearly a bit behind the cutting edge, and results for things like notifications summaries or writing tools are pretty mixed. It’s fun to generate original emojis, though!

The iPhone 16e can even use Visual Intelligence, which actually is handy sometimes. On my iPhone 16 Pro Max, I can point the rear camera at an object and press the camera button a certain way to get information about it.

I wouldn’t have expected the 16e to support this, but it does, via the Action Button (which was first introduced in the iPhone 15 Pro). This is a reprogrammable button that can perform a variety of functions, albeit just one at a time. Visual Intelligence is one of the options here, which is pretty cool, even though it’s not essential.

The screen is the biggest upgrade over the SE

Also like the 16, the 16e has a 6.1-inch display. The resolution’s a bit different, though; it’s 2,532 by 1,170 pixels instead of 2,556 by 1,179. It also has a notch instead of the Dynamic Island seen in the 16. All this makes the iPhone 16e’s display seem like a very close match to the one seen in 2022’s iPhone 14—in fact, it might literally be the same display.
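For the spec-curious, the near-identical pixel density works out as follows (a quick check using Apple’s fine-print panel diagonals; the arithmetic is mine, not Apple’s):

```python
import math

# Quick pixel-density check: both panels land at Apple's familiar ~460 ppi,
# which is why the 16e's display reads like an iPhone 14 panel.

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2532, 1170, 6.06)))  # iPhone 16e / 14-style panel: ~460
print(round(ppi(2556, 1179, 6.12)))  # iPhone 16: ~460
```
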

I really missed the Dynamic Island while using the iPhone 16e—it’s one of my favorite new features added to the iPhone in recent years, as it consolidates what was previously a mess of notification schemes in iOS. Plus, it’s nice to see things like Uber and DoorDash ETAs and sports scores at a glance.

The main problem with losing the Dynamic Island is that we’re back to the old minor mess of notifications approaches, and I guess Apple has to keep supporting the old ways for a while yet. That genuinely surprises me; I would have thought Apple would want to unify notifications and activities with the Dynamic Island just like the A18 allows the standardization of other features.

This seems to indicate that the Dynamic Island is a fair bit more expensive to include than the good old camera notch that flagship iPhones had been rocking since 2017’s iPhone X.

That compromise aside, the display on the iPhone 16e is ridiculously good for a phone at this price point, and it makes the old iPhone SE’s small LCD display look like it’s from another eon entirely by comparison. It gets brighter for both HDR content and sunny-day operation; the blacks are inky and deep, and the contrast and colors are outstanding.

It’s the best thing about the iPhone 16e, even if it isn’t quite as refined as the screens in Apple’s current flagships. Most people would never notice the difference between the screens in the 16e and the iPhone 16 Pro, though.

There is one other screen feature I miss from the higher-end iPhones you can buy in 2025: Those phones can drop the display all the way down to 1 nit, which is awesome for using the phone late at night in bed without disturbing a sleeping partner. Like earlier iPhones, the 16e can only get so dark.

It gets quite bright, though; Apple claims it typically reaches 800 nits in peak brightness but that it can stretch to 1200 when viewing certain HDR photos and videos. That means it gets about twice as bright as the SE did.

Connectivity is key

The iPhone 16e supports the core suite of connectivity options found in modern phones. There’s Wi-Fi 6, Bluetooth 5.3, and Apple’s usual limited implementation of NFC.

There are three new things of note here, though, and they’re good, neutral, and bad, respectively.

USB-C

Let’s start with the good. We’ve moved from Apple’s proprietary Lightning port found in older iPhones (including the final iPhone SE) toward USB-C, now a near-universal standard on mobile devices. It allows faster charging and more standardized charging cable support.

Sure, it’s a bummer to start over if you’ve spent years buying Lightning accessories, but it’s absolutely worth it in the long run. This change means that the entire iPhone line has now abandoned Lightning, so all iPhones and Android phones will have the same main port for years to come. Finally!

The finality of this shift solves a few problems for Apple: It greatly simplifies the accessory landscape and allows the company to move toward producing a smaller range of cables.

Satellite connectivity

Recent flagship iPhones have gradually added a small suite of features that utilize satellite connectivity to make life a little easier and safer.

Among those are crash detection and roadside assistance. The former will use the sensors in the phone to detect if you’ve been in a car crash and contact help, and the latter allows you to text for help when you’re outside of cellular reception in the US and UK.

There are also Emergency SOS and Find My via satellite, which let you communicate with emergency responders from remote places and allow you to be found.

Along with a more general feature that allows Messages via satellite, these features can greatly expand your options if you’re somewhere remote, though they’re not as easy to use and responsive as using the regular cellular network.

Where’s MagSafe?

I don’t expect the 16e to have all the same features as the 16, which is $200 more expensive. In fact, it has more modern features than I think most of its target audience needs (more on that later). That said, there’s one notable omission that makes no sense to me at all.

The 16e does not support MagSafe, a standard for connecting accessories to the back of the device magnetically, often while allowing wireless charging via the Qi standard.

Qi wireless charging is still supported, albeit at a slow 7.5 W, but there are no magnets, meaning a lot of existing MagSafe accessories are a lot less useful with this phone, if they’re usable at all. To be fair, the SE didn’t support MagSafe either, but every new iPhone design since the iPhone 12 way back in 2020 has—and not just the premium flagships.

It’s not like the MagSafe accessory ecosystem was some bottomless well of innovation, but that magnetic alignment is handier than you might think, whether we’re talking about making sure the phone locks into place for the fastest wireless charging speeds or hanging the phone on a car dashboard to use GPS on the go.

It’s one of those things where folks coming from much older iPhones may not care because they don’t know what they’re missing, but it could be annoying in households with multiple generations of iPhones, and it just doesn’t make any sense.

Most of Apple’s choices in the 16e seem to serve the goal of unifying the whole iPhone lineup to simplify the message for consumers and make things easier for Apple to manage efficiently, but the dropping of MagSafe is bizarre.

It almost makes me think that Apple might plan to drop MagSafe from future flagship iPhones, too, and go toward something new, just because that’s the only explanation I can think of. That otherwise seems unlikely to me right now, but I guess we’ll see.

The first Apple-designed cellular modem

We’ve been seeing rumors that Apple planned to drop third-party modems from companies like Qualcomm for years. As far back as 2018, Apple was poaching Qualcomm employees in an adjacent office in San Diego. In 2020, Apple SVP Johny Srouji announced to employees that work had begun.

It sounds like development has been challenging, but the first Apple-designed modem has arrived here in the 16e of all places. Dubbed the C1, it’s… perfectly adequate. It’s about as fast or maybe just a smidge slower than what you get in the flagship phones, but almost no user would notice any difference at all.

That’s really a win for Apple, which has struggled with a tumultuous relationship with its partners here for years and which has long run into space problems in its phones in part because the third-party modems weren’t compact enough.

This change may not matter much for the consumer beyond freeing up just a tiny bit of space for a slightly larger battery, but it’s another step in Apple’s long journey to ultimately and fully control every component in the iPhone that it possibly can.

Bigger is better for batteries

There is one area where the 16e is actually superior to the 16, much less the SE: battery life. The 16e reportedly has a 3,961 mAh battery, the largest in any of the many iPhones with roughly this size screen. Apple says it offers up to 26 hours of video playback, which is the kind of number you expect to see in a much larger flagship phone.

I charged this phone three times in just under a week of use, though I wasn’t heavily hitting 5G networks, playing many 3D games, or cranking the brightness way up all the time.

That’s a bit of a bump over the 16, but it’s a massive leap over the SE, which promised a measly 15 hours of video playback. Every single phone in Apple’s lineup now has excellent battery life by any standard.

Quality over quantity in the camera system

The 16e’s camera system leaves the SE in the dust, but it’s no match for the robust system found in the iPhone 16. Regardless, it’s way better than you’d typically expect from a phone at this price.

Like the 16, the 16e has a 48 MP “Fusion” wide-angle rear camera. It typically doesn’t take photos at 48 MP (though you can do that while compromising color detail). Rather, 24 MP is the target. The 48 MP camera enables 2x zoom that is nearly visually indistinguishable from optical zoom.
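
The near-optical 2x zoom falls out of simple sensor arithmetic (an illustrative sketch, not Apple’s actual imaging pipeline):

```python
# Why a 48 MP sensor gives near-optical 2x zoom: cropping the central half
# of each dimension doubles the effective focal length while still leaving
# a ~12 MP image, so no upscaling is needed. Illustrative arithmetic only.

full_w, full_h = 8064, 6048              # ~48 MP sensor readout (4:3)
crop_w, crop_h = full_w // 2, full_h // 2

print(full_w * full_h / 1e6)  # ~48.8 MP at 1x
print(crop_w * crop_h / 1e6)  # ~12.2 MP at 2x crop: still plenty of detail
```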

Based on both the specs and photo comparisons, the main camera sensor in the 16e appears to me to be exactly the same as that one found in the 16. We’re just missing the ultra-wide lens (which allows more zoomed-out photos, ideal for groups of people in small spaces, for example) and several extra features like advanced image stabilization, the newest Photographic Styles, and macro photography.

The iPhone 16e takes excellent photos in bright conditions. Credit: Samuel Axon

That’s a lot of missing features, sure, but it’s wild how good this camera is for this price point. Even something like the Pixel 8a can’t touch it (though to be fair, the Pixel 8a is $100 cheaper).

Video capture is a similar situation: The 16e shoots at the same resolutions and framerates as the 16, but it lacks a few specialized features like Cinematic and Action modes. There’s also a front-facing camera with the TrueDepth sensor for Face ID in that notch, and it has comparable specs to the front-facing cameras we’ve seen in a couple of years of iPhones at this point.

If you were buying a phone for the cameras, this wouldn’t be the one for you. It’s absolutely worth paying another $200 for the iPhone 16 (or even just $100 for the iPhone 15 for the ultra-wide lens for 0.5x zoom; the 15 is still available in the Apple Store) if that’s your priority.

The iPhone 16’s macro mode isn’t available here, so ultra-close-ups look fuzzy. Credit: Samuel Axon

But for the 16e’s target consumer (mostly folks with an iPhone 11 or older or an iPhone SE, who just want the cheapest functional iPhone they can get), it’s almost overkill. I’m not complaining, though it’s a contributing factor to the phone’s cost compared to entry-level Android phones and Apple’s old iPhone SE.

RIP small phones, once and for all

In one fell swoop, the iPhone 16e’s replacement of the iPhone SE eliminates a whole range of legacy technologies that have held on at the lower end of the iPhone lineup for years. Gone are Touch ID, the home button, LCD displays, and Lightning ports—they’re replaced by Face ID, swipe gestures, OLED, and USB-C.

Newer iPhones have had most of those things for quite some time. The latest feature was USB-C, which came in 2023’s iPhone 15. The removal of the SE from the lineup catches the bottom end of the iPhone up with the top in these respects.

That said, the SE had maintained one positive differentiator, too: It was small enough to be used one-handed by almost anyone. With the end of the SE and the release of the 16e, the one-handed iPhone is well and truly dead. Of course, most people have been clear they want big screens and batteries above almost all else, so the writing had been on the wall for a while for smaller phones.

The death of the iPhone SE ushers in a new era for the iPhone with bigger and better features—but also bigger price tags.

A more expensive cheap phone

Assessing the iPhone 16e is a challenge. It’s objectively a good phone—good enough for the vast majority of people. It has a nearly top-tier screen (though it clocks in at 60Hz, while some Android phones close to this price point manage 120Hz), a camera system that delivers on quality even if it lacks special features seen in flagships, strong connectivity, and performance far above what you’d expect at this price.

If you don’t care about extra camera features or nice-to-haves like MagSafe or the Dynamic Island, it’s easy to recommend saving a couple hundred bucks compared to the iPhone 16.

The chief criticism I have that relates to the 16e has less to do with the phone itself than Apple’s overall lineup. The iPhone SE retailed for $430, nearly half the price of the 16. By making the 16e the new bottom of the lineup, Apple has significantly raised the financial barrier to entry for iOS.

Now, it’s worth mentioning that a pretty big swath of the target market for the 16e will buy it subsidized through a carrier, so they might not pay that much up front. I always recommend buying a phone directly if you can, though, as carrier subsidization deals are usually worse for the consumer.

The 16e’s price might push more people to go for the subsidy. Plus, it’s just more phone than some people need. For example, I love a high-quality OLED display for watching movies, but I don’t think the typical iPhone SE customer was ever going to care about that.

That’s why I believe the iPhone 16e solves more problems for Apple than it does for the consumer. In multiple ways, it allows Apple to streamline production, software support, and marketing messaging. It also drives up the average price per unit across the whole iPhone line and will probably encourage some people who would have spent $430 to spend $600 instead, possibly improving revenue. All told, it’s a no-brainer for Apple.

It’s just a mixed bag for the sort of no-frills consumer who wants a minimum viable phone and who for one reason or another didn’t want to go the Android route. The iPhone 16e is definitely a good phone—I just wish there were more options for that consumer.

The good

  • Dramatically improved display compared to the iPhone SE
  • Likely stronger long-term software support than most previous entry-level iPhones
  • Good battery life and incredibly good performance for this price point
  • A high-quality camera, especially for the price

The bad

  • No ultra-wide camera
  • No MagSafe
  • No Dynamic Island

The ugly

  • Significantly raises the entry price point for buying an iPhone


Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.


Starlink benefits as Trump admin rewrites rules for $42B grant program

Don’t be “technology-blind,” broadband group says

The Benton Institute for Broadband & Society criticized what it called “Trump’s BEAD meddling,” saying it would “leave millions of Americans with broadband that is slower, less reliable, and more expensive.” The shift to a “technology-neutral” approach should not be “technology-blind,” the advocacy group said.

“Fiber broadband is widely understood to be better than other Internet options—like Starlink’s satellites—because it delivers significantly faster speeds, is more reliable due to its resistance to interference (from weather, foliage, terrain, etc), has higher bandwidth capacity, and offers symmetrical upload and download speeds, making it ideal for activities like telehealth, online learning, streaming, and gaming that require consistent high performance,” the group said.

It’s ultimately up to individual states to distribute funds to ISPs after getting their allocations from the US government, though the states have to follow rules issued by federal officials. No one knows exactly how much each Internet provider will receive, but a Wall Street Journal report this week said the new rules could help Starlink get nearly half of the available funding.

“Under the BEAD program’s original rules, Starlink was expected to get up to $4.1 billion, said people familiar with the matter. With Lutnick’s overhaul, Starlink, a unit of Musk’s SpaceX, could receive $10 billion to $20 billion, they said,” according to the WSJ report.

The end of BEAD’s fiber preference would also help cable and fixed wireless providers access grant funding. Lobby groups for those industries have been calling for rule changes to help their members obtain grants.

While the Commerce Department is moving ahead with BEAD changes on its own, Republicans are also proposing a rewrite of the law. House Communications and Technology Subcommittee Chairman Richard Hudson (R-N.C.) yesterday announced legislation that his office said would eliminate “burdensome conditions imposed by the Biden-Harris Administration, including those related to labor, climate change, and rate regulation, that made deployment more expensive and participation less attractive.”


You knew it was coming: Google begins testing AI-only search results

Google has become so integral to online navigation that its name became a verb, meaning “to find things on the Internet.” Soon, Google might just tell you what’s on the Internet instead of showing you. The company has announced an expansion of its AI search features, powered by Gemini 2.0. Everyone will soon see more AI Overviews at the top of the results page, but Google is also testing a more substantial change in the form of AI Mode. This version of Google won’t show you the 10 blue links at all—Gemini completely takes over the results in AI Mode.

This marks the debut of Gemini 2.0 in Google search. Google announced the first Gemini 2.0 models in December 2024, beginning with the streamlined Gemini 2.0 Flash. The heavier versions of Gemini 2.0 are still in testing, but Google says it has tuned AI Overviews with this model to offer help with harder questions in the areas of math, coding, and multimodal queries.

With this update, you will begin seeing AI Overviews on more results pages, and minors with Google accounts will see AI results for the first time. In fact, even logged out users will see AI Overviews soon. This is a big change, but it’s only the start of Google’s plans for AI search.

Gemini 2.0 also powers the new AI Mode for search. It’s launching as an opt-in feature via Google’s Search Labs, offering a totally new alternative to search as we know it. This custom version of the Gemini large language model (LLM) skips the standard web links that have been part of every Google search thus far. The model uses “advanced reasoning, thinking, and multimodal capabilities” to build a response to your search, which can include web summaries, Knowledge Graph content, and shopping data. It’s essentially a bigger, more complex AI Overview.

As Google has previously pointed out, many searches are questions rather than a string of keywords. For those kinds of queries, an AI response could theoretically provide an answer more quickly than a list of 10 blue links. However, that relies on the AI response being useful and accurate, something that often still eludes generative AI systems like Gemini.


Yes, we are about to be treated to a second lunar landing in a week

Because the space agency now has some expectation that Intuitive Machines will be fully successful with its second landing attempt, it has put some valuable experiments on board. Principal among them is the PRIME-1 experiment, which has an ice drill to sample any ice that lies below the surface. Drill, baby, drill.

The Athena lander also is carrying a NASA-funded “hopper” that will fire small hydrazine rockets to bounce around the Moon and explore lunar craters near the South Pole. It might even fly into a lava tube. If this happens, it will be insanely cool.

Because this is a commercial program, NASA has encouraged the delivery companies to find additional, private payloads. Athena has some nifty ones, including a small rover from Lunar Outpost, a data center from Lonestar Data Holdings, and a 4G cellular network from Nokia. So there’s a lot riding on Athena’s success.

So will it be a success?

“Of course, everybody’s wondering, are we gonna land upright?” Tim Crain, Intuitive Machines’ chief technology officer, told Ars. “So, I can tell you our laser test plan is much more comprehensive than those last time.”

During the first landing about a year ago, Odysseus’ laser-based system for measuring altitude failed during the descent. Without access to altitude data, the spacecraft touched down faster than intended, and on a 12-degree slope, which exceeded the 10-degree limit. As a result, the lander skidded across the surface, and one of its six legs broke, causing it to fall over.

Crain said about 10 major changes were made to the spacecraft and its software for the second mission. On top of that, about 30 smaller things, such as more efficient file management, were updated on the new vehicle.

In theory, everything should work this time. Intuitive Machines has the benefit of everything it learned from the first mission, and nearly everything else worked right during that first attempt. But the acid test comes on Thursday.

The company and NASA will provide live coverage of the attempt beginning at 11:30 am ET (16:30 UTC) on NASA+, with landing set for about one hour later. The Moon may be a harsh mistress, but hopefully not too harsh.
