
HP and Dell disable HEVC support built into their laptops’ CPUs

The OEMs’ disabling of codec hardware also comes as associated costs for the international video compression standard are set to increase in January, as licensing administrator Access Advance announced in July. Per a breakdown from patent pool administrator Via Licensing Alliance, US royalty rates for HEVC for 100,001+ units are increasing from $0.20 each to $0.24 each. To put that into perspective, in Q3 2025, HP sold 15,002,000 laptops and desktops and Dell sold 10,166,000, per Gartner.
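As a rough back-of-envelope (assuming every shipped unit owed the full per-unit royalty, and ignoring the caps and discounts real pool licenses typically include), the rate change alone would add meaningful quarterly costs at those shipment volumes:

```python
# Back-of-envelope only: assumes every unit owed the full per-unit US
# royalty and ignores any volume caps or discounts in actual license terms.
old_rate, new_rate = 0.20, 0.24  # dollars per unit
shipments = {"HP": 15_002_000, "Dell": 10_166_000}  # Q3 2025, per Gartner

for oem, units in shipments.items():
    delta = units * (new_rate - old_rate)
    print(f"{oem}: ~${delta:,.0f} more per quarter at the new rate")
```

At HP's volume, that works out to roughly $600,000 in additional quarterly exposure; at Dell's, about $400,000.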

Last year, NAS company Synology announced that it was ending support for HEVC, H.264/AVC, and VC-1 transcoding on its DiskStation Manager and BeeStation OS platforms, saying that “support for video codecs is widespread on end devices, such as smartphones, tablets, computers, and smart TVs.”

“This update reduces unnecessary resource usage on the server and significantly improves media processing efficiency. The optimization is particularly effective in high-user environments compared to traditional server-side processing,” the announcement said.

Despite the growing costs and complications with HEVC licenses and workarounds, breaking features that have been widely available for years will likely lead to confusion and frustration.

“This is pretty ridiculous, given these systems are $800+ a machine, are part of a ‘Pro’ line (jabs at branding names are warranted – HEVC is used professionally), and more applications these days outside of Netflix and streaming TV are getting around to adopting HEVC,” a Redditor wrote.


Microsoft makes Zork I, II, and III open source under MIT License

Zork, the classic text-based adventure game of incalculable influence, has been made available under the MIT License, along with the sequels Zork II and Zork III.

The move to take these Zork games open source comes as the result of the shared work of the Xbox and Activision teams along with Microsoft’s Open Source Programs Office (OSPO). Parent company Microsoft owns the intellectual property for the franchise.

Only the code itself has been made open source. Ancillary items like commercial packaging and marketing assets and materials remain proprietary, as do related trademarks and brands.

“Rather than creating new repositories, we’re contributing directly to history. In collaboration with Jason Scott, the well-known digital archivist of Internet Archive fame, we have officially submitted upstream pull requests to the historical source repositories of Zork I, Zork II, and Zork III. Those pull requests add a clear MIT LICENSE and formally document the open-source grant,” says the announcement co-written by Stacy Haffner (director of the OSPO at Microsoft) and Scott Hanselman (VP of Developer Community at the company).

Microsoft gained control of the Zork IP when its acquisition of Activision Blizzard closed in 2023; Activision had come to own it when it acquired original publisher Infocom in 1986. There was an attempt to sell Zork publishing rights directly to Microsoft even earlier in the ’80s, as founder Bill Gates was a big Zork fan, but it fell through, so it’s funny that the franchise eventually ended up in the same place.

To be clear, this is not the first time the original Zork source code has been available to the general public. Scott uploaded it to GitHub in 2019, but the license situation was unresolved, and Activision or Microsoft could have issued a takedown request had they wished to.

Now that’s obviously not at risk of happening anymore.


Flying with whales: Drones are remaking marine mammal research

In 2010, the Deepwater Horizon oil rig exploded in the Gulf of Mexico, causing one of the largest marine oil spills ever. In the aftermath of the disaster, whale scientist Iain Kerr traveled to the area to study how the spill had affected sperm whales, aiming specialized darts at the animals to collect pencil eraser-sized tissue samples.

It wasn’t going well. Each time his boat approached a whale surfacing for air, the animal vanished beneath the waves before he could reach it. “I felt like I was playing Whac-A-Mole,” he says.

As darkness fell, a whale dove in front of Kerr and covered him in whale snot. That unpleasant experience gave Kerr, who works at the conservation group Ocean Alliance, an idea: What if he could collect that same snot by somehow flying over the whale? Researchers can glean much information from whale snot, including the animal’s DNA sequence, its sex, whether it is pregnant, and the makeup of its microbiome.

After many experiments, Kerr’s idea turned into what is today known as the SnotBot: a drone fitted with six petri dishes that collect a whale’s snot by flying over the animal as it surfaces and exhales through its blowhole. Today, drones like this are deployed to gather snot all over the world, and not just from sperm whales: They’re also collecting this scientifically valuable mucus from other species, such as blue whales and dolphins. “I would say drones have changed my life,” says Kerr.

S’not just mucus

Gathering snot is one of many ways that drones are being used to study whales. In the past 10 to 15 years, drone technology has made great strides, becoming affordable and easy to use. This has been a boon for researchers. Scientists “are finding applications for drones in virtually every aspect of marine mammal research,” says Joshua Stewart, an ecologist at the Marine Mammal Institute at Oregon State University.


Google’s new Nano Banana Pro uses Gemini 3 power to generate more realistic AI images

Detecting less sloppy slop

Google is not just blowing smoke—the new image generator is much better. Its grasp of the world and the nuance of language is apparent, producing much more realistic results. Even before this, AI images were getting so good that it could be hard to spot them at a glance. Gone are the days when you could just count fingers to identify AI. Google is making an effort to help identify AI content, though.

Images generated with Nano Banana Pro continue to have embedded SynthID watermarks that Google’s tools can detect. The company is also adding more C2PA metadata to further label AI images. The Gemini app is part of this effort, too. Starting now, you can upload an image and ask something like “Is this AI?” The app won’t detect just any old AI image, but it will tell you if it’s a product of Google AI by checking for SynthID.

Gemini can now detect its own AI images.

At the same time, Google is making it slightly harder for people to know an image was generated with AI. Operating with the knowledge that professionals may want to generate images with Nano Banana Pro, Google has removed the visible watermark from images for AI Ultra subscribers. These images still have SynthID, but only the lower tiers have the Gemini twinkle in the corner.

While everyone can access the new Nano Banana Pro today, AI Ultra subscribers will enjoy the highest usage limits. Gemini Pro users will get a bit less access, and free users will get the lowest limits before being booted down to the non-pro version.


In 1982, a physics joke gone wrong sparked the invention of the emoticon


A simple proposal on a 1982 electronic bulletin board helped sarcasm flourish online.

Credit: Benj Edwards / DEC

On September 19, 1982, Carnegie Mellon University computer science research assistant professor Scott Fahlman posted a message to the university’s bulletin board software that would later come to shape how people communicate online. His proposal: use :-) and :-( as markers to distinguish jokes from serious comments. While Fahlman describes himself as “the inventor… or at least one of the inventors” of what would later be called the smiley face emoticon, the full story reveals something more interesting than a lone genius moment.

The whole episode started three days earlier when computer scientist Neil Swartz posed a physics problem to colleagues on Carnegie Mellon’s “bboard,” which was an early online message board. The discussion thread had been exploring what happens to objects in a free-falling elevator, and Swartz presented a specific scenario involving a lit candle and a drop of mercury.

That evening, computer scientist Howard Gayle responded with a facetious message titled “WARNING!” He claimed that an elevator had been “contaminated with mercury” and suffered “some slight fire damage” due to a physics experiment. Despite clarifying posts noting the warning was a joke, some people took it seriously.

A DECSYSTEM-20 KL-10 (1974) seen at the Living Computer Museum in Seattle. Scott Fahlman used a similar system with a terminal to propose his smiley concept. Credit: Jason Scott

The incident sparked immediate discussion about how to prevent such misunderstandings and the “flame wars” (heated arguments) that could result from misread intent.

“This problem caused some of us to suggest (only half seriously) that maybe it would be a good idea to explicitly mark posts that were not to be taken seriously,” Fahlman later wrote in a retrospective post published on his CMU website. “After all, when using text-based online communication, we lack the body language or tone-of-voice cues that convey this information when we talk in person or on the phone.”

On September 17, 1982, the day after the misunderstanding on the CMU bboard, Swartz made the first concrete proposal: “Maybe we should adopt a convention of putting a star in the subject field of any notice which is to be taken as a joke.”

Within hours, multiple Carnegie Mellon computer scientists weighed in with alternative proposals. Joseph Ginder suggested using % instead of *. Anthony Stentz proposed a nuanced system: “How about using * for good jokes and % for bad jokes?” Keith Wright championed the ampersand (&), arguing it “looks funny” and “sounds funny.” Leonard Hamey suggested # because “it looks like two lips with teeth showing between them.”

Meanwhile, some Carnegie Mellon users were already using their own solution. A group on the Gandalf VAX system later revealed they had been using __/ as “universally known as a smile” to mark jokes. But it apparently didn’t catch on beyond that local system.

The winning formula

Two days after Swartz’s initial proposal, Fahlman entered the discussion with his now-famous post: “I propose that the following character sequence for joke markers: :-) Read it sideways.” He added that serious messages could use :-(, noting, “Maybe we should mark things that are NOT jokes, given current trends.”

What made Fahlman’s proposal work wasn’t that he invented the concept of joke markers—Swartz had done that. It wasn’t that he invented smile symbols at Carnegie Mellon, since the __/ already existed. Rather, Fahlman synthesized the best elements from the ongoing discussion: the simplicity of single-character proposals, the visual clarity of face-like symbols, the sideways-reading principle hinted at by Hamey’s #, and a complete binary system that covered both humor :-) and seriousness :-(.

Early computer terminals like the DEC VT-100 did not support graphics, requiring typographic solutions for displaying “images.” Credit: Digital Equipment Corporation

The simplicity of Fahlman’s emoticons was key to their adoption. The university’s network ran on large DEC mainframes accessed via video terminals (Fahlman himself made his posts from a terminal attached to a DECSYSTEM-20) that were strictly limited to the 95 printable characters of the US-ASCII set. With no ability to display graphics or draw pixels, Fahlman’s solution used the only tools available: standard punctuation marks, turning the strict character grid of the terminal screen into a “picture.”
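A quick sketch makes the constraint concrete: every marker floated in the 1982 thread fits within that printable US-ASCII range, which is exactly why they could survive on any terminal of the era.

```python
# Every marker proposed in the 1982 bboard thread falls within the
# 95 printable US-ASCII characters (0x20-0x7E) a DEC terminal could show.
proposals = [":-)", ":-(", "*", "%", "&", "#", "__/"]

for marker in proposals:
    codes = " ".join(hex(ord(ch)) for ch in marker)
    assert all(0x20 <= ord(ch) <= 0x7E for ch in marker)
    print(f"{marker!r}: {codes}")
```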

The emoticons spread quickly across ARPAnet, the precursor to the modern Internet, reaching other universities and research labs. By November 10, 1982—less than two months later—Carnegie Mellon researcher James Morris began introducing the smiley emoticon concept to colleagues at Xerox PARC, complete with a growing list of variations. What started as an internal Carnegie Mellon convention over time became a standard feature of online communication, often simplified without the hyphen nose to :) or :(, among many other variations.

Lost backup tapes

There’s an interesting coda to this story: For years, the original bboard thread existed only in fading memory. The bulletin board posts had been deleted, and Carnegie Mellon’s computer science department had moved to new systems. The old messages seemed lost forever.

Between 2001 and 2002, Mike Jones, a former Carnegie Mellon researcher then working at Microsoft, sponsored what Fahlman calls a “digital archaeology” project. Jeff Baird and the Carnegie Mellon facilities staff undertook a painstaking effort: locating backup tapes from 1982, finding working tape drives that could read the obsolete media, decoding old file formats, and searching for the actual posts. The team recovered the thread, revealing not just Fahlman’s famous post but the entire three-day community discussion that led to it.

The recovered messages, which you can read here, show how collaboratively the emoticon was developed—not a lone genius moment but an ongoing conversation proposing, refining, and building on the group’s ideas. Fahlman had no idea his synthesis would become a fundamental part of how humans express themselves in digital text, but neither did Swartz, who first suggested marking jokes, or the Gandalf VAX users who were already using their own smile symbols.

From emoticon to emoji

While Fahlman’s text-based emoticons spread across Western online culture and remained text-character-based for a long time, Japanese mobile phone users in the late 1990s developed a parallel system: emoji. For years, Shigetaka Kurita’s 1999 set for NTT DoCoMo was widely cited as the original. However, recent discoveries have revealed earlier origins. SoftBank released a picture-based character set on mobile phones in 1997, and the Sharp PA-8500 personal organizer featured selectable icon characters as early as 1988.

Unlike emoticons that required reading sideways, emoji were small pictographic images that could convey emotion, objects, and ideas with more detail. When Unicode standardized emoji in 2010 and Apple added an emoji keyboard to iOS in 2011, the format exploded globally. Today, emoji have largely replaced emoticons in casual communication, though Fahlman’s sideways faces still appear regularly in text messages and social media posts.

IBM’s Code Page 437 character set included a smiley face as early as 1981. Credit: Matt Giuca

As Fahlman himself notes on his website, he may not have been “the first person ever to type these three letters in sequence.” Others, including teletype operators and private correspondents, may have used similar symbols before 1982, perhaps even as far back as 1648. Author Vladimir Nabokov suggested before 1982 that “there should exist a special typographical sign for a smile.” And the original IBM PC included a dedicated smiley character as early as 1981 (perhaps that should be considered the first emoji).

What made Fahlman’s contribution significant wasn’t absolute originality but rather proposing the right solution at the right time in the right context. From there, the smiley could spread across the emerging global computer network, and no one would ever misunderstand a joke online again. :-)


Benj Edwards is Ars Technica’s Senior AI Reporter and founded the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


Testing shows Apple N1 Wi-Fi chip improves on older Broadcom chips in every way

This year’s newest iPhones included one momentous change that marked a new phase in the evolution of Apple Silicon: the Apple N1, Apple’s first in-house chip made to handle local wireless connections. The N1 supports Wi-Fi 7, Bluetooth 6, and the Thread smart home communication protocol, and it replaces the third-party wireless chips (mostly made by Broadcom) that Apple used in older iPhones.

Apple claimed that the N1 would enable more reliable connectivity for local communication features like AirPlay and AirDrop but didn’t say anything about how users could expect it to perform. But Ookla, the folks behind the Speedtest app and website, have analyzed about five weeks’ worth of users’ testing data to get an idea of how the iPhone 17 lineup stacks up to the iPhone 16, as well as to Android phones with Wi-Fi chips from Qualcomm, MediaTek, and others.

While the N1 isn’t at the top of the charts, Ookla says Apple’s Wi-Fi chip “delivered higher download and upload speeds on Wi-Fi compared to the iPhone 16 across every studied percentile and virtually every region.” The median download speed for the iPhone 17 series was 329.56Mbps, compared to 236.46Mbps for the iPhone 16; the upload speed also jumped from 73.68Mbps to 103.26Mbps.
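For a sense of scale, those medians work out to roughly a 39 percent download and 40 percent upload improvement generation over generation:

```python
# Relative improvement implied by Ookla's median Wi-Fi figures
# for the iPhone 17 series versus the iPhone 16.
dl_16, dl_17 = 236.46, 329.56  # Mbps, median download
ul_16, ul_17 = 73.68, 103.26   # Mbps, median upload

dl_gain = (dl_17 / dl_16 - 1) * 100
ul_gain = (ul_17 / ul_16 - 1) * 100
print(f"download: +{dl_gain:.0f}%  upload: +{ul_gain:.0f}%")  # roughly +39% / +40%
```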

Ookla noted that the N1 seemed to improve scores most of all in the bottom 10th percentile of performance tests, “implying Apple’s custom silicon lifts the floor more than the ceiling.” The iPhone 17 also didn’t top Ookla’s global performance charts—Ookla found that the Pixel 10 Pro series slightly edges out the iPhone 17 in download speed, while the Xiaomi 15T Pro with MediaTek Wi-Fi silicon posted better upload speeds.


Celebrated game developer Rebecca Heineman dies at age 62

From champion to advocate

During her later career, Heineman served as a mentor and advisor to many, never shy about celebrating her past as a game developer during the golden age of the home computer.

Her mentoring skills became doubly important when she publicly came out as transgender in 2003. She became a vocal advocate for LGBTQ+ representation in gaming and served on the board of directors for GLAAD. Earlier this year, she received the Gayming Icon Award from Gayming Magazine.

Andrew Borman, who serves as director of digital preservation at The Strong National Museum of Play in Rochester, New York, told Ars Technica that her influence reached well beyond electronic entertainment. “Her legacy goes beyond her groundbreaking work in video games,” he told Ars. “She was a fierce advocate for LGBTQ rights and an inspiration to people around the world, including myself.”

The front cover of Dragon Wars on the Commodore 64, released in 1989. Credit: MobyGames

In the Netflix documentary series High Score, Heineman explained her early connection to video games. “It allowed me to be myself,” she said. “It allowed me to play as female.”

“I think her legend grew as she got older, in part because of her openness and approachability,” journalist Ernie Smith told Ars. “As the culture of gaming grew into an online culture of people ready to dig into the past, she remained a part of it in a big way, where her war stories helped fill in the lore about gaming’s formative eras.”

Celebrated to the end

Heineman was diagnosed with adenocarcinoma in October 2025 after experiencing shortness of breath at the PAX game convention. After diagnostic testing, doctors found cancer in her lungs and liver. That same month, she launched a GoFundMe campaign to help with medical costs. The campaign quickly surpassed its $75,000 goal, raising more than $157,000 from fans, friends, and industry colleagues.


OnePlus 15 review: The end of range anxiety


It keeps going and going and…

OnePlus delivers its second super-fast phone of 2025.

The OnePlus 15 represents a major design change. Credit: Ryan Whitwam

OnePlus got its start courting the enthusiast community by offering blazing-fast phones for a low price. While the prices aren’t quite as low as they once were, the new OnePlus 15 still delivers on value. Priced at $899, this phone sports the latest and most powerful Snapdragon processor, the largest battery in a mainstream smartphone, and a super-fast screen.

The OnePlus 15 still doesn’t deliver the most satisfying software experience, and the camera may actually be a step back for the company, but the things OnePlus gets right are very right. It’s a fast, sleek phone that runs for ages on a charge, and it’s a little cheaper than the competition. But its shortcomings make it hard to recommend this device over the latest from Google or Samsung—or even the flagship phone OnePlus released 10 months ago.

US buyers have time to mull it over, though. Because of the recent government shutdown, Federal Communications Commission approval of the OnePlus 15 has been delayed. The company says it will release the phone as soon as it can, but there’s no exact date yet.

A sleek but conventional design

After a few years of phones with a distinctly “OnePlus” look, the OnePlus 15 changes up the formula by looking more like everything else. The overall shape is closer to that of phones from Samsung, Apple, and Google than the OnePlus 13. That said, the OnePlus 15 is extremely well-designed, and it’s surprisingly lightweight (211g) for how much power it packs. It’s sturdy, offering full IP69K sealing, and it uses the latest Gorilla Glass Victus 2 on the screen. An ultrasonic fingerprint scanner under the display works just as well as any other flagship phone’s fingerprint unlock.

Specs at a glance: OnePlus 15
SoC: Snapdragon 8 Elite Gen 5
Memory: 12GB or 16GB
Storage: 256GB or 512GB
Display: 6.78″ OLED, 2772 x 1272, 1-165 Hz
Cameras: 50 MP primary (f/1.8, OIS); 50 MP ultrawide (f/2.0); 50 MP 3.5x telephoto (f/2.8, OIS); 32 MP selfie (f/2.4)
Software: Android 16; four years of OS updates, six years of security patches
Battery: 7,300 mAh; 100 W wired charging (80 W with included plug); 50 W wireless charging
Connectivity: Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz 5G, USB-C 3.2 Gen 1
Measurements: 161.4 x 76.7 x 8.1 mm; 211 g

OnePlus managed to cram a 7,300 mAh battery in this phone without increasing the weight compared to last year’s model. Flagship phones like the Samsung Galaxy S25 Ultra and Pixel 10 Pro XL are at 5,000 mAh or a little more, and they weigh the same or a bit more. Adding almost 50 percent capacity on top of that without making the phone ungainly is an impressive feat of engineering.
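The arithmetic behind that "almost 50 percent" claim is straightforward:

```python
# Capacity comparison against the ~5,000 mAh cells in competing flagships
# (e.g., Galaxy S25 Ultra, Pixel 10 Pro XL).
oneplus_15 = 7300        # mAh
typical_flagship = 5000  # mAh

extra = (oneplus_15 / typical_flagship - 1) * 100
print(f"{extra:.0f}% more capacity")  # 46% more capacity
```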

The display is big, bright, and fast. Credit: Ryan Whitwam

That said, this is still a very large phone. The OLED screen measures 6.78 inches and has a resolution of 1272 x 2772. That’s a little lower than last year’s phone, which almost exactly matched the Galaxy S25 Ultra’s 1440p screen. Even looking at the OP13 and OP15 side-by-side, the difference in display resolution is negligible. You might notice the increased refresh rate, though. During normal use, the OnePlus 15 can hit 120 Hz (or as low as 1 Hz to save power), but in supported games, it can reach 165 Hz.

While the phone’s peak brightness is a bit lower than last year’s phone (3,600 vs. 4,500 nits), that’s not the full-screen brightness you’ll see day to day. The standard high-brightness mode (HBM) rating is a bit higher at 1,800 nits, which is even better than what you’ll get on phones like the Galaxy S25 Ultra. The display is not just readable outside—it looks downright good.

OnePlus offers the phone in a few colors, but the differences are more significant than in your average smartphone lineup. The Sand Storm unit we’ve tested is a light tan color that would be impossible to anodize. Instead, this version of the phone uses a finish known as micro-arc oxidation (MAO), which is supposedly even more durable than PVD titanium. OnePlus says this is the first phone with this finish, but it’s actually wrong about that. The 2012 HTC One S also had an MAO finish that was known to chip over time. OnePlus says its take on MAO is more advanced and was tested with a device known as a nanoindenter that can assess the mechanical properties of a material with microscopic precision.

The OnePlus 15 looks nice, but it also looks more like everything else. It does have an IR blaster, though. Credit: Ryan Whitwam

Durability aside, the MAO finish feels very interesting—it’s matte and slightly soft to the touch but cool like bare metal. It’s very neat, but it’s probably not neat enough to justify an upgrade if you’re looking at the base model. You can only get Sand Storm with the upgraded $999 model, which has 512GB of storage and 16GB of RAM.

The Sand Storm variant also has a fiberglass back panel rather than the glass used on other versions of the phone. All colorways have the same squircle camera module in the corner, sporting three large-ish sensors. Unlike some competing devices, the camera bump isn’t too prominent. So the phone almost lies flat—it still rocks a bit when sitting on a table, but not as much as phones like the Galaxy S25 Ultra.

For years, OnePlus set itself apart with the alert slider, but this is the company’s first flagship phone to drop that feature. Instead, you get a configurable action button similar to the iPhone. By default, the “Plus Key” connects to the Plus Mind AI platform, allowing you to take screenshots and record voice notes to load them instantly into the AI. More on that later.

The Plus Key (bottom) has replaced the alert slider (top). We don’t like this. Credit: Ryan Whitwam

You can change the key to control ring mode, the flashlight, or several other features. However, the button feels underutilized, and the default behavior is odd. You don’t exactly need an entire physical control to take screenshots when that’s already possible by holding the power and volume down buttons like on any other phone. The alert slider will be missed.

Software and AI

The OnePlus 15 comes with OxygenOS 16, which is based on Android 16. The software is essentially the same as what you’d find on OnePlus and Oppo phones in China but with the addition of Google services. The device inherits some quirks from the Chinese version of the software, known as ColorOS. Little by little, the international OxygenOS has moved closer to the software used in China. For example, OnePlus is very invested in slick animations in OxygenOS, which can be a bit distracting at times.

Some things that should be simple often take multiple confirmation steps in OxygenOS. Case in point: Removing an app from your home screen requires a long-press and two taps, and OnePlus chose to separate icon colors and system colors in the labyrinthine theming menu. There are also so many little features vying for your attention that it takes a day or two just to encounter all of them and tap through the on-screen tutorials.

Plus Mind aims to organize your data in screenshots and voice notes. Credit: Ryan Whitwam

OnePlus has continued aping the iPhone to an almost embarrassing degree with this phone. There are Dynamic Island-style notifications for Android’s live alerts, which look totally alien in this interface. The app drawer also has a category view like iOS, but the phone doesn’t know what most of our installed apps are. Thus, “Other” becomes the largest category, making this view rather useless.

OnePlus was a bit slower than most to invest in generative AI features, but there are plenty baked into the OnePlus 15. The most prominent AI feature is Mind Space, which lets you save voice notes and screenshots with the Plus Key; they become searchable after being processed with AI. This is most similar to Nothing’s Essential Space. Google’s Pixel Screenshots app doesn’t do voice, but it offers a more conversational interface that can pull information from your screens rather than just find them, which is all Mind Space can do.

While OnePlus has arguably the most capable on-device AI hardware with the Snapdragon 8 Elite Gen 5, it’s not relying on it for much AI processing. Only some content from Plus Mind is processed locally, and the rest is uploaded to the company’s Private Computing Cloud. Features like AI Writer and the AI Recorder operate entirely in the cloud system. There’s also an AI universal search feature that sends information to the cloud, but this is thankfully disabled by default. OnePlus says it has full control of these servers, noting that encryption prevents anyone else (even OnePlus itself) from accessing your data.

The categorized app drawer is bad at recognizing apps. Credit: Ryan Whitwam

So OnePlus is at least saying the right things about privacy—Google has a similar pitch for its new private AI cloud compute environment. Regardless of whether you believe that, though, there are other drawbacks to leaning so heavily on the cloud. Features that run workloads in the Private Computing Cloud will have more latency and won’t work without a solid internet connection. It also just seems like a bit of a waste not to take advantage of Qualcomm’s super-powerful on-device capabilities.

AI features on the OnePlus 15 are no more or less useful than the versions on other current smartphones. If you want a robot to write Internet comments for you, the OnePlus 15 can do that just fine. If you don’t want to use AI on your phone, you can remap the Plus Key to something else and ignore the AI-infused stock apps. There are plenty of third-party alternatives that don’t have AI built in.

OnePlus doesn’t have the best update policy, but it’s gotten better over time. The OnePlus 15 is guaranteed four years of OS updates and six years of security patches. The market leaders are Google and Samsung, which offer seven years of full support.

Performance and battery

There’s no two ways about it: The OnePlus 15 is a ridiculously fast phone. This is the first Snapdragon 8 Elite Gen 5 device we’ve tested, and it definitely puts Qualcomm’s latest silicon to good use. This chip has eight Oryon CPU cores, with clock speeds as high as 4.6 GHz. It’s almost as fast as the Snapdragon X Elite laptop chips.

Even though OnePlus has some unnecessarily elaborate animations, you never feel like you’re waiting on the phone to catch up. Every tap is detected accurately, and app launches are near instantaneous. The Gen 5 is faster than last year’s flagship processor, but don’t expect the OnePlus 15 to run at full speed indefinitely.

In our testing, the phone pulls back 10 to 20 percent under thermal load to manage heat. The OP15 has a new, larger vapor chamber that seems to keep the chipset sufficiently cool during extended gaming sessions. That heat has to go somewhere, though. The phone gets noticeably toasty in the hand during sustained use.

The OnePlus 15 behaves a bit differently in benchmark apps, maintaining high speeds longer to attain higher scores. This tuning reveals just how much heat an unrestrained Snapdragon 8 Elite Gen 5 can produce. After running flat-out for 20 minutes, the phone loses only a little additional speed, but the case gets extremely hot. Parts of the phone reached a scorching 130° Fahrenheit, which is hot enough to burn your skin after about 30 seconds. During a few stress tests, the phone completely closed all apps and disabled functions like the LED flash to manage heat.

The unthrottled benchmarks do set a new record. The OnePlus 15 tops almost every test; Apple’s iPhone 17 Pro eked out the only win, in Geekbench single-core. Snapdragon chips have always trailed Apple’s in single-core throughput, but the Gen 5 wins on multicore performance.

The Snapdragon chip uses a lot of power when it’s cranked up, but the OnePlus 15 has battery to spare. The 7,300 mAh silicon-carbide cell is enormous compared to the competition, which hovers around 5,000 mAh in other big phones. This is one of the very few smartphones that you don’t have to charge every night. In fact, making it through two or three days with this device is totally doable. And that’s without toggling on the phone’s battery-saving mode.

OnePlus also shames the likes of Google and Samsung when it comes to charging speed. The phone comes with a charger in the box—a rarity these days. This adapter can charge the phone at an impressive 80 W, and OnePlus will offer a 100 W charger on its site. With the stock charger, you can completely charge the massive battery in a little over 30 minutes. It almost doesn’t matter that the battery is so big because a few minutes plugged in gives you more than enough to head out the door. Just plug the phone in while you look for your keys, and you’re good to go. The phone also supports 50 W wireless charging with a OnePlus dock, but that’s obviously not included.

OnePlus 15 side

There is somehow a 7,300 mAh battery in there.

Credit: Ryan Whitwam


Unfortunately, only chargers and cables compatible with Oppo’s SuperVOOC system will reach these speeds. It’s nice to see one in the box because spares will cost you the better part of $100. Even if you aren’t using an official OnePlus charger/cable, a standard USB-PD plug can still hit 36 W, which is faster than phones like the Pixel 10 Pro and Galaxy S25 and about the same as the iPhone 17.

Cameras

OnePlus partnered with imaging powerhouse Hasselblad on its last several flagship phones, but that pairing is over with the launch of the OnePlus 15. The phone maker is now going it alone, swapping Hasselblad’s processing for a new imaging engine called DetailMax. The hardware is changing, too.

OnePlus 15 cameras

The OnePlus 15 camera setup is a slight downgrade from the 13.

Credit: Ryan Whitwam


The OnePlus 15 has new camera sensors, though the megapixel counts are unchanged. There’s a 50 MP primary wide-angle, a 50 MP telephoto with 3.5x effective zoom, and a 50 MP ultrawide with support for macro shots. There’s a 32 MP selfie camera peeking through the OLED as well.

Each of these sensors is slightly smaller than its counterpart in last year’s OnePlus cameras. That means they can’t collect as much light, but good processing can make up for minor physical changes like that. The processing, unfortunately, is where the OnePlus 15 stumbles.

Taking photos with the OnePlus 15 can be frustrating because the image processing misses as much as it hits. The colors, temperature, dynamic range, and detail are not very consistent. Images taken in similar conditions of similar objects—even those taken one after the other—can have dramatically different results. Color balance is also variable across the three rear sensors.

Bright outdoor light, fast movement. Credit: Ryan Whitwam

To be fair, some of the photos we’ve taken on the OnePlus 15 are great. These are usually outdoor shots, where the phone has plenty of light. It’s not bad at capturing motion in these instances, and photos are sharp as long as the frame isn’t too busy. However, DetailMax has a tendency to oversharpen, which obliterates fine details and makes images look the opposite of detailed. This is much more obvious in dim lighting, where longer exposures lead to blurry subjects more often than not.

Adding any digital zoom to your framing is generally a bad idea on the OnePlus 15. The processing just doesn’t have the capacity to clean up those images like a Google Pixel or even a Samsung Galaxy. The telephoto lens is good for getting closer to your subject, but the narrow aperture and smaller pixels make it tough to rely on indoors. Again, outdoor images are substantially better.

Shooting landscapes with the ultrawide is a good experience. The oversharpening isn’t as apparent in bright outdoor conditions, and there’s very little edge distortion, which makes sense given that the field of view is narrower than on the OnePlus 13’s ultrawide camera. Macro shots are handled by this same lens, and the results are better than you’ll get with a dedicated macro lens on a phone. That said, blurriness and funky processing creep in often enough that backing up and shooting a normal photo can serve you better, particularly if there isn’t much light.

A tale of two flagships

The OnePlus 15 is not the massive leap you might expect from skipping a number. The formula is largely unchanged from its last few devices—it’s blazing fast and well-built, but everything else is something of an afterthought.

You probably won’t be over the moon for the OnePlus 15, but it’s a good, pragmatic choice. It runs for days on a charge, you barely have to touch it with a power cable to get a full day’s use, and it manages that incredible battery life while being fast as hell. Honestly, it’s a little too fast in benchmarks, with the frame reaching borderline dangerous temperatures. The phone might get a bit warm in games, but it will maintain frame rates better than anything else on the market, up to 165 fps in titles that support its ultra-fast screen.

OnePlus 13 and 15

The OnePlus 13 (left) looks quite different from the 15 (right).

Credit: Ryan Whitwam


However, the software can be frustrating at times, with inconsistent interfaces and unnecessarily arduous usage flows. OnePlus is also too dependent on sending your data to the cloud for AI analysis. You can steer clear of that by simply not using OnePlus’ AI features, and luckily, they’re easy to ignore.

It’s been less than a year since the OnePlus 13 arrived, but the company really wanted to be the first to get the new Snapdragon in everyone’s hands. So here we are with a second 2025 OnePlus flagship. If you have the OnePlus 13, there’s no reason to upgrade. That phone is arguably better, even though it doesn’t have the latest Snapdragon chip or an enormous battery. It still lasts more than long enough on a charge, and the cameras perform a bit better. You also can’t argue with that alert slider.

The Good

  • Incredible battery life and charging speed
  • Great display
  • Durable design, cool finish on Sand Storm colorway
  • Blazing fast

The Bad

  • Lots of AI features that run in the cloud
  • Cameras a step down from OnePlus 13
  • OxygenOS is getting cluttered
  • RIP the alert slider
  • Blazing hot

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

OnePlus 15 review: The end of range anxiety Read More »

microsoft-tries-to-head-off-the-“novel-security-risks”-of-windows-11-ai-agents

Microsoft tries to head off the “novel security risks” of Windows 11 AI agents

Microsoft has been adding AI features to Windows 11 for years, but things have recently entered a new phase, with both generative and so-called “agentic” AI features working their way deeper into the bedrock of the operating system. A new build of Windows 11 released to Windows Insider Program testers yesterday includes a new “experimental agentic features” toggle in the Settings to support a feature called Copilot Actions, and Microsoft has published a detailed support article explaining more about just how those “experimental agentic features” will work.

If you’re not familiar, “agentic” is a buzzword that Microsoft has used repeatedly to describe its future ambitions for Windows 11—in plainer language, these agents are meant to accomplish assigned tasks in the background, allowing the user’s attention to be turned elsewhere. Microsoft says it wants agents to be capable of “everyday tasks like organizing files, scheduling meetings, or sending emails,” and that Copilot Actions should give you “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

But like other kinds of AI, these agents can be prone to error and confabulations and will often proceed as if they know what they’re doing even when they don’t. They also present, in Microsoft’s own words, “novel security risks,” mostly related to what can happen if an attacker is able to give instructions to one of these agents. As a result, Microsoft’s implementation walks a tightrope between giving these agents access to your files and cordoning them off from the rest of the system.

Possible risks and attempted fixes

For now, these “experimental agentic features” are optional, only available in early test builds of Windows 11, and off by default. Credit: Microsoft

For example, AI agents running on a PC will be given their own user accounts separate from your personal account, ensuring that they don’t have permission to change everything on the system and giving them their own “desktop” to work with that won’t interfere with what you’re working with on your screen. Users need to approve requests for their data, and “all actions of an agent are observable and distinguishable from those taken by a user.” Microsoft also says agents need to be able to produce logs of their activities and “should provide a means to supervise their activities,” including showing users a list of actions they’ll take to accomplish a multi-step task.

Microsoft tries to head off the “novel security risks” of Windows 11 AI agents Read More »

google-unveils-gemini-3-ai-model-and-ai-first-ide-called-antigravity

Google unveils Gemini 3 AI model and AI-first IDE called Antigravity


Google’s flagship AI model is getting its second major upgrade this year.

Google has kicked its Gemini rollout into high gear over the past year, releasing the much-improved Gemini 2.5 family and cramming various flavors of the model into Search, Gmail, and just about everything else the company makes.

Now, Google’s increasingly unavoidable AI is getting an upgrade. Gemini 3 Pro is available in a limited form today, featuring more immersive, visual outputs and fewer lies, Google says. The company also says Gemini 3 sets a new high-water mark for vibe coding, and Google is announcing a new AI-first integrated development environment (IDE) called Antigravity, which is also available today.

The first member of the Gemini 3 family

Google says the release of Gemini 3 is yet another step toward artificial general intelligence (AGI). The new version of Google’s flagship AI model has expanded simulated reasoning abilities and shows improved understanding of text, images, and video. So far, testers like it—Google’s latest LLM is once again atop the LMArena leaderboard with an ELO score of 1,501, besting Gemini 2.5 Pro by 50 points.

Gemini 3 LMArena

Credit: Google

Factuality has been a problem for all gen AI models, but Google says Gemini 3 is a big step in the right direction, and there are myriad benchmarks to tell the story. In the 1,000-question SimpleQA Verified test, Gemini 3 scored a record 72.1 percent. Yes, that means the state-of-the-art LLM still screws up almost 30 percent of general knowledge questions, but Google says this still shows substantial progress. On the much more difficult Humanity’s Last Exam, which tests PhD-level knowledge and reasoning, Gemini 3 set another record, scoring 37.5 percent without tool use.

Math and coding are also a focus of Gemini 3. The model set new records in MathArena Apex (23.4 percent) and WebDev Arena (1487 ELO). In the SWE-bench Verified, which tests a model’s ability to generate code, Gemini 3 hit an impressive 76.2 percent.

Beyond those respectable but modest benchmark improvements, Gemini 3 also won’t make you cringe as much. Google says it has tamped down on sycophancy, a common problem in all these overly polite LLMs. Outputs from Gemini 3 Pro are reportedly more concise, with less of what you want to hear and more of what you need to hear.

You can also expect Gemini 3 Pro to produce noticeably richer outputs. Google claims Gemini’s expanded reasoning capabilities keep it on task more effectively, allowing it to take action on your behalf. For example, Gemini 3 can triage and take action on your emails, creating to-do lists, summaries, recommended replies, and handy buttons to trigger suggested actions. This differs from the current Gemini models, which would only create a text-based to-do list with similar prompts.

The model also has what Google calls a “generative interface,” which comes in the form of two experimental output modes called visual layout and dynamic view. The former is a magazine-style interface that includes lots of images in a scrollable UI. Dynamic view leverages Gemini’s coding abilities to create custom interfaces—for example, a web app that explores the life and work of Vincent Van Gogh.

There will also be a Deep Think mode for Gemini 3, but that’s not ready for prime time yet. Google says it’s being tested by a small group for later release, but you should expect big things. Deep Think mode manages 41 percent in Humanity’s Last Exam without tools. Believe it or not, that’s an impressive score.

Coding with vibes

Google has offered several ways of generating and modifying code with Gemini models, but the launch of Gemini 3 adds a new one: Google Antigravity. This is Google’s new agentic development platform—it’s essentially an IDE designed around agentic AI, and it’s available in preview today.

With Antigravity, Google promises that you (the human) can get more work done by letting intelligent agents do the legwork. Google says you should think of Antigravity as a “mission control” for creating and monitoring multiple development agents. The agents in Antigravity can operate autonomously across the editor, terminal, and browser to create and modify projects, but everything they do is relayed to the user in the form of “Artifacts.” These sub-tasks are designed to be easily verifiable so you can keep on top of what an agent is doing. Gemini will be at the core of the Antigravity experience, but it’s not just Google’s bot: Antigravity also supports Claude Sonnet 4.5 and GPT-OSS agents.

Of course, developers can still plug into the Gemini API for coding tasks. With Gemini 3, Google is adding a client-side bash tool, which lets the AI generate shell commands in its workflow. The model can access file systems and automate operations, and a server-side bash tool will help generate code in multiple languages. This feature is starting in early access, though.

AI Studio is designed to be a faster way to build something with Gemini 3. Google says Gemini 3 Pro’s strong instruction following makes it the best vibe coding model yet, allowing non-programmers to create more complex projects.

A big experiment

Google will eventually have a whole family of Gemini 3 models, but there’s just the one for now. Gemini 3 Pro is rolling out in the Gemini app, AI Studio, Vertex AI, and the API starting today as an experiment. If you want to tinker with the new model in Google’s Antigravity IDE, that’s also available for testing today on Windows, Mac, and Linux.

Gemini 3 will also launch in the Google search experience on day one. You’ll have the option to enable Gemini 3 Pro in AI Mode, where Google says it will provide more useful information about a query. The generative interface capabilities from the Gemini app will be available here as well, allowing Gemini to create tools and simulations when appropriate to answer the user’s question. Google says these generative interfaces are strongly preferred in its user testing. This feature is available today, but only for AI Pro and Ultra subscribers.

Because the Pro model is the only Gemini 3 variant available in the preview, AI Overviews isn’t getting an immediate upgrade. That will come, but for now, Overviews will only reach out to Gemini 3 Pro for especially difficult search queries—basically the kind of thing Google thinks you should have used AI Mode to do in the first place.

There’s no official timeline for releasing more Gemini 3 models or graduating the Pro variant to general availability. However, given the wide rollout of the experimental release, it probably won’t be long.


Google unveils Gemini 3 AI model and AI-first IDE called Antigravity Read More »

with-a-new-company,-jeff-bezos-will-become-a-ceo-again

With a new company, Jeff Bezos will become a CEO again

Jeff Bezos is one of the world’s richest and most famous tech CEOs, but he hasn’t actually been a CEO of anything since 2021. That’s now changing as he takes on the role of co-CEO of a new AI company, according to a New York Times report citing three people familiar with the company.

Grandiosely named Project Prometheus (and not to be confused with the NASA project of the same name), the company will focus on using AI to pursue breakthroughs in research, engineering, manufacturing, and other fields that are dubbed part of “the physical economy”—in contrast to the software applications that are likely the first thing most people in the general public think of when they hear “AI.”

Bezos’ co-CEO will be Vik Bajaj, a chemist and physicist who previously led life sciences work at Google X, an Alphabet-backed research group devoted to speculative projects with the potential to spawn new product categories. (For example, it developed technologies that would later underpin Google’s Waymo service.) Bajaj also worked at Verily, another Alphabet-backed research group focused on life sciences, and Foresite Labs, an incubator for new AI companies.

With a new company, Jeff Bezos will become a CEO again Read More »

report-claims-that-apple-has-yet-again-put-the-mac-pro-“on-the-back-burner”

Report claims that Apple has yet again put the Mac Pro “on the back burner”

Do we still need a Mac Pro, though?

Regardless of what Apple does with the Mac Pro, the desktop makes less sense than ever in the Apple Silicon era. Part of the appeal of the early 2010s and the 2019 Mac Pro towers was their internal expandability, particularly with respect to storage, graphics cards, and RAM. But while the Apple Silicon Mac Pro does include six internal PCI Express slots, it supports neither RAM upgrades nor third-party GPUs from Nvidia, AMD, or Intel. Thunderbolt 5’s 120 Gbps transfer speeds are also more than fast enough to support high-speed external storage devices.

That leaves even the most powerful of power users with few practical reasons to prefer a $7,000 Mac Pro tower to a $4,000 Mac Studio. And that would be true even if both desktops used the same chip—currently, the M3 Ultra Studio comes with more and newer CPU cores, newer GPU cores, and 32GB more RAM for that price, making the comparison even more lopsided.

Mac Pro aside, the Mac should have a pretty active 2026. Every laptop other than the entry-level 14-inch MacBook Pro should get an Apple M5 upgrade, with Pro and Max chips coming for the higher-end Pros. Those chips, plus the M5 Ultra, would give Apple all the ingredients it would need to refresh the iMac, Mac mini, and Mac Studio lineups as well.

Insistent rumors also indicate that Apple will be introducing a new, lower-cost MacBook model with an iPhone-class chip inside, a device that seems made to replace the 2020 M1 MacBook Air that Apple has continued to sell via Walmart for between $600 and $650. It remains to be seen whether this new MacBook would remain a Walmart exclusive or if Apple also plans to offer the laptop through other retailers and its own store.

Report claims that Apple has yet again put the Mac Pro “on the back burner” Read More »