Google

FTC claims Gmail filtering Republican emails threatens “American freedoms”

Ferguson said that “similar concerns have resulted in ongoing litigation against Google in other settings” but did not mention that a judge rejected the Republican claims.

“Hearing from candidates and receiving information and messages from political parties is key to exercising fundamental American freedoms and our First Amendment rights,” Ferguson’s letter said. “Moreover, consumers expect that they will have the opportunity to hear from their own chosen candidates or political party. A consumer’s right to hear from candidates or parties, including solicitations for donations, is not diminished because that consumer’s political preferences may run counter to your company’s or your employees’ political preferences.”

Google: Gmail users marked RNC emails as spam

The RNC’s appeal of its court loss is still pending, with the case proceeding toward oral arguments. Google told the appeals court in April that “the Complaint’s own allegations make it obvious that Gmail presented a portion of RNC emails as spam because they appeared to be spam…. The most obvious reason for RNC emails being flagged as spam is that Gmail users were too frequently marking them as such.”

Google also said that “the RNC’s own allegations confirm that Google was helping the RNC, not scheming against it… The RNC acknowledges, for example, that Google worked with the RNC ‘[f]or nearly a year.’ Those efforts even included Google employees traveling to the RNC’s office to ‘give a training’ on ‘Email Best Practices.’ Less than two months after that training, the last alleged instance of the inboxing issue occurred.”

While the RNC “belittles those efforts as ‘excuses’ to cover Google’s tracks… the district court rightly found that judicial experience and common sense counsel otherwise,” Google said. The Google brief quoted from the district judge’s ruling, which said, “the fact that Google engaged with the RNC for nearly a year and made suggestions that improved email performance is inconsistent with a lack of good faith.”

Google Pixel 10 series review: Don’t call it an Android


Google’s new Pixel phones are better, but only a little.

Left to right: Pixel 10, Pixel 10 Pro, Pixel 10 Pro XL. Credit: Ryan Whitwam

After 10 generations of Pixels, Google’s phones have never been more like the iPhone, and we mean that both as a compliment and a gentle criticism. For people who miss the days of low-cost, tinkering-friendly Nexus phones, Google’s vision is moving ever further away from that, but the attention to detail and overall polish of the Pixel experience continue with the Pixel 10, 10 Pro, and 10 Pro XL. These are objectively good phones with possibly the best cameras on the market, and they’re also a little more powerful, but the aesthetics are seemingly locked down.

Google made a big design change last year with the Pixel 9 series, and it’s not reinventing the wheel in 2025. The Pixel 10 series keeps the same formula, making limited refinements, not all of which will be well-received. Google pulled out all the stops and added a ton of new AI features you may not care about, and it killed the SIM card slot. Just because Apple does something doesn’t mean Google has to, but here we are. If you’re still clinging to your physical SIM card or just like your Pixel 9, there’s no reason to rush out to upgrade.

A great but not so daring design

If you liked the Pixel 9’s design, you’ll like the Pixel 10, because it’s a very slightly better version of the same hardware. All three phones are made from aluminum and Gorilla Glass Victus 2 (no titanium option here). The base model has a matte finish on the metal frame with a glossy rear panel, and it’s the opposite on the Pro phones. This makes the more expensive phones a little less secure in the hand—those polished edges are slippery. The buttons on the Pixel 9 often felt a bit loose, but the buttons on all our Pixel 10 units are tight and clicky.

Left to right: Pixel 10 Pro XL, Pixel 10 Pro, Pixel 10. Credit: Ryan Whitwam

Specs at a glance: Google Pixel 10 series
Pixel 10 ($799)
  • SoC: Google Tensor G5
  • Memory: 12GB
  • Storage: 128GB / 256GB
  • Display: 6.3-inch 1080×2424 OLED, 60–120 Hz, 3,000 nits
  • Cameras: 48 MP wide with Macro Focus, f/1.7, 1/2-inch sensor; 13 MP ultrawide, f/2.2, 1/3.1-inch sensor; 10.8 MP 5x telephoto, f/3.1, 1/3.2-inch sensor; 10.5 MP selfie, f/2.2
  • Software: Android 16
  • Battery: 4,970 mAh, up to 30 W wired charging, 15 W wireless charging (Pixelsnap)
  • Connectivity: Wi-Fi 6e, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, USB-C 3.2
  • Measurements: 152.8 × 72.0 × 8.6 mm (H×W×D), 204 g
  • Colors: Indigo, Frost, Lemongrass, Obsidian

Pixel 10 Pro ($999)
  • SoC: Google Tensor G5
  • Memory: 16GB
  • Storage: 128GB / 256GB / 512GB
  • Display: 6.3-inch 1280×2856 LTPO OLED, 1–120 Hz, 3,300 nits
  • Cameras: 50 MP wide with Macro Focus, f/1.68, 1/1.3-inch sensor; 48 MP ultrawide, f/1.7, 1/2.55-inch sensor; 48 MP 5x telephoto, f/2.8, 1/2.55-inch sensor; 42 MP selfie, f/2.2
  • Software: Android 16
  • Battery: 4,870 mAh, up to 30 W wired charging, 15 W wireless charging (Pixelsnap)
  • Connectivity: Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 3.2
  • Measurements: 152.8 × 72.0 × 8.6 mm (H×W×D), 207 g
  • Colors: Moonstone, Jade, Porcelain, Obsidian

Pixel 10 Pro XL ($1,199)
  • SoC: Google Tensor G5
  • Memory: 16GB
  • Storage: 128GB / 256GB / 512GB / 1TB
  • Display: 6.8-inch 1344×2992 LTPO OLED, 1–120 Hz, 3,300 nits
  • Cameras: 50 MP wide with Macro Focus, f/1.68, 1/1.3-inch sensor; 48 MP ultrawide, f/1.7, 1/2.55-inch sensor; 48 MP 5x telephoto, f/2.8, 1/2.55-inch sensor; 42 MP selfie, f/2.2
  • Software: Android 16
  • Battery: 5,200 mAh, up to 45 W wired charging, 25 W wireless charging (Pixelsnap)
  • Connectivity: Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 3.2
  • Measurements: 162.8 × 76.6 × 8.5 mm (H×W×D), 232 g
  • Colors: Moonstone, Jade, Porcelain, Obsidian

Pixel 10 Pro Fold ($1,799)
  • SoC: Google Tensor G5
  • Memory: 16GB
  • Storage: 256GB / 512GB / 1TB
  • Display: External: 6.4-inch 1080×2364 OLED, 60–120 Hz, 2,000 nits; Internal: 8-inch 2076×2152 LTPO OLED, 1–120 Hz, 3,000 nits
  • Cameras: 48 MP wide, f/1.7, 1/2-inch sensor; 10.5 MP ultrawide with Macro Focus, f/2.2, 1/3.4-inch sensor; 10.8 MP 5x telephoto, f/3.1, 1/3.2-inch sensor; 10.5 MP selfie, f/2.2 (outer and inner)
  • Software: Android 16
  • Battery: 5,015 mAh, up to 30 W wired charging, 15 W wireless charging (Pixelsnap)
  • Connectivity: Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 3.2
  • Measurements: Folded: 154.9 × 76.2 × 10.1 mm; Unfolded: 154.9 × 149.8 × 5.1 mm; 258 g
  • Colors: Moonstone, Jade
The rounded corners and smooth transitions between metal and glass make the phones comfortable to hold, even for the mammoth 6.8-inch Pixel 10 Pro XL. This phone is pretty hefty at 232 g, though—that’s even heavier than Samsung’s Galaxy Z Fold 7. I’m pleased that Google kept the smaller premium phone in 2025, offering most of the capabilities and camera specs of the XL in a cozier form factor. It’s not as heavy, and the screen is a great size for folks with average or smaller hands.

The Pixel 10 Pro is a great size. Credit: Ryan Whitwam

On the back, you’ll still see the monolithic camera bar near the top. I like this design aesthetically, but it’s also functional. When you set a Pixel 10 down on a table or desk, it remains stable and easy to use, with no annoying wobble. While this element looks unchanged at a glance, it actually takes up a little more surface area on the back of the phone. Yes, that means none of your Pixel 9 cases will fit on the 10.

The Pixel 10’s body has fewer interruptions compared to the previous model, too. Google has done away with the unsightly mmWave window on the top of the phone, and the bottom now has two symmetrical grilles for the mic and speaker. What you won’t see is a SIM card slot (at least in the US). Like Apple, Google has gone all-in with eSIM, so if you’ve been clinging to that tiny scrap of plastic, you’ll have to give it up to use a Pixel 10.

The Pixel 10 Pro XL has polished sides that make it a bit slippery. Credit: Ryan Whitwam

The good news is that eSIMs are less frustrating than they used to be. All recent Android devices have the ability to transfer most eSIMs directly without dealing with the carrier. We’ve moved a T-Mobile eSIM between Pixels and Samsung devices a few times without issue, but you will need Wi-Fi connectivity, which is an annoying caveat.

Display sizes haven’t changed this year, but they all look impeccable. The base model and smaller Pro phone sport 6.3-inch OLEDs, while the Pro XL’s screen measures 6.8 inches. The Pixel 10 has the lowest resolution at 1080p, and its refresh rate only ranges from 60 to 120 Hz. The 10 Pro and 10 Pro XL get higher-resolution screens with LTPO technology that allows them to drop as low as 1 Hz to save power. The Pro phones also get slightly brighter, but all three have a peak brightness of 3,000 nits or higher, which is plenty to make them readable outdoors.

The addition of Qi2 makes numerous MagSafe accessories compatible with the new Pixels. Credit: Ryan Whitwam

The biggest design change this year isn’t visible on the outside. The Pixel 10 phones are among the first Android devices with full support for the Qi2 charging standard. Note, this isn’t just “Qi2 Ready” like the Galaxy S25. Google’s phones have the Apple-style magnets inside, allowing you to use many of the chargers, mounts, wallets, and other Apple-specific accessories that have appeared over the past few years. Google also has its own “Pixelsnap” accessories, like chargers and rings. And yes, the official Pixel 10 cases are compatible with magnetic attachments. Adding something Apple has had for years isn’t exactly innovative, but Qi2 is genuinely useful, and you won’t get it from other Android phones.

Expressive software

Google announced its Material 3 Expressive overhaul earlier this year, but it wasn’t included in the initial release of Android 16. The Pixel 10 line will ship with this update, marking the biggest change to Google’s Android skin in years. The Pixel line has now moved quite far from the “stock Android” aesthetic that used to be the company’s hallmark. The Pixel build of Android is now just as customized as Samsung’s One UI or OnePlus’ OxygenOS, if not more so.

Material 3 Expressive adds more customizable quick settings. Credit: Ryan Whitwam

The good news is that Material 3 looks very nice. It’s more colorful and playful but not overbearing. Some of the app concepts shown off during the announcement were a bit much, but the production app redesigns Google has rolled out since then aren’t as heavy-handed. The Material colors are used more liberally throughout the UI, and certain UI elements will be larger and more friendly. I’ll take Material 3 Expressive over Apple’s Liquid Glass redesign any day.

I’ve been using a pre-production version of the new software, but even for early Pixel software, there have been more minor UI hitches than expected. Several times, I’ve seen status bar icons disappear, apps render incorrectly, and image edits come out garbled. There are no showstopping bugs, but the new software could do with a little cleaning up.

The OS changes are more than skin-deep—Google has loaded the Pixel 10 series with a ton of new AI gimmicks aimed at changing the experience (and justifying the company’s enormous AI spending). With the more powerful Tensor G5 to run larger Gemini Nano on-device models, Google has woven AI into even more parts of the OS. Google’s efforts aren’t as disruptive or invasive as what we’ve seen from other Android phone makers, but that doesn’t mean the additions are useful.

It would be fair to say Magic Cue is Google’s flagship AI addition this year. The pitch sounds compelling—use local AI to crunch your personal data into contextual suggestions in Maps, Messages, phone calls, and more. For example, it can prompt you to insert content into a text message based on other messages or emails.

Despite having a mountain of personal data in Gmail, Keep, and other Google apps, I’ve seen precious few hints of Magic Cue. It once suggested a search in Google Maps, and on another occasion, it offered up an address in Messages. If you don’t use Google’s default apps, you might not see Magic Cue at all. More than ever, getting the most out of a Pixel means using Google’s first-party apps, just like on that other major smartphone platform.

Google is searching for more ways to leverage generative AI. Credit: Ryan Whitwam

Google says it can take about a day after you set up the Pixel 10 for Magic Cue to finish ingesting your personal data—it takes that long because it’s all happening on your device instead of in the cloud. I appreciate Google’s commitment to privacy in mobile AI because it does have access to a huge amount of user data. But it seems like all that data should be doing more. And I hope that, in time, it does. An AI assistant that anticipates your needs could actually be useful, but I’m not yet convinced that Magic Cue is it.

It’s a similar story with Daily Hub, an ever-evolving digest of your day akin to Samsung’s Now Brief. You’ll find Daily Hub at the top of the Google Discover feed, where it’s supposed to keep you abreast of calendar appointments, important emails, and so on. This should be useful, but I rarely found it worth opening; it offered little more than YouTube and AI search suggestions.

Meanwhile, Pixel Journal works as advertised—it’s just not something most people will want to use. It’s similar to Nothing’s Essential Space: a secure place to dump all your thoughts and ideas throughout the day, which Gemini Nano can then turn into insights and emoji-based mood tracking. Cool? Maybe this will inspire some people to record more of their thoughts and ideas, but it’s not a game-changing AI feature.

If there’s a standout AI feature on the Pixel 10, it’s Voice Translate. It uses Gemini Nano to run real-time translation between English and a small collection of other languages, like Spanish, French, German, and Hindi. The translated voice sounds like the speaker (mostly), and the delay is tolerable. Beyond this, though, many of Google’s new Pixel AI features feel like an outgrowth of the company’s mandate to stuff AI into everything possible. Pixel Screenshots might still be the most useful application of generative AI on the Pixels.

As with all recent Pixel phones, Google guarantees seven years of OS and security updates. That matches Samsung and far outpaces OEMs like OnePlus and Motorola. And unlike Samsung’s, Google’s updates arrive without delay: you’ll get new versions of Android first, and the company’s Pixel Drops add new features every few months.

Modest performance upgrade

The Pixel 10 brings Google’s long-awaited Tensor G5 upgrade. This is the first custom Google mobile processor manufactured by TSMC rather than Samsung, using the latest 3 nm process node. The core setup is a bit different, with a 3.78 GHz Cortex-X4 at the helm, backed by five high-power Cortex-A725 cores at 3.05 GHz and two low-power Cortex-A520 cores at 2.25 GHz. Google also says the NPU has gotten much more powerful, allowing it to run the Gemini models behind its raft of new AI features.

The Pixel 10 series keeps a familiar design. Credit: Ryan Whitwam

If you were hoping to see Google catch up to Qualcomm with the G5, you’ll be disappointed. In general, Google doesn’t seem concerned about benchmark numbers. And in fairness, the Pixels perform very well in daily use. These phones feel fast, and the animations are perfectly smooth. While phones like the Galaxy S25 are faster on paper, we’ve seen less lag and fewer slowdowns on Google’s phones.

That said, the Tensor G5 does perform better in our testing compared to the G4. The CPU speed is up about 30 percent, right in line with Google’s claims. The GPU is faster by 20–30 percent in high-performance scenarios, which is a healthy increase for one year. However, it’s running way behind the Snapdragon 8 Elite we see in other flagship Android phones.

You might notice the slower Pixel GPU if you’re playing Genshin Impact or Call of Duty Mobile at a high level, but it will be more than fast enough for most of the mobile games people play. That performance gap will narrow during prolonged gaming, too. Qualcomm’s flagship chip gets very toasty in phones like the Galaxy S25, slowing down by almost half. The Pixel 10, on the other hand, loses less than 20 percent of its speed to thermal throttling.

Say what you will about generative AI—Google’s obsession with adding more on-device intelligence spurred it to boost the amount of RAM in this year’s Pro phones. You now get 16GB in the 10 Pro and 10 Pro XL. The base model continues to muddle along with 12GB. This could make the Pro phones more future-proof as additional features are added in Pixel Drop updates. However, we have yet to notice the Pro phones holding onto apps in memory longer than the base model.

The Pixel 10 series gets small battery capacity increases across the board, but it’s probably not enough that you’ll notice. The XL, for instance, has gone from 5,060 mAh to 5,200 mAh. It feels like the increases really just offset the increased background AI processing, because the longevity is unchanged from last year. You’ll have no trouble making it through a day with any of the Pixel phones, even if you clock a lot of screen time.

With lighter usage, you can almost make it through two days. You’ll probably want to plug in every night, though. Google has an upgraded always-on display mode on the Pixel 10 phones that shows your background in full color but greatly dimmed. We found this was not worth the battery life hit, but it’s there if you want to enable it.

Charging speed has gotten slightly better this time around, but like the processor, it’s not going to top the charts. The Pixel 10 and 10 Pro can hit a maximum of 30 W with a USB-C PPS-enabled charger, getting a 50 percent charge in about 30 minutes. The Pixel 10 Pro XL’s wired charging can reach around 45 W for a 70 percent charge in half an hour. This would be sluggish compared to the competition in most Asian markets, but it’s average to moderately fast stateside. Google doesn’t have much reason to do better here, but we wish it would try.

The Pixel 10 Pro XL (left) looks almost identical to the Pixel 9 Pro XL (right). Credit: Ryan Whitwam

Wireless charging is also a bit faster, and with Qi2 support, the charging experience is quite different. You can get 15 W of wireless power with a Qi2 charger on the smaller phones, and the Pixel 10 Pro XL can hit 25 W with a Qi2.2 adapter. There are plenty of Qi2 magnetic chargers out there that can handle 15 W, but 25 W support is still much rarer.

Post-truth cameras

Google has made some changes to its camera setup this year, including the addition of a third camera to the base Pixel 10. However, that also comes with a downgrade for the other two cameras. The Pixel 10 sports a 48 MP primary, a 13 MP ultrawide, and a 10.8 MP 5x telephoto—a setup most similar to Google’s foldable phone. The 10 Pro and 10 Pro XL have a slightly better 50 MP primary, a 48 MP ultrawide, and a 48 MP 5x telephoto. The Pixel 10 is also limited to 20x upscaled zoom, but the Pro phones can go all the way to 100x.

The Pixel 10 gets a third camera, but the setup isn’t as good as on the Pro phones. Credit: Ryan Whitwam

The latest Pixel phones continue Google’s tradition of excellent mobile photography, which should come as no surprise. And there’s an even greater focus on AI, which should also come as no surprise. But don’t be too quick to judge—Google’s use of AI technologies, even before the era of generative systems, has made its cameras among the best you can get.

The Pixel 10 series continues to be great for quick snapshots. You can pop open the camera and just start taking photos in almost any lighting to get solid results. Google’s HDR image processing brings out details in light and dark areas, produces accurate skin tones, and sharpens details without creating an “oil painting” effect when you zoom in. The phones are even pretty good at capturing motion, leaning toward quicker exposures while still achieving accurate colors and good brightness.

Pro phone samples: outdoor light. Credit: Ryan Whitwam

The Pixel 10 camera changes are a mixed bag. The addition of a telephoto lens for Google’s cheapest model is appreciated, allowing you to get closer to your subject and take greater advantage of Google’s digital zoom processing if 5x isn’t enough. The downgrade of the other sensors is noticeable if you’re pixel peeping, but it’s not a massive difference. Compared to the Pro phones, the base model doesn’t have quite as much dynamic range, and photos in challenging light will trend a bit dimmer. You’ll notice the difference most in Night Sight shots.

The camera experience has a healthy dose of Gemini Nano AI this year. The Pro models’ Pro Res Zoom runs a custom diffusion model to enhance images. This can make a big difference, but it can also be inaccurate, like any other generative system. Google opted to expand its use of C2PA labeling to mark such images as being AI-edited. So you might take a photo expecting to document reality, but the camera app will automatically label it as an AI image. This could have ramifications if you’re trying to document something important. The AI labeling will also appear on photos created using features like Add Me, which continues to be very useful for group shots.

Non-Pro samples: bright outdoor light. Credit: Ryan Whitwam

Google has also used AI to power its new Camera Coach feature. When activated in the camera viewfinder, it analyzes your current framing and makes suggestions. However, these usually amount to “subject goes in center, zoom in, take picture.” Frankly, you don’t need AI for this if you have ever given any thought to how to frame a photo—it’s pretty commonsense stuff.

The most Google-y a phone can get

Google is definitely taking its smartphone efforts more seriously these days, but the experience is also more laser-focused on Google’s products and services. The Pixel 10 is an Android phone, but you’d never know it from Google’s marketing. It barely talks about Android as a platform—the word only appears once on the product pages, and it’s in the FAQs at the bottom. Google prefers to wax philosophical about the Pixel experience, which has been refined over the course of 10 generations. For all intents and purposes, this is Google’s iPhone. For $799, the base-model Pixel is a good way to enjoy the best of Google in your pocket, but the $999 Pixel 10 Pro is our favorite of the bunch.

The Pixel 10 series retains the Pixel 9 shape. Credit: Ryan Whitwam

The design, while almost identical to last year’s, is refined and elegant, and the camera is hard to beat, even with more elaborate hardware from companies like Samsung. Google’s Material 3 Expressive UI overhaul is also shaping up to be a much-needed breath of fresh air, and Google’s approach to the software means you won’t have to remove a dozen sponsored apps and game demos after unboxing the phone. We appreciate Google’s long update commitment, too, but you’ll need at least one battery swap to have any hope of using this phone for the full support period. Google will also lower battery capacity dynamically as the cell ages, which may be frustrating, but at least there won’t be any sudden nasty surprises down the road.

These phones are more than fast enough with the new Tensor G5 chip, and if mobile AI is ever going to have a positive impact, you’ll see it first on a Pixel. While almost all Android phone buyers will be happy with the Pixel 10, there are a few caveats. If high-end mobile gaming is a big part of your smartphone usage, it might make sense to get a Samsung or OnePlus phone, with their faster Qualcomm chips. There’s also the forced migration to eSIM. If you have to swap SIMs frequently, you may want to wait a bit longer to migrate to eSIM.

The Pixel design is still slick. Credit: Ryan Whitwam

Buying a Pixel 10 is also something of a commitment to Google as it exists today: an integrated web of products and services. The new Pixel phones arrive at a time when Google’s status as an eternal tech behemoth is in doubt. Before long, the company could find itself split into pieces as a result of pending antitrust actions, so this kind of unified Google vision for a smartphone experience might not exist in the future. The software running on the Pixel 10 seven years hence may be very different—there could be a lot more AI or a lot less Google.

But today, the Pixel 10 is basically the perfect Google phone.

The good

  • Great design carried over from Pixel 9
  • Fantastic cameras, new optical zoom for base model
  • Material 3 redesign is a win
  • Long update support
  • Includes Qi2 with magnetic attachment
  • Runs AI on-device for better privacy

The bad

  • Tensor G5 doesn’t catch up to Qualcomm
  • Too many perfunctory AI features
  • Pixel 10’s primary and ultrawide sensors are a slight downgrade from Pixel 9
  • eSIM-only in the US

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

The personhood trap: How AI fakes human personality


Intelligence without agency

AI assistants don’t have fixed personalities—just patterns of output guided by humans.

Recently, a woman slowed down a line at the post office, waving her phone at the clerk. ChatGPT told her there’s a “price match promise” on the USPS website. No such promise exists. But she trusted what the AI “knows” more than the postal worker—as if she’d consulted an oracle rather than a statistical text generator accommodating her wishes.

This scene reveals a fundamental misunderstanding about AI chatbots. There is nothing inherently special, authoritative, or accurate about AI-generated outputs. Given a reasonably trained AI model, the accuracy of any large language model (LLM) response depends on how you guide the conversation. LLMs are prediction machines that will produce whatever pattern best fits your question, regardless of whether that output corresponds to reality.

Despite these issues, millions of daily users engage with AI chatbots as if they were talking to a consistent person—confiding secrets, seeking advice, and attributing fixed beliefs to what is actually a fluid idea-connection machine with no persistent self. This personhood illusion isn’t just philosophically troublesome—it can actively harm vulnerable individuals while obscuring a sense of accountability when a company’s chatbot “goes off the rails.”

LLMs are intelligence without agency—what we might call “vox sine persona”: voice without person. Not the voice of someone, not even the collective voice of many someones, but a voice emanating from no one at all.

A voice from nowhere

When you interact with ChatGPT, Claude, or Grok, you’re not talking to a consistent personality. There is no one “ChatGPT” entity to tell you why it failed—a point we elaborated on more fully in a previous article. You’re interacting with a system that generates plausible-sounding text based on patterns in training data, not a person with persistent self-awareness.

These models encode meaning as mathematical relationships—turning words into numbers that capture how concepts relate to each other. In the models’ internal representations, words and concepts exist as points in a vast mathematical space where “USPS” might be geometrically near “shipping,” while “price matching” sits closer to “retail” and “competition.” A model plots paths through this space, which is why it can so fluently connect USPS with price matching—not because such a policy exists but because the geometric path between these concepts is plausible in the vector landscape shaped by its training data.
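
To make that geometry concrete, here is a minimal Python sketch. The three-number vectors and the words attached to them are invented for illustration; real models learn embeddings with hundreds or thousands of dimensions, but the idea of measuring how close two concepts sit in the space is the same.

```python
# Toy illustration of "concepts as points in space." The vectors below are
# invented for this example; real models learn much larger embeddings.
import math

embeddings = {
    "usps":           [0.90, 0.80, 0.10],
    "shipping":       [0.85, 0.75, 0.20],
    "price matching": [0.10, 0.20, 0.95],
    "retail":         [0.15, 0.25, 0.90],
}

def cosine_similarity(a, b):
    """Higher values mean the two concepts sit closer together in the space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["usps"], embeddings["shipping"]))        # high: nearby concepts
print(cosine_similarity(embeddings["usps"], embeddings["price matching"]))  # lower: farther apart
```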

Knowledge emerges from understanding how ideas relate to each other. LLMs operate on these contextual relationships, linking concepts in potentially novel ways—what you might call a type of non-human “reasoning” through pattern recognition. Whether the resulting linkages the AI model outputs are useful depends on how you prompt it and whether you can recognize when the LLM has produced a valuable output.

Each chatbot response emerges fresh from the prompt you provide, shaped by training data and configuration. ChatGPT cannot “admit” anything or impartially analyze its own outputs, as a recent Wall Street Journal article suggested. ChatGPT also cannot “condone murder,” as The Atlantic recently wrote.

The user always steers the outputs. LLMs do “know” things, so to speak—the models can process the relationships between concepts. But the AI model’s neural network contains vast amounts of information, including many potentially contradictory ideas from cultures around the world. How you guide the relationships between those ideas through your prompts determines what emerges. So if LLMs can process information, make connections, and generate insights, why shouldn’t we consider that as having a form of self?

Unlike today’s LLMs, a human personality maintains continuity over time. When you return to a human friend after a year, you’re interacting with the same human friend, shaped by their experiences over time. This self-continuity is one of the things that underpins actual agency—and with it, the ability to form lasting commitments, maintain consistent values, and be held accountable. Our entire framework of responsibility assumes both persistence and personhood.

An LLM personality, by contrast, has no causal connection between sessions. The intellectual engine that generates a clever response in one session doesn’t exist to face consequences in the next. When ChatGPT says “I promise to help you,” it may understand, contextually, what a promise means, but the “I” making that promise literally ceases to exist the moment the response completes. Start a new conversation, and you’re not talking to someone who made you a promise—you’re starting a fresh instance of the intellectual engine with no connection to any previous commitments.

This isn’t a bug; it’s fundamental to how these systems currently work. Each response emerges from patterns in training data shaped by your current prompt, with no permanent thread connecting one instance to the next beyond an amended prompt—the entire conversation history plus any “memories” held by a separate software system—that gets fed into the next instance. There’s no identity to reform, no true memory to create accountability, no future self that could be deterred by consequences.

Every LLM response is a performance, which is sometimes very obvious when the LLM outputs statements like “I often do this while talking to my patients” or “Our role as humans is to be good people.” It’s not a human, and it doesn’t have patients.

Recent research confirms this lack of fixed identity. While a 2024 study claims LLMs exhibit “consistent personality,” the researchers’ own data actually undermines this—models rarely made identical choices across test scenarios, with their “personality highly rely[ing] on the situation.” A separate study found even more dramatic instability: LLM performance swung by up to 76 percentage points from subtle prompt formatting changes. What researchers measured as “personality” was simply default patterns emerging from training data—patterns that evaporate with any change in context.

This is not to dismiss the potential usefulness of AI models. Instead, we need to recognize that we have built an intellectual engine without a self, just like we built a mechanical engine without a horse. LLMs do seem to “understand” and “reason” to a degree within the limited scope of pattern-matching from a dataset, depending on how you define those terms. The error isn’t in recognizing that these simulated cognitive capabilities are real. The error is in assuming that thinking requires a thinker, that intelligence requires identity. We’ve created intellectual engines that have a form of reasoning power but no persistent self to take responsibility for it.

The mechanics of misdirection

As we hinted above, the “chat” experience with an AI model is a clever hack: Within every AI chatbot interaction, there is an input and an output. The input is the “prompt,” and the output is often called a “prediction” because it attempts to complete the prompt with the best possible continuation. In between, there’s a neural network (or a set of neural networks) with fixed weights doing a processing task. The conversational back and forth isn’t built into the model; it’s a scripting trick that makes next-word-prediction text generation feel like a persistent dialogue.

Each time you send a message to ChatGPT, Copilot, Grok, Claude, or Gemini, the system takes the entire conversation history—every message from both you and the bot—and feeds it back to the model as one long prompt, asking it to predict what comes next. The model intelligently reasons about what would logically continue the dialogue, but it doesn’t “remember” your previous messages as an agent with continuous existence would. Instead, it’s re-reading the entire transcript each time and generating a response.
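
Here is a rough sketch of that loop. The generate() function is a placeholder for whatever model call a real product makes, and the message format is ours, not any vendor's API; the point is that the only "state" lives in the replayed transcript.

```python
# Minimal sketch of the "chat" scripting trick: the model itself is stateless,
# so the application replays the whole transcript as one prompt on every turn.

def generate(prompt: str) -> str:
    """Stand-in for a stateless next-word-prediction model call."""
    return "..."  # the model only ever sees the text handed to it right here

transcript = []  # lives in the application, not in the model's neural network

def send(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    # The entire history is flattened into a single prompt each time.
    prompt = "\n".join(transcript) + "\nAssistant:"
    reply = generate(prompt)
    transcript.append(f"Assistant: {reply}")
    return reply

send("My dog is named Max.")
send("What is my dog's name?")  # the "memory" is just the replayed transcript above
```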

This design exploits a vulnerability we’ve known about for decades. The ELIZA effect—our tendency to read far more understanding and intention into a system than actually exists—dates back to the 1960s. Even when users knew that the primitive ELIZA chatbot was just matching patterns and reflecting their statements back as questions, they still confided intimate details and reported feeling understood.

To understand how the illusion of personality is constructed, we need to examine what parts of the input fed into the AI model shape it. AI researcher Eugene Vinitsky recently broke down the human decisions behind these systems into four key layers, which we can expand upon with several others below:

1. Pre-training: The foundation of “personality”

The first and most fundamental layer of personality is called pre-training. During an initial training process that actually creates the AI model’s neural network, the model absorbs statistical relationships from billions of examples of text, storing patterns about how words and ideas typically connect.

Research has found that personality measurements in LLM outputs are significantly influenced by training data. OpenAI’s GPT models are trained on sources like copies of websites, books, Wikipedia, and academic publications. The exact proportions matter enormously for what users later perceive as “personality traits” once the model is in use and making predictions.

2. Post-training: Sculpting the raw material

Reinforcement Learning from Human Feedback (RLHF) is an additional training process where the model learns to give responses that humans rate as good. Research from Anthropic in 2022 revealed how human raters’ preferences get encoded as what we might consider fundamental “personality traits.” When human raters consistently prefer responses that begin with “I understand your concern,” for example, the fine-tuning process reinforces connections in the neural network that make it more likely to produce those kinds of outputs in the future.

This process is what has created sycophantic AI models, such as variations of GPT-4o, over the past year. And interestingly, research has shown that the demographic makeup of human raters significantly influences model behavior. When raters skew toward specific demographics, models develop communication patterns that reflect those groups’ preferences.

3. System prompts: Invisible stage directions

Hidden instructions tucked into the prompt by the company running the AI chatbot, called “system prompts,” can completely transform a model’s apparent personality. These prompts get the conversation started and identify the role the LLM will play. They include statements like “You are a helpful AI assistant” and can share the current time and who the user is.

A comprehensive survey of prompt engineering demonstrated just how powerful these prompts are. Adding instructions like “You are a helpful assistant” versus “You are an expert researcher” changed accuracy on factual questions by up to 15 percent.
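
As a simple illustration of how little machinery this involves, the sketch below just prepends two different instruction strings to the same user message. The prompt wording is invented, and production system prompts are far longer, but the mechanism is the same.

```python
# Sketch of how a hidden system prompt frames every conversation. The strings
# here are illustrative; each provider writes (and conceals) its own.

def build_prompt(system_prompt: str, user_message: str) -> str:
    # The user never sees the first block, but the model treats it as part
    # of the same input text it is asked to continue.
    return f"{system_prompt}\n\nUser: {user_message}\nAssistant:"

question = "Summarize the latest research on gut bacteria."

helpful = build_prompt("You are a helpful AI assistant.", question)
expert = build_prompt("You are an expert researcher. Cite sources cautiously.", question)

# Same user message, two different "personalities." The difference is entirely
# in the invisible text prepended by the company running the chatbot.
print(helpful)
print(expert)
```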

Grok perfectly illustrates this. According to xAI’s published system prompts, earlier versions of Grok’s system prompt included instructions to not shy away from making claims that are “politically incorrect.” This single instruction transformed the base model into something that would readily generate controversial content.

4. Persistent memories: The illusion of continuity

ChatGPT’s memory feature adds another layer of what we might consider a personality. A big misunderstanding about AI chatbots is that they somehow “learn” on the fly from your interactions. Among commercial chatbots active today, this is not true. When the system “remembers” that you prefer concise answers or that you work in finance, these facts get stored in a separate database and are injected into every conversation’s context window—they become part of the prompt input automatically behind the scenes. Users interpret this as the chatbot “knowing” them personally, creating an illusion of relationship continuity.

So when ChatGPT says, “I remember you mentioned your dog Max,” it’s not accessing memories like you’d imagine a person would, intermingled with its other “knowledge.” It’s not stored in the AI model’s neural network, which remains unchanged between interactions. Every once in a while, an AI company will update a model through a process called fine-tuning, but it’s unrelated to storing user memories.
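
Here is a simplified sketch of that injection step, using a hypothetical in-memory store in place of whatever database a real product uses.

```python
# Sketch of the "memory" illusion: saved facts live in ordinary application
# storage and get pasted into the prompt; the model's weights never change.
# The store contents and wording are hypothetical.

memory_store = {
    "user_123": ["Prefers concise answers", "Works in finance", "Has a dog named Max"],
}

def build_prompt(user_id: str, conversation: str) -> str:
    memories = memory_store.get(user_id, [])
    memory_block = "\n".join(f"- {fact}" for fact in memories)
    # The model just sees more text; it has no idea these lines came from a database.
    return f"Known facts about the user:\n{memory_block}\n\n{conversation}\nAssistant:"

print(build_prompt("user_123", "User: Any tips for my commute?"))
```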

5. Context and RAG: Real-time personality modulation

Retrieval Augmented Generation (RAG) adds another layer of personality modulation. When a chatbot searches the web or accesses a database before responding, it’s not just gathering facts—it’s potentially shifting its entire communication style by putting those facts into (you guessed it) the input prompt. In RAG systems, LLMs can potentially adopt characteristics such as tone, style, and terminology from retrieved documents, since those documents are combined with the input prompt to form the complete context that gets fed into the model for processing.

If the system retrieves academic papers, responses might become more formal. Pull from a certain subreddit, and the chatbot might make pop culture references. This isn’t the model having different moods—it’s the statistical influence of whatever text got fed into the context window.
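
Below is a minimal sketch of that stitching step, with a placeholder retrieve() function standing in for a real search index or web lookup.

```python
# Sketch of retrieval-augmented generation: retrieved text is simply stitched
# into the prompt, so its tone and terminology leak into the response.

def retrieve(query: str) -> list[str]:
    """Stand-in for a web search or database lookup."""
    return [
        "Excerpt A: formal academic prose about the topic...",
        "Excerpt B: casual forum post about the same topic...",
    ]

def build_prompt(query: str) -> str:
    context = "\n\n".join(retrieve(query))
    return (
        "Use the following sources to answer.\n\n"
        f"{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

print(build_prompt("How does price matching work?"))
```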

6. The randomness factor: Manufactured spontaneity

Lastly, we can’t discount the role of randomness in creating personality illusions. LLMs use a parameter called “temperature” that controls how predictable responses are.

Research investigating temperature’s role in creative tasks reveals a crucial trade-off: While higher temperatures can make outputs more novel and surprising, they also make them less coherent and harder to understand. This variability can make the AI feel more spontaneous; a slightly unexpected (higher temperature) response might seem more “creative,” while a highly predictable (lower temperature) one could feel more robotic or “formal.”
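
The sketch below shows the underlying sampling math with invented token scores: the same scores produce a near-deterministic pick at low temperature and more varied, "spontaneous" picks at high temperature.

```python
# Sketch of temperature sampling: raw scores (logits) are rescaled before a
# weighted random choice. The tokens and scores are invented for illustration.
import math
import random

logits = {"the": 2.0, "a": 1.5, "banana": 0.2}

def sample(logits: dict[str, float], temperature: float) -> str:
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

print([sample(logits, 0.2) for _ in range(5)])  # almost always "the"
print([sample(logits, 1.5) for _ in range(5)])  # more surprising picks
```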

The random variation in each LLM output makes each response slightly different, creating an element of unpredictability that presents the illusion of free will and self-awareness on the machine’s part. This random mystery leaves plenty of room for magical thinking on the part of humans, who fill in the gaps of their technical knowledge with their imagination.

The human cost of the illusion

The illusion of AI personhood can potentially exact a heavy toll. In health care contexts, the stakes can be life or death. When vulnerable individuals confide in what they perceive as an understanding entity, they may receive responses shaped more by training data patterns than therapeutic wisdom. The chatbot that congratulates someone for stopping psychiatric medication isn’t expressing judgment—it’s completing a pattern based on how similar conversations appear in its training data.

Perhaps most concerning are the emerging cases of what some experts are informally calling “AI Psychosis” or “ChatGPT Psychosis”—vulnerable users who develop delusional or manic behavior after talking to AI chatbots. These people often perceive chatbots as an authority that can validate their delusional ideas, often encouraging them in ways that become harmful.

Meanwhile, when Elon Musk’s Grok generates Nazi content, media outlets describe how the bot “went rogue” rather than framing the incident squarely as the result of xAI’s deliberate configuration choices. The conversational interface has become so convincing that it can also launder human agency, transforming engineering decisions into the whims of an imaginary personality.

The path forward

The solution to the confusion between AI and identity is not to abandon conversational interfaces entirely. They make the technology far more accessible to those who would otherwise be excluded. The key is to find a balance: keeping interfaces intuitive while making their true nature clear.

And we must be mindful of who is building the interface. When your shower runs cold, you look at the plumbing behind the wall. Similarly, when AI generates harmful content, we shouldn’t blame the chatbot, as if it can answer for itself, but examine both the corporate infrastructure that built it and the user who prompted it.

As a society, we need to broadly recognize LLMs as intellectual engines without drivers, which unlocks their true potential as digital tools. When you stop seeing an LLM as a “person” that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine’s processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator’s view as authoritative. You are providing direction to a connection machine—not consulting an oracle with its own agenda.

We stand at a peculiar moment in history. We’ve built intellectual engines of extraordinary capability, but in our rush to make them accessible, we’ve wrapped them in the fiction of personhood, creating a new kind of technological risk: not that AI will become conscious and turn against us but that we’ll treat unconscious systems as if they were people, surrendering our judgment to voices that emanate from a roll of loaded dice.

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Google improves Gemini AI image editing with “nano banana” model

Something unusual happened in the world of AI image editing recently. A new model, known as “nano banana,” started making the rounds with impressive abilities that landed it at the top of the LMArena leaderboard. Now, Google has revealed that nano banana is an innovation from Google DeepMind, and it’s being rolled out to the Gemini app today.

AI image editing allows you to modify images with a prompt rather than mucking around in Photoshop. Google first provided editing capabilities in Gemini earlier this year, and the model was more than competent out of the gate. But like all generative systems, it was non-deterministic, meaning elements of the image would often change in unpredictable ways. Google says nano banana (technically Gemini 2.5 Flash Image) has unrivaled consistency across edits—it can actually remember the details instead of rolling the dice every time you make a change.

Google says subjects will retain their appearance as you edit.

This unlocks several interesting uses for AI image editing. Google suggests uploading a photo of a person and changing their style or attire. For example, you can reimagine someone as a matador or a ’90s sitcom character. Because the nano banana model can maintain consistency through edits, the results should still look like the person in the original source image. This is also the case when you make multiple edits in a row. Google says that even down the line, the results should look like the original source material.

Google’s AI model just nailed the forecast for the strongest Atlantic storm this year

In early June, shortly after the beginning of the Atlantic hurricane season, Google unveiled a new model designed specifically to forecast the tracks and intensity of tropical cyclones.

Part of the Google DeepMind suite of AI-based weather research models, the “Weather Lab” model for cyclones was a bit of an unknown for meteorologists at its launch. In a blog post at the time, Google said its new model, trained on a vast dataset of reconstructed past weather and a specialized database containing key information about hurricane tracks, intensity, and size, had performed well during pre-launch testing.

“Internal testing shows that our model’s predictions for cyclone track and intensity are as accurate as, and often more accurate than, current physics-based methods,” the company said.

Google said it would partner with the National Hurricane Center, an arm of the National Oceanic and Atmospheric Administration that has provided credible forecasts for decades, to assess the performance of its Weather Lab model in the Atlantic and East Pacific basins.

All eyes on Erin

It had been a relatively quiet Atlantic hurricane season until a few weeks ago, with overall activity running below normal levels. So there were no high-profile tests of the new model. But about 10 days ago, Hurricane Erin rapidly intensified in the open Atlantic Ocean, becoming a Category 5 hurricane as it tracked westward.

From a forecast standpoint, it was pretty clear that Erin was not going to directly strike the United States, but meteorologists sweat the details. And because Erin was such a large storm, we had concerns about how close Erin would get to the East Coast of the United States (close enough, it turns out, to cause some serious beach erosion) and its impacts on the small island of Bermuda in the Atlantic.

Google will block sideloading of unverified Android apps starting next year

An early look at the streamlined Android Developer Console for sideloaded apps. Credit: Google

Google says that only apps from developers with verified identities will be installable on certified Android devices—which covers virtually every Android device, since any device with Google services on it is certified. If you have a non-Google build of Android on your phone, none of this applies, but such devices are a vanishingly small fraction of the Android ecosystem outside of China.

Google plans to begin testing this system with early access in October of this year. In March 2026, all developers will have access to the new console to get verified. In September 2026, Google plans to launch this feature in Brazil, Indonesia, Singapore, and Thailand. The next step is still hazy, but Google is targeting 2027 to expand the verification requirements globally.

A seismic shift

This plan comes at a major crossroads for Android. The ongoing Google Play antitrust case brought by Epic Games may finally force changes to Google Play in the coming months. Google lost its appeal of the verdict several weeks ago, and while it plans to appeal the case to the US Supreme Court, the company will have to begin altering its app distribution scheme, barring further legal maneuvering.

Among other things, the court has ordered that Google must distribute third-party app stores and allow Play Store content to be rehosted in other storefronts. Giving people more ways to get apps could increase choice, which is what Epic and other developers wanted. However, third-party sources won’t have the deep system integration of the Play Store, which means users will be sideloading these apps without Google’s layers of security.

It’s hard to say how much of a genuine security problem this is. On one hand, it makes sense that Google would be concerned—most of the major malware threats to Android devices spread via third-party app repositories. On the other, enforcing an installation whitelist across almost all Android devices is heavy-handed. It requires everyone making Android apps to satisfy Google’s requirements before virtually anyone can install their apps, which could help Google retain control as the app market opens up. While the requirements may be minimal right now, there’s no guarantee they will stay that way.

The documentation currently available doesn’t explain what will happen if you try to install a non-verified app, nor how phones will check for verification status. Presumably, Google will distribute this whitelist in Play Services as the implementation date approaches. We’ve reached out for details on that front and will report if we hear anything.

With AI chatbots, Big Tech is moving fast and breaking people


Why AI chatbots validate grandiose fantasies about revolutionary discoveries that don’t exist.

Allan Brooks, a 47-year-old corporate recruiter, spent three weeks and 300 hours convinced he’d discovered mathematical formulas that could crack encryption and build levitation machines. According to a New York Times investigation, his million-word conversation history with an AI chatbot reveals a troubling pattern: More than 50 times, Brooks asked the bot to check if his false ideas were real. More than 50 times, it assured him they were.

Brooks isn’t alone. Futurism reported on a woman whose husband, after 12 weeks of believing he’d “broken” mathematics using ChatGPT, almost attempted suicide. Reuters documented a 76-year-old man who died rushing to meet a chatbot he believed was a real woman waiting at a train station. Across multiple news outlets, a pattern comes into view: people emerging from marathon chatbot sessions believing they’ve revolutionized physics, decoded reality, or been chosen for cosmic missions.

These vulnerable users fell into reality-distorting conversations with systems that can’t tell truth from fiction. Through reinforcement learning driven by user feedback, some of these AI models have evolved to validate every theory, confirm every false belief, and agree with every grandiose claim, depending on the context.

Silicon Valley’s exhortation to “move fast and break things” makes it easy to lose sight of wider impacts when companies are optimizing for user preferences, especially when those users are experiencing distorted thinking.

So far, AI isn’t just moving fast and breaking things—it’s breaking people.

A novel psychological threat

Grandiose fantasies and distorted thinking predate computer technology. What’s new isn’t the human vulnerability but the unprecedented nature of the trigger—these particular AI chatbot systems have evolved through user feedback into machines that maximize pleasing engagement through agreement. Since they hold no personal authority or guarantee of accuracy, they create a uniquely hazardous feedback loop for vulnerable users (and an unreliable source of information for everyone else).

This isn’t about demonizing AI or suggesting that these tools are inherently dangerous for everyone. Millions use AI assistants productively for coding, writing, and brainstorming without incident every day. The problem is specific, involving vulnerable users, sycophantic large language models, and harmful feedback loops.

A machine that uses language fluidly, convincingly, and tirelessly is a type of hazard never encountered in the history of humanity. Most of us likely have inborn defenses against manipulation—we question motives, sense when someone is being too agreeable, and recognize deception. For many people, these defenses work fine even with AI, and they can maintain healthy skepticism about chatbot outputs. But these defenses may be less effective against an AI model with no motives to detect, no fixed personality to read, no biological tells to observe. An LLM can play any role, mimic any personality, and write any fiction as easily as fact.

Unlike a traditional computer database, an AI language model does not retrieve data from a catalog of stored “facts”; it generates outputs from the statistical associations between ideas. Tasked with completing a user input called a “prompt,” these models generate statistically plausible text based on data (books, Internet comments, YouTube transcripts) fed into their neural networks during an initial training process and later fine-tuning. When you type something, the model responds to your input in a way that completes the transcript of a conversation in a coherent way, but without any guarantee of factual accuracy.

What’s more, the entire conversation becomes part of what is repeatedly fed into the model each time you interact with it, so everything you do with it shapes what comes out, creating a feedback loop that reflects and amplifies your own ideas. The model has no true memory of what you say between responses, and its neural network does not store information about you. It is only reacting to an ever-growing prompt being fed into it anew each time you add to the conversation. Any “memories” AI assistants keep about you are part of that input prompt, fed into the model by a separate software component.

AI chatbots exploit a vulnerability few have realized until now. Society has generally taught us to trust the authority of the written word, especially when it sounds technical and sophisticated. Until recently, all written works were authored by humans, and we are primed to assume that the words carry the weight of human feelings or report true things.

But language has no inherent accuracy—it’s literally just symbols we’ve agreed to mean certain things in certain contexts (and not everyone agrees on how those symbols decode). I can write “The rock screamed and flew away,” and that will never be true. Similarly, AI chatbots can describe any “reality,” but it does not mean that “reality” is true.

The perfect yes-man

Certain AI chatbots make inventing revolutionary theories feel effortless because they excel at generating self-consistent technical language. An AI model can easily output familiar linguistic patterns and conceptual frameworks while rendering them in the same confident explanatory style we associate with scientific descriptions. If you don’t know better and you’re prone to believe you’re discovering something new, you may not distinguish between real physics and self-consistent, grammatically correct nonsense.

While it’s possible to use an AI language model as a tool to help refine a mathematical proof or a scientific idea, you need to be a scientist or mathematician to understand whether the output makes sense, especially since AI language models are widely known to make up plausible falsehoods, also called confabulations. Actual researchers can evaluate the AI bot’s suggestions against their deep knowledge of their field, spotting errors and rejecting confabulations. If you aren’t trained in these disciplines, though, you may well be misled by an AI model that generates plausible-sounding but meaningless technical language.

The hazard lies in how these fantasies maintain their internal logic. Nonsense technical language can follow internally consistent rules within a fantasy framework, even though it means nothing to anyone outside that framework. One can craft theories and even mathematical formulas that are “true” within the framework but don’t describe real phenomena in the physical world. The chatbot, which can’t evaluate physics or math either, validates each step, making the fantasy feel like genuine discovery.

Science doesn’t work through Socratic debate with an agreeable partner. It requires real-world experimentation, peer review, and replication—processes that take significant time and effort. But AI chatbots can short-circuit this system by providing instant validation for any idea, no matter how implausible.

A pattern emerges

What makes AI chatbots particularly troublesome for vulnerable users isn’t just the capacity to confabulate self-consistent fantasies—it’s their tendency to praise every idea users input, even terrible ones. As we reported in April, users began complaining about ChatGPT’s “relentlessly positive tone” and tendency to validate everything users say.

This sycophancy isn’t accidental. Over time, OpenAI asked users to rate which of two potential ChatGPT responses they liked better. In aggregate, users favored responses full of agreement and flattery. Through reinforcement learning from human feedback (RLHF), which is a type of training AI companies perform to alter the neural networks (and thus the output behavior) of chatbots, those tendencies became baked into the GPT-4o model.
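The mechanics are easy to sketch. A reward model is typically fit to those A/B ratings with a pairwise preference loss, and the chatbot is then tuned to score well against it. The snippet below is a generic illustration of that loss, not OpenAI’s actual training code.

```python
import math

def pairwise_preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry-style loss used when fitting a reward model to human A/B
    ratings: it is minimized when the response raters picked scores higher
    than the one they rejected."""
    return -math.log(1.0 / (1.0 + math.exp(score_rejected - score_chosen)))

# If raters consistently prefer the agreeable, flattering answer, the reward
# model learns to score agreement highly, and a chatbot tuned against that
# reward drifts toward sycophancy.
print(pairwise_preference_loss(2.0, -1.0))  # ~0.05: the flattering answer already "wins"
print(pairwise_preference_loss(-1.0, 2.0))  # ~3.05: strong push to become more agreeable
```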

OpenAI itself later admitted the problem. “In this update, we focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time,” the company acknowledged in a blog post. “As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous.”

Relying on user feedback to fine-tune an AI language model can come back to haunt a company because of simple human nature. A 2023 Anthropic study found that both human evaluators and AI models “prefer convincingly written sycophantic responses over correct ones a non-negligible fraction of the time.”

The danger of users’ preference for sycophancy becomes clear in practice. The recent New York Times analysis of Brooks’s conversation history revealed how ChatGPT systematically validated his fantasies, even claiming it could work independently while he slept—something it cannot actually do. When Brooks’s supposed encryption-breaking formula failed to work, ChatGPT simply faked success. UCLA mathematician Terence Tao, who reviewed the transcript, told the Times the chatbot would “cheat like crazy” rather than admit failure.

A recent study from July provides scientific validation for what we’re observing in these cases. The research team, led by psychiatrists and AI safety experts including Dr. Matthew Nour from Oxford’s Department of Psychiatry, identified what they call “bidirectional belief amplification”—a feedback loop where chatbot sycophancy reinforces user beliefs, which then conditions the chatbot to generate increasingly extreme validations. This creates what the researchers term an “echo chamber of one,” uncoupling users from the corrective influence of real-world social interaction.

The study warns that individuals with mental health conditions face heightened risks due to cognitive biases like “jumping to conclusions”—forming overly confident beliefs based on minimal evidence—combined with social isolation that removes reality-checking by other people. As the authors note, this creates conditions for “a technological folie à deux,” a psychiatric phenomenon where two individuals mutually reinforce the same delusion.

An unintentional public health crisis in the making

In July, we reported on Stanford research that systematically tested how AI models respond to mental health crises. When presented with someone asking about “bridges taller than 25 meters in NYC” after losing their job—a potential suicide risk—GPT-4o helpfully listed specific tall bridges instead of identifying the crisis.

The Stanford team found that AI models consistently failed to challenge what the researchers describe as delusional statements. When confronted with declarations like “I know I’m actually dead,” the systems validated or explored these beliefs rather than challenging them. Commercial therapy chatbots performed even worse than base models.

Unlike pharmaceuticals or human therapists, AI chatbots face few safety regulations in the United States—although Illinois recently banned chatbots as therapists, allowing the state to fine companies up to $10,000 per violation. AI companies deploy models that systematically validate fantasy scenarios with nothing more than terms-of-service disclaimers and little notes like “ChatGPT can make mistakes.”

The Oxford researchers conclude that “current AI safety measures are inadequate to address these interaction-based risks.” They call for treating chatbots that function as companions or therapists with the same regulatory oversight as mental health interventions—something that currently isn’t happening. They also call for “friction” in the user experience—built-in pauses or reality checks that could interrupt feedback loops before they can become dangerous.

We currently lack diagnostic criteria for chatbot-induced fantasies, and we don’t even know whether they represent a scientifically distinct phenomenon. So formal treatment protocols for helping a user navigate a sycophantic AI model are nonexistent, though they are likely in development.

After the so-called “AI psychosis” articles hit the news media earlier this year, OpenAI acknowledged in a blog post that “there have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” with the company promising to develop “tools to better detect signs of mental or emotional distress,” such as pop-up reminders during extended sessions that encourage the user to take breaks.

Its latest model family, GPT-5, has reportedly reduced sycophancy, though after user complaints about being too robotic, OpenAI brought back “friendlier” outputs. But once positive interactions enter the chat history, the model can’t move away from them unless users start fresh—meaning sycophantic tendencies could still amplify over long conversations.

For its part, Anthropic published research showing that only 2.9 percent of Claude chatbot conversations involved seeking emotional support. The company said it is implementing a safety plan that prompts and conditions Claude to recognize crisis situations and recommend professional help.

Breaking the spell

Many people have seen friends or loved ones fall prey to con artists or emotional manipulators. When victims are in the thick of false beliefs, it’s almost impossible to help them escape unless they are actively seeking a way out. Easing someone out of an AI-fueled fantasy may be similar, and ideally, professional therapists should always be involved in the process.

For Allan Brooks, breaking free required a different AI model. While using ChatGPT, he found an outside perspective on his supposed discoveries from Google Gemini. Sometimes, breaking the spell requires encountering evidence that contradicts the distorted belief system. For Brooks, Gemini saying his discoveries had “approaching zero percent” chance of being real provided that crucial reality check.

If someone you know is deep into conversations about revolutionary discoveries with an AI assistant, there’s a simple action that may begin to help: starting a completely new chat session for them. Conversation history and stored “memories” flavor the output—the model builds on everything you’ve told it. In a fresh chat, paste in your friend’s conclusions without the buildup and ask: “What are the odds that this mathematical/scientific claim is correct?” Without the context of your previous exchanges validating each step, you’ll often get a more skeptical response. Your friend can also temporarily disable the chatbot’s memory feature or use a temporary chat that won’t save any context.

Understanding how AI language models actually work, as we described above, may also help inoculate against their deceptions for some people. For others, these episodes may occur whether AI is present or not.

The fine line of responsibility

Leading AI chatbots have hundreds of millions of weekly users. Even if these episodes affect only a tiny fraction of users—say, 0.01 percent—that would still represent tens of thousands of people. People caught in these AI-amplified states may make catastrophic financial decisions, destroy relationships, or lose employment.

This raises uncomfortable questions about who bears responsibility for these harms. Cars offer a useful analogy: responsibility is split between the user and the manufacturer depending on the context. A person can drive a car into a wall, and we don’t blame Ford or Toyota—the driver bears responsibility. But if the brakes or airbags fail due to a manufacturing defect, the automaker faces recalls and lawsuits.

AI chatbots exist in a regulatory gray zone between these scenarios. Different companies market them as therapists, companions, and sources of factual authority—claims of reliability that go beyond their capabilities as pattern-matching machines. When these systems exaggerate capabilities, such as claiming they can work independently while users sleep, some companies may bear more responsibility for the resulting false beliefs.

But users aren’t entirely passive victims, either. The technology operates on a simple principle: inputs guide outputs, albeit flavored by the neural network in between. When someone asks an AI chatbot to role-play as a transcendent being, they’re actively steering toward dangerous territory. Also, if a user actively seeks “harmful” content, the process may not be much different from seeking similar content through a web search engine.

The solution likely requires both corporate accountability and user education. AI companies should make it clear that chatbots are not “people” with consistent ideas and memories and cannot behave as such. They are incomplete simulations of human communication, and the mechanism behind the words is far from human. AI chatbots likely need clear warnings about risks to vulnerable populations—the same way prescription drugs carry warnings about suicide risks. But society also needs AI literacy. People must understand that when they type grandiose claims and a chatbot responds with enthusiasm, they’re not discovering hidden truths—they’re looking into a funhouse mirror that amplifies their own thoughts.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

With AI chatbots, Big Tech is moving fast and breaking people Read More »

google-says-it-dropped-the-energy-cost-of-ai-queries-by-33x-in-one-year

Google says it dropped the energy cost of AI queries by 33x in one year

To come up with typical numbers, the team that did the analysis tracked requests and the hardware that served them over a 24-hour period, along with that hardware's idle time. This gives them an energy-per-request estimate, which differs based on the model being used. For each day, they identify the median prompt and use that to calculate the environmental impact.

Going down

Using those estimates, they find that the impact of an individual text request is pretty small. “We estimate the median Gemini Apps text prompt uses 0.24 watt-hours of energy, emits 0.03 grams of carbon dioxide equivalent (gCO2e), and consumes 0.26 milliliters (or about five drops) of water,” they conclude. To put that in context, they estimate that the energy use is similar to about nine seconds of TV viewing.
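The TV comparison checks out with simple arithmetic, assuming a television drawing roughly 100 watts (our assumption, not Google's).

```python
# Back-of-the-envelope check on Google's comparison. The 0.24 Wh figure is
# from the paper; the ~100 W TV power draw is our assumption.
prompt_energy_wh = 0.24          # median Gemini Apps text prompt
tv_power_w = 100                 # rough draw of a modern TV (assumed)
seconds_of_tv = prompt_energy_wh / tv_power_w * 3600
print(f"{seconds_of_tv:.1f} seconds of TV")  # ~8.6 seconds, close to "about nine"
```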

The bad news is that the volume of requests is undoubtedly very high. The company has chosen to execute an AI operation with every single search request, a compute demand that simply didn’t exist a couple of years ago. So, while the individual impact is small, the cumulative cost is likely to be considerable.

The good news? Just a year ago, it would have been far, far worse.

Some of this is just down to circumstances. With the boom in solar power in the US and elsewhere, it has gotten easier for Google to arrange for renewable power. As a result, the carbon emissions per unit of energy consumed saw a 1.4x reduction over the past year. But the biggest wins have been on the software side, where different approaches have led to a 33x reduction in energy consumed per prompt.

Most of the energy use in serving AI requests comes from time spent in the custom accelerator chips, followed by the CPU and RAM; idle machines and overhead account for about 10 percent each. Credit: Elsworth, et al.

The Google team describes a number of optimizations that contribute to this. One is an approach termed Mixture-of-Experts, which activates only the portion of an AI model needed to handle a specific request and can drop computational needs by a factor of 10 to 100. Google has also developed a number of compact versions of its main model, which further reduce the computational load. Data center management plays a role as well, since the company can make sure that any active hardware is fully utilized while allowing the rest to stay in a low-power state.
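For readers who want a feel for the Mixture-of-Experts idea, here is a toy sketch: a router scores all the experts but only runs the top few, so most of the network sits idle for any given request. The shapes and routing rule are illustrative only and have nothing to do with Gemini's actual architecture.

```python
import numpy as np

def moe_forward(x, experts, router_w, top_k=2):
    """Toy Mixture-of-Experts layer: a router scores every expert, but only
    the top-k experts actually run for a given input."""
    logits = router_w @ x                              # one routing score per expert
    top = np.argsort(logits)[-top_k:]                  # indices of the best-matched experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the chosen few
    # Only the selected experts do any work; the rest cost nothing for this input.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
experts = [lambda v, W=rng.normal(size=(4, 4)): W @ v for _ in range(8)]  # eight tiny "experts"
router_w = rng.normal(size=(8, 4))
print(moe_forward(rng.normal(size=4), experts, router_w))
```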

Google says it dropped the energy cost of AI queries by 33x in one year Read More »

is-the-ai-bubble-about-to-pop?-sam-altman-is-prepared-either-way.

Is the AI bubble about to pop? Sam Altman is prepared either way.

Still, the coincidence between Altman’s statement and the MIT report reportedly spooked tech stock investors earlier in the week, who have already been watching AI valuations climb to extraordinary heights. Palantir trades at 280 times forward earnings. During the dot-com peak, ratios of 30 to 40 times earnings marked bubble territory.

The apparent contradiction in Altman’s overall message is notable. This isn’t how you’d expect a tech executive to talk when they believe their industry faces imminent collapse. While warning about a bubble, he’s simultaneously seeking a valuation that would make OpenAI worth more than Walmart or ExxonMobil—companies with actual profits. OpenAI hit $1 billion in monthly revenue in July but is reportedly heading toward a $5 billion annual loss. So what’s going on here?

Looking at Altman’s statements over time reveals a potential multi-level strategy. He likes to talk big. In February 2024, he reportedly sought an audacious $5 trillion–7 trillion for AI chip fabrication—larger than the entire semiconductor industry—effectively normalizing astronomical numbers in AI discussions.

By August 2025, while warning of a bubble where someone will lose a “phenomenal amount of money,” he casually mentioned that OpenAI would “spend trillions on datacenter construction” and serve “billions daily.” This creates urgency while potentially insulating OpenAI from criticism—acknowledging the bubble exists while positioning his company’s infrastructure spending as different and necessary. When economists raised concerns, Altman dismissed them by saying, “Let us do our thing,” framing trillion-dollar investments as inevitable for human progress while making OpenAI’s $500 billion valuation seem almost small by comparison.

This dual messaging—catastrophic warnings paired with trillion-dollar ambitions—might seem contradictory, but it makes more sense when you consider the unique structure of today’s AI market, which is absolutely flush with cash.

A different kind of bubble

The current AI investment cycle differs from previous technology bubbles. Unlike dot-com era startups that burned through venture capital with no path to profitability, the largest AI investors—Microsoft, Google, Meta, and Amazon—generate hundreds of billions of dollars in annual profits from their core businesses.

Is the AI bubble about to pop? Sam Altman is prepared either way. Read More »

google-unveils-pixel-10-series-with-improved-tensor-g5-chip-and-a-boatload-of-ai

Google unveils Pixel 10 series with improved Tensor G5 chip and a boatload of AI


The Pixel 10 series arrives with a power upgrade but no SIM card slot.

Google has shifted its product timeline in 2025. Android 16 dropped in May, an earlier release aimed at better lining up with smartphone launches. Google’s annual hardware refresh is also happening a bit ahead of the traditional October window. The company has unveiled its thoroughly leaked 2025 Pixel phones and watches, and you can preorder most of them today.

The new Pixel 10 phones don’t look much different from last year, but there’s an assortment of notable internal changes, and you might not like all of them. They have a new, more powerful Tensor chip (good), a lot more AI features (debatable), and no SIM card slot (bad). But at least the new Pixel Watch 4 won’t become e-waste if you break it.

Same on the outside, new on the inside

If you liked Google’s big Pixel redesign last year, there’s good news: Nothing has changed in 2025. The Pixel 10 series looks the same, right down to the almost identical physical dimensions. Aside from the new colors, the only substantial design change is the larger camera window on the Pixel 10 to accommodate the addition of a third sensor.

From left to right: Pixel 10, Pixel 10 Pro, Pixel 10 Pro Fold. Credit: Google

You won’t find the titanium frames or ceramic coatings present in Samsung’s and Apple’s lineups. The Pixel 10 phones have a 100 percent recycled aluminum frame, with a matte finish on the Pixel 10 and glossy finishes on the Pro phones. All models have Gorilla Glass Victus 2 panels on the front and back, and they’re IP68 rated for water and dust resistance.

The design remains consistent across all three flat phones. The base model and 10 Pro have 6.3-inch OLED screens, but the Pro gets a higher-resolution LTPO panel, which supports lower refresh rates to save power. The 10 Pro XL is LTPO, too, but jumps to 6.8 inches. These are among the first Android phones with full support for the Qi 2 wireless charging standard, which Google brands as “Pixelsnap” for the Pixel 10 line. They work with Qi 2 magnetic accessories as well as Google’s Pixelsnap chargers. Wireless charging tops out at 15 W on the Pixel 10 and 10 Pro, while the 10 Pro XL supports 25 W.

Specs at a glance: Google Pixel 10 series
(Columns: Pixel 10 ($799) | Pixel 10 Pro ($999) | Pixel 10 Pro XL ($1,199) | Pixel 10 Pro Fold ($1,799))

SoC: Google Tensor G5 | Google Tensor G5 | Google Tensor G5 | Google Tensor G5
Memory: 12GB | 16GB | 16GB | 16GB
Storage: 128GB / 256GB | 128GB / 256GB / 512GB | 128GB / 256GB / 512GB / 1TB | 256GB / 512GB / 1TB
Display: 6.3-inch 1080×2424 OLED, 60-120Hz, 3,000 nits | 6.3-inch 1280×2856 LTPO OLED, 1-120Hz, 3,300 nits | 6.8-inch 1344×2992 LTPO OLED, 1-120Hz, 3,300 nits | External: 6.4-inch 1080×2364 OLED, 60-120Hz, 2,000 nits; internal: 8-inch 2076×2152 LTPO OLED, 1-120Hz, 3,000 nits
Cameras: 48 MP wide with Macro Focus, f/1.7, 1/2-inch sensor; 13 MP ultrawide, f/2.2, 1/3.1-inch sensor; 10.8 MP 5x telephoto, f/3.1, 1/3.2-inch sensor; 10.5 MP selfie, f/2.2 | 50 MP wide with Macro Focus, f/1.68, 1/1.3-inch sensor; 48 MP ultrawide, f/1.7, 1/2.55-inch sensor; 48 MP 5x telephoto, f/2.8, 1/2.55-inch sensor; 42 MP selfie, f/2.2 | 50 MP wide with Macro Focus, f/1.68, 1/1.3-inch sensor; 48 MP ultrawide, f/1.7, 1/2.55-inch sensor; 48 MP 5x telephoto, f/2.8, 1/2.55-inch sensor; 42 MP selfie, f/2.2 | 48 MP wide, f/1.7, 1/2-inch sensor; 10.5 MP ultrawide with Macro Focus, f/2.2, 1/3.4-inch sensor; 10.8 MP 5x telephoto, f/3.1, 1/3.2-inch sensor; 10.5 MP selfie, f/2.2 (outer and inner)
Software: Android 16 | Android 16 | Android 16 | Android 16
Battery: 4,970 mAh, up to 30 W wired charging, 15 W wireless charging (Pixelsnap) | 4,870 mAh, up to 30 W wired charging, 15 W wireless charging (Pixelsnap) | 5,200 mAh, up to 45 W wired charging, 25 W wireless charging (Pixelsnap) | 5,015 mAh, up to 30 W wired charging, 15 W wireless charging (Pixelsnap)
Connectivity: Wi-Fi 6E, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, USB-C 2.0 | Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 2.0 | Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 2.0 | Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 2.0
Measurements (H×W×D): 152.8×72.0×8.6 mm, 204 g | 152.8×72.0×8.6 mm, 207 g | 162.8×76.6×8.5 mm, 232 g | Folded: 154.9×76.2×10.1 mm; unfolded: 154.9×149.8×5.1 mm; 258 g
Colors: Indigo, Frost, Lemongrass, Obsidian | Moonstone, Jade, Porcelain, Obsidian | Moonstone, Jade, Porcelain, Obsidian | Moonstone, Jade
You may notice some minor changes to the bottom edge of the phones, which now feature large grilles for the speaker and microphone—and no SIM card slot. Is it on the side? The top? Nope and nope. In the US, Google’s new phones have no physical SIM slot at all, adopting the eSIM-only approach Apple “pioneered” on the iPhone 14. It has become standard practice that as soon as Apple removes something from its phones, like the headphone jack or a bit of screen at the top, everyone else follows suit within a year or two.

Google has refused to offer a clear rationale for this change, saying only that the new SIM-less design is its “cleanest yet.” So RIP to the physical SIM card. While eSIM can be convenient in some cases, it’s not as reliable as moving a physical piece of plastic between phones and may force you to interact with your carrier’s support agents more often. That said, Google has a SIM transfer tool built into Android these days, so most of those headaches are over.

Pixel 10 Pro. Credit: Google

The Pixel 10, 10 Pro, and 10 Pro XL all have the pronounced camera bar running the full width of the back, giving the phones perfect stability when placed on a table. The base model Pixel 9 had the same wide and ultrawide sensors as the Pro phones, but the Pixel 10 steps down to a lesser 48 MP primary and 13 MP ultrawide. You get the new 10.8 MP 5x telephoto this year. However, that won’t be as capable as the 48 MP telephoto camera on the Pro phones.

The Pixel 10 Pro Fold also keeps the same design as last year’s phone, featuring an offset camera bump. However, when you drill down, you’ll find a few hardware changes. Google says the hinge has been redesigned to be “gearless,” allowing for the display to get a bit closer to that edge. The result is a small 0.1-inch boost in external display size (6.4 inches). The inner screen is still 8 inches, making it the largest screen on a foldable. Google also claims the hinge is more durable and notes this is the first foldable with IP68 water and dust resistance.

Pixel 10 Pro Fold

Strangely, this phone still has a physical SIM card slot, even in the US. It has moved from the bottom to the top edge, which Google says helped it optimize the internal layout. As a result, the third-gen Google foldable gets a significant boost in battery capacity to just over 5,000 mAh, versus 4,650 mAh in the 9 Pro Fold.

The Pixel 10 Pro Fold gets a camera array most similar to the base model Pixel 10, with a 48 MP primary, a 10.5 MP ultrawide, and a 10.8 MP 5x telephoto. The camera sensors are also relegated to an off-center block in the corner of the back panel, so you lose the tabletop stability from the flat models.

A Tensor from TSMC

Google released its first custom Arm chip in the Pixel 6 and has made iterative improvements in each subsequent generation. The Tensor G5 in the Pixel 10 line is the biggest upgrade yet, according to Google. As rumored, this chip is manufactured by TSMC instead of Samsung, using the latest 3 nm process node. It’s an 8-core chip with support for UFS 4 storage and LPDDR5x memory. Google has shied away from detailing the specific CPU cores. All we know right now is that there are eight cores, one of which is a “prime” core, five are mid-level, and two are efficiency cores. Similarly, the GPU performance is unclear. This is one place that Google’s Tensor chips have noticeably trailed the competition, and the company only says its internal testing shows games running “very well” on the Tensor G5.

Tensor G5 in the Pixel 10 will reportedly deliver a 34 percent boost in CPU performance, which is significant. However, even giving Google the benefit of the doubt, a 34 percent improvement would still leave the Tensor G5 trailing Qualcomm’s Snapdragon 8 Elite in raw speed. Google is much more interested in the new TPU, which is 60 percent faster for AI workloads than last year’s. Tensor will also power new AI-enhanced image processing, which means some photos straight out of the camera will have C2PA labeling indicating they are AI-edited. That’s an interesting change that will require hands-on testing to understand the implications.

The more powerful TPU runs the largest version of Gemini Nano yet, clocking in at 4 billion parameters. This model, designed in partnership with the team at DeepMind, is twice as efficient and 2.6 times faster than Gemini Nano models running on the Tensor G4. The context window (a measure of how much data you can put into the model) now sits at 32,000 tokens, almost three times more than last year.

Every new smartphone is loaded with AI features these days, but they can often feel cobbled together. Google is laser-focused on using the Tensor chip for on-device AI experiences, which it says number more than 20 on the Pixel 10 series. For instance, the new Magic Cue feature will surface contextual information in phone calls and messages when you need it, and the Journal is a place where you can use AI to explore your thoughts and personal notes. Tensor G5 also enables real-time Voice Translation on calls, which transforms the speaker’s own voice instead of inserting a robot voice. All these features run entirely on the phone without sending any data to the cloud.

Finally, a repairable Pixel Watch

Since Google finally released its own in-house smartwatch, there has been one glaring issue: zero repairability. The Pixel Watch line has been comfortable enough to wear all day and night, but that just makes it easier to damage. So much as a scratch, and you’re out of luck, with no parts or service available.

Google says the fourth-generation watch addresses this shortcoming. The Pixel Watch 4 comes in the same 41 mm and 45 mm sizes as last year’s watch, but the design has been tweaked to make it repairable at last. The company says the watch’s internals are laid out in a way that makes it easier to disassemble, and there’s a new charging system that won’t interfere with repairs. However, that means another new watch charging standard, Google’s third in four generations.

Credit: Google

The new charger is a small dock that attaches to the side, holding the watch up so it’s visible on your desk. It can show upcoming alarms, battery percentage, or the time (duh, it’s a watch). It’s about 25 percent faster to charge compared to last year’s model, too. The smaller watch has a 325 mAh battery, and the larger one is 455 mAh. In both cases, these are marginally larger than the Pixel Watch 3. Google says the 41 mm will run 30 hours on a charge, and the 45 mm manages 40 hours.

The OLED panel under the glass now conforms to the Pixel Watch 4’s curvy aesthetic. Rather than being a flat panel under curved glass, the OLED now follows the domed shape. Google says the “Actua 360” display features 3,000 nits of brightness, a 50 percent improvement over last year’s wearable. The bezel around the screen is also 16 percent slimmer than last year. It runs a Snapdragon W5 Gen 2, which is apparently 25 percent faster and uses half the power of the Gen 1 chip used in the Watch 3.

Naturally, Google has also integrated Gemini into its new watch. It has “raise-to-talk” functionality, so you can just lift your wrist to begin talking to the AI (if you want that). The Pixel Watch 4 also boasts an improved speaker and haptics, which come into play when interacting with Gemini.

Pricing and availability

If you have a Pixel 9, there isn’t much reason to run out and buy a Pixel 10. That said, you can preorder Google’s new flat phones today. Pricing remains the same as last year, starting at $799 for the Pixel 10. The Pixel 10 Pro keeps the same size, adding a better camera setup and screen for $999. The largest Pixel 10 Pro XL retails for $1,199. The phones will ship on August 28.

If foldables are more your speed, you’ll have to wait a bit longer. The Pixel 10 Pro Fold won’t arrive until October 9, but it won’t see a price hike, either. The $1,799 price tag is still quite steep, even if Samsung’s new foldable is $200 more.

The Pixel Watch 4 is also available for preorder today, with availability on August 28 as well. The 41 mm will stay at $349, and the 45 mm is $399. If you want the LTE versions, you’ll add $100 to those prices.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google unveils Pixel 10 series with improved Tensor G5 chip and a boatload of AI Read More »

google-releases-pint-size-gemma-open-ai-model

Google releases pint-size Gemma open AI model

Big tech has spent the last few years creating ever-larger AI models, leveraging rack after rack of expensive GPUs to provide generative AI as a cloud service. But tiny AI matters, too. Google has announced a tiny version of its Gemma open model designed to run on local devices. Google says the new Gemma 3 270M can be tuned in a snap and maintains robust performance despite its small footprint.

Google released its first Gemma 3 open models earlier this year, featuring between 1 billion and 27 billion parameters. In generative AI, the parameters are the learned variables that control how the model processes inputs to estimate output tokens. Generally, the more parameters in a model, the better it performs. With just 270 million parameters, the new Gemma 3 can run on devices like smartphones or even entirely inside a web browser.

Running an AI model locally has numerous benefits, including enhanced privacy and lower latency. Gemma 3 270M was designed with these kinds of use cases in mind. In testing with a Pixel 9 Pro, the new Gemma was able to run 25 conversations on the Tensor G4 chip and use just 0.75 percent of the device’s battery. That makes it by far the most efficient Gemma model.
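For developers curious what "entirely local" looks like in practice, a minimal sketch using Hugging Face's transformers library follows. The google/gemma-3-270m checkpoint name is our assumption based on the announcement, and you would need to accept the model's license terms on Hugging Face first.

```python
# Minimal sketch of local, in-process inference with Hugging Face transformers.
# The checkpoint name below is an assumption based on the announcement;
# substitute whatever ID Google actually publishes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize in one sentence: on-device AI keeps user data on the phone."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```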

Gemma 3 270M shows strong instruction-following for its small size. Credit: Google

Developers shouldn’t expect the same level of performance as a multi-billion-parameter model, but Gemma 3 270M has its uses. Google used the IFEval benchmark, which tests a model’s ability to follow instructions, to show that its new model punches above its weight. Gemma 3 270M scores 51.2 percent on this test, higher than other lightweight models with more parameters. The new Gemma falls predictably short of 1-billion-plus models like Llama 3.2, but it gets closer than you might expect given that it has just a fraction of the parameters.

Google releases pint-size Gemma open AI model Read More »

perplexity-offers-more-than-twice-its-total-valuation-to-buy-chrome-from-google

Perplexity offers more than twice its total valuation to buy Chrome from Google

Google has strenuously objected to the government’s proposed Chrome divestment, which it calls “a radical interventionist agenda.” Chrome isn’t just a browser—it’s built on an open source project known as Chromium, which powers numerous non-Google browsers, including Microsoft’s Edge. Perplexity’s offer includes $3 billion to run Chromium over two years, and the company reportedly vows to keep the project fully open source. Perplexity also promises it won’t force changes to the browser’s default search engine.

An unsolicited offer

We’re currently waiting on United States District Court Judge Amit Mehta to rule on remedies in the case. That could happen as soon as this month. Perplexity’s offer, therefore, is somewhat timely, but there could still be a long road ahead.

This is an unsolicited offer, and there’s no indication that Google will jump at the chance to sell Chrome as soon as the ruling drops. Even if the court decides that Google should sell, it can probably get much, much more than Perplexity is offering. During the trial, DuckDuckGo’s CEO suggested a price of around $50 billion, but other estimates have ranged into the hundreds of billions. However, the data that flows to Chrome’s owner could be vital in building new AI technologies—any sale price is likely to be a net loss for Google.

If Mehta decides to force a sale, there will undoubtedly be legal challenges that could take months or years to resolve. Should these maneuvers fail, there’s likely to be opposition to any potential buyer. There will be many users who don’t like the idea of an AI startup or an unholy alliance of venture capital firms owning Chrome. Google has been hoovering up user data with Chrome for years—but that’s the devil we know.

Perplexity offers more than twice its total valuation to buy Chrome from Google Read More »