
OpenAI releases GPT-5.2 after “code red” Google threat alert

On Thursday, OpenAI released GPT-5.2, its newest family of AI models for ChatGPT, in three versions called Instant, Thinking, and Pro. The release follows CEO Sam Altman’s internal “code red” memo earlier this month, which directed company resources toward improving ChatGPT in response to competitive pressure from Google’s Gemini 3 AI model.

“We designed 5.2 to unlock even more economic value for people,” Fidji Simo, OpenAI’s chief product officer, said during a press briefing with journalists on Thursday. “It’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools and then linking complex, multi-step projects.”

As with previous versions of GPT-5, the three model tiers serve different purposes: Instant handles faster tasks like writing and translation; Thinking spits out simulated reasoning “thinking” text in an attempt to tackle more complex work like coding and math; and Pro spits out even more simulated reasoning text with the goal of delivering the highest-accuracy performance for difficult problems.

A chart of GPT-5.2 Thinking benchmark results comparing it to its predecessor, taken from OpenAI’s website. Credit: OpenAI

GPT-5.2 features a 400,000-token context window, allowing it to process hundreds of documents at once, and a knowledge cutoff date of August 31, 2025.

GPT-5.2 is rolling out to paid ChatGPT subscribers starting Thursday, with API access available to developers. Pricing in the API runs $1.75 per million input tokens for the standard model, a 40 percent increase over GPT-5.1. OpenAI says the older GPT-5.1 will remain available in ChatGPT for paid users for three months under a legacy models dropdown.
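
For a rough sense of what that rate means in practice, here is a minimal back-of-envelope sketch; the token counts are illustrative, and output-token pricing (which OpenAI bills separately) is not modeled.

```python
# Back-of-envelope input-token costs at GPT-5.2's listed API rate.
# Token counts below are illustrative; output tokens are billed separately
# and aren't included here.
INPUT_PRICE_PER_MILLION = 1.75  # dollars per million input tokens

def input_cost(tokens: int) -> float:
    """Dollar cost of sending `tokens` input tokens at the standard rate."""
    return tokens / 1_000_000 * INPUT_PRICE_PER_MILLION

# A 40 percent increase over GPT-5.1 implies the older model's input rate:
print(f"Implied GPT-5.1 input rate: ${INPUT_PRICE_PER_MILLION / 1.4:.2f} per million tokens")

# Filling the full 400,000-token context window once:
print(f"One full 400K-token prompt: ${input_cost(400_000):.2f}")
```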

Playing catch-up with Google

The release follows a tricky month for OpenAI. In early December, Altman issued an internal “code red” directive after Google’s Gemini 3 model topped multiple AI benchmarks and gained market share. The memo called for delaying other initiatives, including advertising plans for ChatGPT, to focus on improving the chatbot’s core experience.

The stakes for OpenAI are substantial. The company has made commitments totaling $1.4 trillion for AI infrastructure buildouts over the next several years, bets it made when it had a more obvious technology lead among AI companies. Google’s Gemini app now has more than 650 million monthly active users, while OpenAI reports 800 million weekly active users for ChatGPT.


Disney says Google AI infringes copyright “on a massive scale”

While Disney wants its characters out of Google AI generally, the letter specifically cited the AI tools in YouTube. Google has started adding its Veo AI video model to YouTube, allowing creators to more easily create and publish videos. That seems to be a greater concern for Disney than image models like Nano Banana.

Google has said little about Disney’s warning—a warning Google must have known was coming. A Google spokesperson issued the following brief statement on the matter.

“We have a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them,” Google says. “More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-extended and Content ID for YouTube, which give sites and copyright holders control over their content.”

Perhaps this previews Google’s argument in a potential lawsuit: that copyrighted Disney content was all over the open internet, so is it really Google’s fault it ended up baked into the AI?

Content silos for AI

The generative AI boom has treated copyright as a mere suggestion as companies race to gobble up training data and remix it as “new” content. A cavalcade of companies, including The New York Times and Getty Images, have sued over how their material has been used and replicated by AI. Disney itself threatened a lawsuit against Character.AI earlier this year, leading to the removal of Disney content from the service.

Google isn’t Character.AI, though. It’s probably no coincidence that Disney is challenging Google at the same time it is entering into a content deal with OpenAI. Disney has invested $1 billion in the AI firm and agreed to a three-year licensing deal that officially brings Disney characters to OpenAI’s Sora video app. The specifics of that arrangement are still subject to negotiations.


Google is reviving wearable gesture controls, but only for the Pixel Watch 4

Long ago, Google’s Android-powered wearables had hands-free navigation gestures. Those fell by the wayside as Google shredded its wearable strategy over and over, but gestures are back, baby. The Pixel Watch 4 is getting an update that adds several gestures, one of which is straight out of the Apple playbook.

When the update hits devices, the Pixel Watch 4 will gain a double pinch gesture like the Apple Watch has. By tapping your thumb and forefinger together, you can answer or end calls, pause timers, and more. The watch will also prompt you at times when you can use the tap gesture to control things.

In previous incarnations of Google-powered watches, a quick wrist turn gesture would scroll through lists. In the new gesture system, that motion dismisses what’s on the screen. For example, you can clear a notification from the screen or dismiss an incoming call. Pixel Watch 4 owners will also enjoy this one when the update arrives.

And what about the Pixel Watch 3? That device won’t get gesture support at this time. There’s no reason it shouldn’t get the same features as the latest wearable, though. The Pixel Watch 3 has a very similar Arm chip, and it has the same orientation sensors as the new watch. The Pixel Watch 4’s main innovation is a revamped case design that allows for repairability, which was not supported on the Pixel Watch 3 and earlier.


The NPU in your phone keeps improving—why isn’t that making AI better?


Shrinking AI for your phone is no simple matter.

The NPU in your phone might not be doing very much. Credit: Aurich Lawson | Getty Images

Almost every technological innovation of the past several years has been laser-focused on one thing: generative AI. Many of these supposedly revolutionary systems run on big, expensive servers in a data center somewhere, but at the same time, chipmakers are crowing about the power of the neural processing units (NPUs) they have brought to consumer devices. Every few months, it’s the same thing: This new NPU is 30 or 40 percent faster than the last one. That’s supposed to let you do something important, but no one really gets around to explaining what that is.

Experts envision a future of secure, personal AI tools with on-device intelligence, but does that match the reality of the AI boom? AI on the “edge” sounds great, but almost every AI tool of consequence is running in the cloud. So what’s that chip in your phone even doing?

What is an NPU?

Companies launching a new product often get bogged down in superlatives and vague marketing speak, so they do a poor job of explaining technical details. It’s not clear to most people buying a phone why they need the hardware to run AI workloads, and the supposed benefits are largely theoretical.

Many of today’s flagship consumer processors are systems-on-a-chip (SoCs) because they incorporate multiple computing elements—like CPU cores, GPUs, and imaging controllers—on a single piece of silicon. This is true of mobile parts like Qualcomm’s Snapdragon or Google’s Tensor, as well as PC components like the Intel Core Ultra.

The NPU is a newer addition to chips, but it didn’t just appear one day—there’s a lineage that brought us here. NPUs are good at what they do because they emphasize parallel computing, something that’s also important in other SoC components.

Qualcomm devotes significant time during its new product unveilings to talk about its Hexagon NPUs. Keen observers may recall that this branding has been reused from the company’s line of digital signal processors (DSPs), and there’s a good reason for that.

“Our journey into AI processing started probably 15 or 20 years ago, wherein our first anchor point was looking at signal processing,” said Vinesh Sukumar, Qualcomm’s head of AI products. DSPs have an architecture similar to NPUs, but they’re much simpler, with a focus on processing audio (e.g., speech recognition) and modem signals.

The NPU is one of multiple components in modern SoCs. Credit: Qualcomm

As the collection of technologies we refer to as “artificial intelligence” developed, engineers began using DSPs for more types of parallel processing, like long short-term memory (LSTM). Sukumar explained that as the industry became enamored with convolutional neural networks (CNNs), the technology underlying applications like computer vision, DSPs became focused on matrix functions, which are essential to generative AI processing as well.

While there is an architectural lineage here, it’s not quite right to say NPUs are just fancy DSPs. “If you talk about DSPs in the general term of the word, yes, [an NPU] is a digital signal processor,” said MediaTek Assistant Vice President Mark Odani. “But it’s all come a long way and it’s a lot more optimized for parallelism, how the transformers work, and holding huge numbers of parameters for processing.”

Despite being so prominent in new chips, NPUs are not strictly necessary for running AI workloads on the “edge,” a term that differentiates local AI processing from cloud-based systems. CPUs are slower than NPUs but can handle some light workloads without using as much power. Meanwhile, GPUs can often chew through more data than an NPU, but they use more power to do it. And there are times you may want to do that, according to Qualcomm’s Sukumar. For example, running AI workloads while a game is running could favor the GPU.

“Here, your measurement of success is that you cannot drop your frame rate while maintaining the spatial resolution, the dynamic range of the pixel, and also being able to provide AI recommendations for the player within that space,” says Sukumar. “In this kind of use case, it actually makes sense to run that in the graphics engine, because then you don’t have to keep shifting between the graphics and a domain-specific AI engine like an NPU.”

Livin’ on the edge is hard

Unfortunately, the NPUs in many devices sit idle (and not just during gaming). The mix of local versus cloud AI tools favors the latter because that’s the natural habitat of LLMs. AI models are trained and fine-tuned on powerful servers, and that’s where they run best.

A server-based AI, like the full-fat versions of Gemini and ChatGPT, is not resource-constrained like a model running on your phone’s NPU. Consider the latest version of Google’s on-device Gemini Nano model, which has a context window of 32k tokens. That is a more than 2x improvement over the last version. However, the cloud-based Gemini models have context windows of up to 1 million tokens, meaning they can process much larger volumes of data.

Both cloud-based and edge AI hardware will continue getting better, but the balance may not shift in the NPU’s favor. “The cloud will always have more compute resources versus a mobile device,” said Google’s Shenaz Zack, senior product manager on the Pixel team.

“If you want the most accurate models or the most brute force models, that all has to be done in the cloud,” Odani said. “But what we’re finding is that, in a lot of the use cases where there’s just summarizing some text or you’re talking to your voice assistant, a lot of those things can fit within three billion parameters.”

Squeezing AI models onto a phone or laptop involves some compromise—for example, by reducing the parameters included in the model. Odani explained that cloud-based models run hundreds of billions of parameters, the weights that determine how a model processes input tokens to generate outputs. You can’t run anything like that on a consumer device right now, so developers have to vastly scale back the size of models for the edge. Odani says MediaTek’s latest ninth-generation NPU can handle about 3 billion parameters—a difference of roughly two orders of magnitude.

The amount of memory available in a phone or laptop is also a limiting factor, so mobile-optimized AI models are usually quantized. That means the model’s estimation of the next token runs with less precision. Let’s say you want to run one of the larger open models, like Llama or Gemma 7B, on your device. The de facto standard is FP16, known as half-precision. At that level, a model with 7 billion parameters will lock up 13 or 14 gigabytes of memory. Stepping down to FP4 (quarter-precision) brings the size of the model in memory to a few gigs.
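
As a rough illustration of that arithmetic, here is a minimal sketch; it counts only the weights and ignores activations, KV cache, and runtime overhead, so real usage runs higher.

```python
# Approximate memory needed just to hold a model's weights at different precisions.
# Ignores activations, KV cache, and runtime overhead; real usage is higher.
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    bytes_total = num_params * bits_per_param / 8
    return bytes_total / 1024**3

params = 7e9  # a 7-billion-parameter model like Llama or Gemma 7B

for label, bits in [("FP16 (half precision)", 16),
                    ("FP8", 8),
                    ("FP4 (quarter precision)", 4)]:
    print(f"{label}: ~{weight_memory_gb(params, bits):.1f} GB")

# FP16 lands around 13 GB, while FP4 drops the same model to roughly 3.3 GB,
# the "few gigs" range that fits on a phone.
```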

“When you compress to, let’s say, between three and four gigabytes, it’s a sweet spot for integration into memory constrained form factors like a smartphone,” Sukumar said. “And there’s been a lot of investment in the ecosystem and at Qualcomm to look at various ways of compressing the models without losing quality.”

It’s difficult to create a generalized AI with these limitations for mobile devices, but computers—and especially smartphones—are a wellspring of data that can be pumped into models to generate supposedly helpful outputs. That’s why most edge AI is geared toward specific, narrow use cases, like analyzing screenshots or suggesting calendar appointments. Google says its latest Pixel phones run more than 100 AI models, both generative and traditional.

Even AI skeptics can recognize that the landscape is changing quickly. In the time it takes to shrink and optimize AI models for a phone or laptop, new cloud models may appear that make that work obsolete. This is also why third-party developers have been slow to utilize NPU processing in apps. They either have to plug into an existing on-device model, which involves restrictions and rapidly moving development targets, or deploy their own custom models. Neither is a great option currently.

A matter of trust

If the cloud is faster and easier, why go to the trouble of optimizing for the edge and burning more power with an NPU? Leaning on the cloud means accepting a level of dependence and trust in the people operating AI data centers that may not always be appropriate.

“We always start off with user privacy as an element,” said Qualcomm’s Sukumar. He explained that the best inference is not general in nature—it’s personalized based on the user’s interests and what’s happening in their lives. Fine-tuning models to deliver that experience calls for personal data, and it’s safer to store and process that data locally.

Even when companies say the right things about privacy in their cloud services, those promises are far from guarantees. The helpful, friendly vibe of general chatbots also encourages people to divulge a lot of personal information, and if that assistant is running in the cloud, your data is there as well. OpenAI’s copyright fight with The New York Times could lead to millions of private chats being handed over to the publisher. The explosive growth and uncertain regulatory framework of gen AI make it hard to know what’s going to happen to your data.

“People are using a lot of these generative AI assistants like a therapist,” Odani said. “And you don’t know one day if all this stuff is going to come out on the Internet.”

Not everyone is so concerned. Zack claims Google has built “the world’s most secure cloud infrastructure,” allowing it to process data where it delivers the best results. Zack uses Video Boost and Pixel Studio as examples of this approach, noting that Google’s cloud is the only way to make these experiences fast and high-quality. The company recently announced its new Private AI Compute system, which it claims is just as safe as local AI.

Even if that’s true, the edge has other advantages—edge AI is just more reliable than a cloud service. “On-device is fast,” Odani said. “Sometimes I’m talking to ChatGPT and my Wi-Fi goes out or whatever, and it skips a beat.”

The services hosting cloud-based AI models aren’t just a single website—the Internet of today is massively interdependent, with content delivery networks, DNS providers, hosting, and other services that could degrade or shut down your favorite AI in the event of a glitch. When Cloudflare suffered a self-inflicted outage recently, ChatGPT users were annoyed to find their trusty chatbot was unavailable. Local AI features don’t have that drawback.

Cloud dominance

Everyone seems to agree that a hybrid approach is necessary to deliver truly useful AI features (assuming those exist), sending data to more powerful cloud services when necessary—Google, Apple, and every other phone maker does this. But the pursuit of a seamless experience can also obscure what’s happening with your data. More often than not, the AI features on your phone aren’t running in a secure, local way, even when the device has the hardware to do that.

Take, for example, the new OnePlus 15. This phone has Qualcomm’s brand-new Snapdragon 8 Elite Gen 5, which has an NPU that is 37 percent faster than the last one, for whatever that’s worth. Even with all that on-device AI might, OnePlus is heavily reliant on the cloud to analyze your personal data. Features like AI Writer and the AI Recorder connect to the company’s servers for processing, a system OnePlus assures us is totally safe and private.

Similarly, Motorola released a new line of foldable Razr phones over the summer that are loaded with AI features from multiple providers. These phones can summarize your notifications using AI, but you might be surprised how much of it happens in the cloud unless you read the terms and conditions. If you buy the Razr Ultra, that summarization happens on your phone. However, the cheaper models with less RAM and NPU power use cloud services to process your notifications. Again, Motorola says this system is secure, but a more secure option would have been to re-optimize the model for its cheaper phones.

Even when an OEM focuses on using the NPU hardware, the results can be lacking. Look at Google’s Daily Hub and Samsung’s Now Brief. These features are supposed to chew through all the data on your phone and generate useful recommendations and actions, but they rarely do anything aside from showing calendar events. In fact, Google has temporarily removed Daily Hub from Pixels because the feature did so little, and Google is a pioneer in local AI with Gemini Nano. Google has actually moved some parts of its mobile AI experience from local to cloud-based processing in recent months.

Those “brute force” models appear to be winning, and it doesn’t hurt that companies also get more data when you interact with their private computing cloud services.

Maybe take what you can get?

There’s plenty of interest in local AI, but so far, that hasn’t translated to an AI revolution in your pocket. Most of the AI advances we’ve seen so far depend on the ever-increasing scale of cloud systems and the generalized models that run there. Industry experts say that extensive work is happening behind the scenes to shrink AI models to work on phones and laptops, but it will take time for that to make an impact.

In the meantime, local AI processing is out there in a limited way. Google still makes use of the Tensor NPU to handle sensitive data for features like Magic Cue, and Samsung really makes the most of Qualcomm’s AI-focused chipsets. While Now Brief is of questionable utility, Samsung is cognizant of how reliance on the cloud may impact users, offering a toggle in the system settings that restricts AI processing to run only on the device. This limits the number of available AI features, and others don’t work as well, but you’ll know none of your personal data is being shared. No one else offers this option on a smartphone.

Samsung offers an easy toggle to disable cloud AI and run all workloads on-device. Credit: Ryan Whitwam

Samsung spokesperson Elise Sembach said the company’s AI efforts are grounded in enhancing experiences while maintaining user control. “The on-device processing toggle in One UI reflects this approach. It gives users the option to process AI tasks locally for faster performance, added privacy, and reliability even without a network connection,” Sembach said.

Interest in edge AI might be a good thing even if you don’t use it. Planning for this AI-rich future can encourage device makers to invest in better hardware—like more memory to run all those theoretical AI models.

“We definitely recommend our partners increase their RAM capacity,” said Sukumar. Indeed, Google, Samsung, and others have boosted memory capacity in large part to support on-device AI. Even if the cloud is winning, we’ll take the extra RAM.



Google announces second Android 16 release of 2025 is heading to Pixels

Material 3 Expressive came to Pixels earlier this year but not as part of the first Android 16 upgrade—Google’s relationship with Android versions is complicated these days. Regardless, Material 3 will get a bit more cohesive on Pixels following this update. Google will now apply Material theming to all icons on your device automatically, replacing legacy colored icons with theme-friendly versions. Similarly, dark mode will be supported across more apps, even if the devs haven’t added support. Google is also adding a few more icon shape options if you want to jazz up your home screen.


By way of functional changes, Google has added a more intuitive way of managing parental controls—you can just use the managed device directly. Parents will be able to set a PIN code for accessing features like screen time, app usage, and so on without grabbing a different device. If you want more options or control, the new on-device settings will also help you configure Google Family Link.

Android for all

No Pixel? No problem. Google has also bundled up a collection of app and system updates that will begin rolling out today for all supported Android devices.

Chrome for Android is getting an update with tab pinning, mirroring a feature that has been in the desktop version since time immemorial. The Google Messages app is also taking care of some low-hanging fruit. When you’re invited to a group chat by a new number, the app will display group information and a one-tap option to leave and report the chat as spam.

Google’s official dialer app comes on Pixels, but it’s also in the Play Store for anyone to download. If you and your contacts use Google Dialer, you’ll soon be able to place calls with a “reason.” You can flag a call as “Urgent” to indicate to the recipient that they shouldn’t send you to voicemail. The urgent label will also remain in the call history if they miss the call.


Netflix quietly drops support for casting to most TVs

Have you been trying to cast Stranger Things from your phone, only to find that your TV isn’t cooperating? It’s not the TV—Netflix is to blame for this one, and it’s intentional. The streaming app has recently updated its support for Google Cast to disable the feature in most situations. You’ll need to pay for one of the company’s more expensive plans, and even then, Netflix will only cast to older TVs and streaming dongles.

The Google Cast system began appearing in apps shortly after the original Chromecast launched in 2013. Since then, Netflix users have been able to start video streams on TVs and streaming boxes from the mobile app. That was vital for streaming targets without their own remote or on-screen interface, but times change.

Today, Google has moved beyond the remote-free Chromecast experience, and most TVs have their own standalone Netflix apps. Netflix itself is also allergic to anything that would allow people to share passwords or watch in a new place. Over the last couple of weeks, Netflix updated its Android app to remove most casting options, mirroring its 2019 decision to kill Apple AirPlay support.

The company’s support site (spotted by Android Authority) now clarifies that casting is only supported in a narrow set of circumstances. First, you need to be paying for one of the ad-free service tiers, which start at $18 per month. Those on the $8 ad-supported plan won’t have casting support.

Even then, casting only appears for devices without a remote, like the earlier generations of Google Chromecasts, as well as some older TVs with Cast built in. For example, anyone still rocking Google’s 3rd Gen Chromecast from 2018 can cast video in Netflix, but those with the 2020 Chromecast dongle (which has a remote and a full Android OS) will have to use the TV app. Essentially, anything running Android/Google TV or a smart TV with a full Netflix app will force you to log in before you can watch anything.


Google tells employees it must double capacity every 6 months to meet AI demand

While AI bubble talk fills the air these days, with fears of overinvestment that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to fill their AI needs.

During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. The comments offer a rare look at what Google executives are telling employees internally. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale “the next 1000x in 4-5 years.”
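
Those two figures are consistent with each other; here is a minimal sketch of the compounding, illustrative only and starting from an arbitrary 1x baseline.

```python
# Compounding illustration: serving capacity that doubles every six months.
# The 1x baseline is arbitrary; only the multiplier matters.
capacity = 1.0
for period in range(1, 11):  # ten six-month periods = five years
    capacity *= 2
    print(f"After {period * 0.5:.1f} years: {capacity:,.0f}x baseline capacity")

# Nine doublings (4.5 years) give 512x and ten (5 years) give 1,024x,
# roughly the "next 1000x in 4-5 years" from Vahdat's slide.
```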

While a thousandfold increase in compute capacity sounds ambitious by itself, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”

It’s unclear how much of this “demand” Google mentioned represents organic user interest in AI capabilities versus the company integrating AI features into existing services like Search, Gmail, and Workspace. But whether users are using the features voluntarily or not, Google isn’t the only tech company struggling to keep up with a growing base of customers using AI services.

Major tech companies are in a race to build out data centers. Google competitor OpenAI is planning to build six massive data centers across the US through its Stargate partnership project with SoftBank and Oracle, committing over $400 billion in the next three years to reach nearly 7 gigawatts of capacity. The company faces similar constraints serving its 800 million weekly ChatGPT users, with even paid subscribers regularly hitting usage limits for features like video synthesis and simulated reasoning models.

“The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Vahdat said at the meeting, according to CNBC’s viewing of the presentation. The infrastructure executive explained that Google’s challenge goes beyond simply outspending competitors. “We’re going to spend a lot,” he said, but noted the real objective is building infrastructure that is “more reliable, more performant and more scalable than what’s available anywhere else.”


Google’s latest swing at Chromebook gaming is a free year of GeForce Now

Earlier this year, Google announced the end of its efforts to get Steam running on Chromebooks, but it’s not done trying to make these low-power laptops into gaming machines. Google has teamed up with Nvidia to offer a version of GeForce Now cloud streaming that is perplexingly limited in some ways and generous in others. Starting today, anyone who buys a Chromebook will get a free year of a new service called GeForce Now Fast Pass. There are no ads and less waiting for server slots, but you don’t get to play very long.

Back before Google killed its Stadia game streaming service, it would often throw in a few months of the Pro subscription with Chromebook purchases. In the absence of its own gaming platform, Google has turned to Nvidia to level up Chromebook gaming. GeForce Now (GFN), which has been around in one form or another for more than a decade, allows you to render games on a remote server and stream the video output to the device of your choice. It works on computers, phones, TVs, and yes, Chromebooks.

The new Chromebook feature is not the same GeForce Now subscription you can get from Nvidia. Fast Pass, which is exclusive to Chromebooks, includes a mishmash of limits and bonuses that make it a pretty strange offering. Fast Pass is based on the free tier of GeForce Now, but users will get priority access to server slots. So no queuing for five or 10 minutes to start playing. It also lacks the ads that Nvidia’s standard free tier includes. Fast Pass also uses the more powerful RTX servers, which are otherwise limited to the $10-per-month ($100 yearly) Performance tier.


“Hey Google, did you upgrade your AI in my Android Auto?”

Now it’s “Hey Google,” not “OK Google,” that triggers the assistant, which had started to feel a little left behind in natural language processing and conversational AI compared to other OEM systems—sometimes even AAOS-based ones—that use solutions like those from Cerence running on their own private clouds.

Gemini

Going forward, “Hey Google” will fire up Gemini, as long as it’s running on the Android device being cast to the car’s infotainment system. In fact, we learned of its impending, unspecified arrival a couple of weeks ago, but today is the day, according to Google.

Now, instead of needing to know precise trigger phrases to get Google Assistant to do what you’d like it to do, Gemini should be able to answer the kinds of normal speech questions that so often frustrate me when I try them with Siri or most built-in in-car AI helpers.

For example, you could ask if there are any well-rated restaurants along a particular route, with the ability to have Gemini drill down into search results like menu options. (Whether these are as trustworthy as the AI suggestions that confront us when we use Google as a search engine will need to be determined.) Sending messages is supposedly easier, too, with translation into 40 different languages should the need arise, and it sounds like making playlists and even finding info on one’s destination have both become more powerful.

There’s even the dreaded intrusion of productivity, as Gemini can access your Gmail, calendars, tasks, and so on.

Google Gemini is coming to all Polestar models. Credit: Polestar

Gemini is also making its way into built-in Google automotive environments. Just yesterday, Polestar announced that Gemini will replace Google Assistant in all its models, from the entry-level Polestar 2 through to soon-to-arrive machines like the Polestar 5 four-door grand tourer.

“Our collaboration with Google is a great example of how we continue to evolve the digital experience in our cars. Gemini brings the next generation of AI voice interaction into the car, and we’re excited to give a first look at how it will enhance the driving experience,” said Polestar’s head of UI/UX, Sid Odedra.


Google’s new Nano Banana Pro uses Gemini 3 power to generate more realistic AI images

Detecting less sloppy slop

Google is not just blowing smoke—the new image generator is much better. Its grasp of the world and the nuance of language is apparent, producing much more realistic results. Even before this, AI images were getting so good that it could be hard to spot them at a glance. Gone are the days when you could just count fingers to identify AI. Google is making an effort to help identify AI content, though.

Images generated with Nano Banana Pro continue to have embedded SynthID watermarks that Google’s tools can detect. The company is also adding more C2PA metadata to further label AI images. The Gemini app is part of this effort, too. Starting now, you can upload an image and ask something like “Is this AI?” The app won’t detect just any old AI image, but it will tell you if it’s a product of Google AI by checking for SynthID.

Gemini can now detect its own AI images.

At the same time, Google is making it slightly harder for people to know an image was generated with AI. Operating with the knowledge that professionals may want to generate images with Nano Banana Pro, Google has removed the visible watermark from images for AI Ultra subscribers. These images still have SynthID, but only the lower tiers have the Gemini twinkle in the corner.

While everyone can access the new Nano Banana Pro today, AI Ultra subscribers will enjoy the highest usage limits. Gemini Pro users will get a bit less access, and free users will get the lowest limits before being booted down to the non-pro version.


Celebrated game developer Rebecca Heineman dies at age 62

From champion to advocate

During her later career, Heineman served as a mentor and advisor to many, never shy about celebrating her past as a game developer during the golden age of the home computer.

Her mentoring skills became doubly important when she publicly came out as transgender in 2003. She became a vocal advocate for LGBTQ+ representation in gaming and served on the board of directors for GLAAD. Earlier this year, she received the Gayming Icon Award from Gayming Magazine.

Andrew Borman, who serves as director of digital preservation at The Strong National Museum of Play in Rochester, New York, told Ars Technica that her influence extended well beyond electronic entertainment. “Her legacy goes beyond her groundbreaking work in video games,” he told Ars. “She was a fierce advocate for LGBTQ rights and an inspiration to people around the world, including myself.”

The front cover of Dragon Wars on the Commodore 64, released in 1989. Credit: MobyGames

In the Netflix documentary series High Score, Heineman explained her early connection to video games. “It allowed me to be myself,” she said. “It allowed me to play as female.”

“I think her legend grew as she got older, in part because of her openness and approachability,” journalist Ernie Smith told Ars. “As the culture of gaming grew into an online culture of people ready to dig into the past, she remained a part of it in a big way, where her war stories helped fill in the lore about gaming’s formative eras.”

Celebrated to the end

Heineman was diagnosed with adenocarcinoma in October 2025 after experiencing shortness of breath at the PAX game convention. After diagnostic testing, doctors found cancer in her lungs and liver. That same month, she launched a GoFundMe campaign to help with medical costs. The campaign quickly surpassed its $75,000 goal, raising more than $157,000 from fans, friends, and industry colleagues.


Google CEO: If an AI bubble pops, no one is getting out clean

Market concerns and Google’s position

Alphabet’s recent market performance has been driven by investor confidence in the company’s ability to compete with OpenAI’s ChatGPT, as well as its development of specialized chips for AI that can compete with Nvidia’s. Nvidia recently reached a world-first $5 trillion valuation due to making GPUs that can accelerate the matrix math at the heart of AI computations.

Despite acknowledging that no company would be immune to a potential AI bubble burst, Pichai argued that Google’s unique position gives it an advantage. He told the BBC that the company owns what he called a “full stack” of technologies, from chips to YouTube data to models and frontier science research. This integrated approach, he suggested, would help the company weather any market turbulence better than competitors.

Pichai also told the BBC that people should not “blindly trust” everything AI tools output. The company has faced repeated concerns about the accuracy of some of its AI models. Pichai said that while AI tools are helpful “if you want to creatively write something,” people “have to learn to use these tools for what they’re good at and not blindly trust everything they say.”

In the BBC interview, the Google boss also addressed the “immense” energy needs of AI, acknowledging that the intensive energy requirements of expanding AI ventures have caused slippage on Alphabet’s climate targets. However, Pichai insisted that the company still wants to achieve net zero by 2030 through investments in new energy technologies. “The rate at which we were hoping to make progress will be impacted,” Pichai said, warning that constraining an economy based on energy “will have consequences.”

Even with the warnings about a potential AI bubble, Pichai did not miss his chance to promote the technology, albeit with a hint of danger regarding its widespread impact. Pichai described AI as “the most profound technology” humankind has worked on.

“We will have to work through societal disruptions,” he said, adding that the technology would “create new opportunities” and “evolve and transition certain jobs.” He said people who adapt to AI tools “will do better” in their professions, whatever field they work in.
