AI

gemini-2.5-is-leaving-preview-just-in-time-for-google’s-new-$250-ai-subscription

Gemini 2.5 is leaving preview just in time for Google’s new $250 AI subscription

Deep Think graphs I/O

Deep Think is more capable of complex math and coding. Credit: Ryan Whitwam

Both 2.5 models have adjustable thinking budgets when used in Vertex AI and via the API, and now the models will also include summaries of the “thinking” process for each output. This makes a little progress toward making generative AI less overwhelmingly expensive to run. Gemini 2.5 Pro will also appear in some of Google’s dev products, including Gemini Code Assist.

Gemini Live, previously known as Project Astra, started to appear on mobile devices over the last few months. Initially, you needed to have a Gemini subscription or a Pixel phone to access Gemini Live, but now it’s coming to all Android and iOS devices immediately. Google demoed a future “agentic” capability in the Gemini app that can actually control your phone, search the web for files, open apps, and make calls. It’s perhaps a little aspirational, just like the Astra demo from last year. The version of Gemini Live we got wasn’t as good, but as a glimpse of the future, it was impressive.

There are also some developments in Chrome, and you guessed it, it’s getting Gemini. It’s not dissimilar from what you get in Edge with Copilot. There’s a little Gemini icon in the corner of the browser, which you can click to access Google’s chatbot. You can ask it about the pages you’re browsing, have it summarize those pages, and ask follow-up questions.

Google AI Ultra is ultra-expensive

Since launching Gemini, Google has only had a single $20 monthly plan for AI features. That plan granted you access to the Pro models and early versions of Google’s upcoming AI. At I/O, Google is catching up to AI firms like OpenAI, which have offered sky-high AI plans. Google’s new Google AI Ultra plan will cost $250 per month, more than the $200 plan for ChatGPT Pro.

Gemini 2.5 is leaving preview just in time for Google’s new $250 AI subscription Read More »

adobe-to-automatically-move-subscribers-to-pricier,-ai-focused-tier-in-june

Adobe to automatically move subscribers to pricier, AI-focused tier in June

Subscribers to Adobe’s multi-app subscription plan, Creative Cloud All Apps, will be charged more starting on June 17 to accommodate for new generative AI features.

Adobe’s announcement, spotted by MakeUseOf, says the change will affect North American subscribers to the Creative Cloud All Apps plan, which Adobe is renaming Creative Cloud Pro. Starting on June 17, Adobe will automatically renew Creative Cloud All Apps subscribers into the Creative Cloud Pro subscription, which will be $70 per month for individuals who commit to an annual plan, up from $60 for Creative Cloud All Apps. Annual plans for students and teachers plans are moving from $35/month to $40/month, and annual teams pricing will go from $90/month to $100/month. Monthly (non-annual) subscriptions are also increasing, from $90 to $105.

Further, in an apparent attempt to push generative AI users to more expensive subscriptions, as of June 17, Adobe will give single-app subscribers just 25 generative AI credits instead of the current 500.

Current subscribers can opt to move down to a new multi-app plan called Creative Cloud Standard, which is $55/month for annual subscribers and $82.49/month for monthly subscribers. However, this tier limits access to mobile and web app features, and subscribers can’t use premium generative AI features.

Creative Cloud Standard won’t be available to new subscribers, meaning the only option for new customers who need access to many Adobe apps will be the new AI-heavy Creative Cloud Pro plan.

Adobe’s announcement explained the higher prices by saying that the subscription tier “includes all the core applications and new AI capabilities that power the way people create today, and its price reflects that innovation, as well as our ongoing commitment to deliver the future of creative tools.”

Like today’s Creative Cloud All Apps plan, Creative Cloud Pro will include Photoshop, Illustrator, Premiere Pro, Lightroom, and access to Adobe’s web and mobile apps. AI features include unlimited usage of image and vector features in Adobe apps, including Generative Fill in Photoshop, Generative Remove in Lightroom, Generative Shape Fill in Illustrator, and 4K video generation with Generative Extend in Premiere Pro.

Adobe to automatically move subscribers to pricier, AI-focused tier in June Read More »

chicago-sun-times-prints-summer-reading-list-full-of-fake-books

Chicago Sun-Times prints summer reading list full of fake books

Photo of the Chicago Sun-Times

Photo of the Chicago Sun-Times “Summer reading list for 2025” supplement. Credit: Rachel King / Bluesky

Novelist Rachael King initially called attention to the error on Bluesky Tuesday morning. “The Chicago Sun-Times obviously gets ChatGPT to write a ‘summer reads’ feature almost entirely made up of real authors but completely fake books. What are we coming to?” King wrote.

So far, community reaction to the list has been largely negative online, but others have expressed sympathy for the publication. Freelance journalist Joshua J. Friedman noted on Bluesky that the reading list was “part of a ~60-page summer supplement” published on May 18, suggesting it might be “transparent filler” possibly created by “the lone freelancer apparently saddled with producing it.”

The staffing connection

The reading list appeared in a 64-page supplement called “Heat Index,” which was a promotional section not specific to Chicago. Buscaglia told 404 Media the content was meant to be “generic and national” and would be inserted into newspapers around the country. “We never get a list of where things ran,” he said.

The publication error comes two months after the Chicago Sun-Times lost 20 percent of its staff through a buyout program. In March, the newspaper’s nonprofit owner, Chicago Public Media, announced that 30 Sun-Times employees—including 23 from the newsroom—had accepted buyout offers amid financial struggles.

A March report on the buyout in the Sun-Times described the staff reduction as “the most drastic the oft-imperiled Sun-Times has faced in several years.” The departures included columnists, editorial writers, and editors with decades of experience.

Melissa Bell, CEO of Chicago Public Media, stated at the time that the exits would save the company $4.2 million annually. The company offered buyouts as it prepared for an expected expiration of grant support at the end of 2026.

Even with those pressures in the media, one Reddit user expressed disapproval of the apparent use of AI in the newspaper, even in a supplement that might not have been produced by staff. “As a subscriber, I am livid! What is the point of subscribing to a hard copy paper if they are just going to include AI slop too!?” wrote Reddit user xxxlovelit, who shared the reading list. “The Sun Times needs to answer for this, and there should be a reporter fired.”

This article was updated on May 20, 2025 at 11: 02 AM to include information on Marco Buscaglia from 404 Media.

Chicago Sun-Times prints summer reading list full of fake books Read More »

zero-click-searches:-google’s-ai-tools-are-the-culmination-of-its-hubris

Zero-click searches: Google’s AI tools are the culmination of its hubris


Google’s first year with AI search was a wild ride. It will get wilder.

Google is constantly making changes to its search rankings, but not all updates are equal. Every few months, the company bundles up changes into a larger “core update.” These updates make rapid and profound changes to search, so website operators watch them closely.

The March 2024 update was unique. It was one of Google’s largest core updates ever, and it took over a month to fully roll out. Nothing has felt quite the same since. Whether the update was good or bad depends on who you ask—and maybe who you are.

It’s common for websites to see traffic changes after a core update, but the impact of the March 2024 update marked a seismic shift. Google says the update aimed to address spam and AI-generated content in a meaningful way. Still, many publishers say they saw clicks on legitimate sites evaporate, while others have had to cope with unprecedented volatility in their traffic. Because Google owns almost the entire search market, changes in its algorithm can move the Internet itself.

In hindsight, the March 2024 update looks like the first major Google algorithm update for the AI era. Not only did it (supposedly) veer away from ranking AI-authored content online, but it also laid the groundwork for Google’s ambitious—and often annoying—desire to fuse AI with search.

A year ago, this ambition surfaced with AI Overviews, but now the company is taking an even more audacious route, layering in a new chat-based answer service called “AI Mode.” Both of these technologies do at least two things: They aim to keep you on Google properties longer, and they remix publisher content without always giving prominent citations.

Smaller publishers appear to have borne the brunt of the changes caused by these updates. “Google got all this flak for crushing the small publishers, and it’s true that when they make these changes, they do crush a lot of publishers,” says Jim Yu, CEO of enterprise SEO platform BrightEdge. Yu explains that Google is the only search engine likely to surface niche content in the first place, and there are bound to be changes to sites at the fringes during a major core update.

Google’s own view on the impact of the March 2024 update is unsurprisingly positive. The company said it was hoping to reduce the appearance of unhelpful content in its search engine results pages (SERPs) by 40 percent. After the update, the company claimed an actual reduction of closer to 45 percent. But does it feel like Google’s results have improved by that much? Most people don’t think so.

What causes this disconnect? According to Michael King, founder of SEO firm iPullRank, we’re not speaking the same language as Google. “Google’s internal success metrics differ from user perceptions,” he says. “Google measures user satisfaction through quantifiable metrics, while external observers rely on subjective experiences.”

Google evaluates algorithm changes with various tests, including human search quality testers and running A/B tests on live searches. But more than anything else, success is about the total number of searches (5 trillion of them per year). Google often makes this number a centerpiece of its business updates to show investors that it can still grow.

However, using search quantity to measure quality has obvious problems. For instance, more engagement with a search engine might mean that quality has decreased, so people try new queries (e.g., the old trick of adding “Reddit” to the end of your search string). In other words, people could be searching more because they don’t like the results.

Jim Yu suggests that Google is moving fast and breaking things, but it may not be as bad as we think. “I think they rolled things out faster because they had to move a lot faster than they’ve historically had to move, and it ends up that they do make some real mistakes,” says Yu. “[Google] is held to a higher standard, but by and large, I think their search quality is improving.”

According to King, Google’s current search behavior still favors big names, but other sites have started to see a rebound. “Larger brands are performing better in the top three positions, while lesser-known websites have gained ground in positions 4 through 10,” says King. “Although some websites have indeed lost traffic due to reduced organic visibility, the bigger issue seems tied to increased usage of AI Overviews”—and now the launch of AI Mode.

Yes, the specter of AI hangs over every SERP. The unhelpful vibe many people now get from Google searches, regardless of the internal metrics the company may use, may come from a fundamental shift in how Google surfaces information in the age of AI.

The AI Overview hangover

In 2025, you can’t talk about Google’s changes to search without acknowledging the AI-generated elephant in the room. As it wrapped up that hefty core update in March 2024, Google also announced a major expansion of AI in search, moving the “Search Generative Experience” out of labs and onto Google.com. The feature was dubbed “AI Overviews.”

The AI Overview box has been a fixture on Google’s search results page ever since its debut a year ago. The feature uses the same foundational AI model as Google’s Gemini chatbot to formulate answers to your search queries by ingesting the top 100 (!) search results. It sits at the top of the page, pushing so-called blue link content even farther down below the ads and knowledge graph content. It doesn’t launch on every query, and sometimes it answers questions you didn’t ask—or even hallucinates a totally wrong answer.

And it’s not without some irony that Google’s laudable decision to de-rank synthetic AI slop comes at the same time that Google heavily promotes its own AI-generated content right at the top of SERPs.

AI Overview on phone

AI Overviews appear right at the top of many search results.

Credit: Google

AI Overviews appear right at the top of many search results. Credit: Google

What is Google getting for all of this AI work? More eyeballs, it would seem. “AI is driving more engagement than ever before on Google,” says Yu. BrightEdge data shows that impressions on Google are up nearly 50 percent since AI Overviews launched. Many of the opinions you hear about AI Overviews online are strongly negative, but that doesn’t mean people aren’t paying attention to the feature. In its Q1 2025 earnings report, Google announced that AI Overviews is being “used” by 1.5 billion people every month. (Since you can’t easily opt in or opt out of AI Overviews, this “usage” claim should be taken with a grain of salt.)

Interestingly, the impact of AI Overviews has varied across the web. In October 2024, Google was so pleased with AI Overviews that it expanded them to appear in more queries. And as AI crept into more queries, publishers saw a corresponding traffic drop. Yu estimates this drop to be around 30 percent on average for those with high AI query coverage. For searches that are less supported in AI Overviews—things like restaurants and financial services—the traffic change has been negligible. And there are always exceptions. Yu suggests that some large businesses with high AI Overview query coverage have seen much smaller drops in traffic because they rank extremely well as both AI citations and organic results.

Lower traffic isn’t the end of the world for some businesses. Last May, AI Overviews were largely absent from B2B queries, but that turned around in a big way in recent months. BrightEdge estimates that 70 percent of B2B searches now have AI answers, which has reduced traffic for many companies. Yu doesn’t think it’s all bad, though. “People don’t click through as much—they engage a lot more on the AI—but when they do click, the conversion rate for the business goes up,” Yu says. In theory, serious buyers click and window shoppers don’t.

But the Internet is not a giant mall that exists only for shoppers. It is, first and foremost, a place to share and find information, and AI Overviews have hit some purveyors of information quite hard. At launch, AI Overviews were heavily focused on “What is” and “How to” queries. Such “service content” is a staple of bloggers and big media alike, and these types of publishers aren’t looking for sales conversions—it’s traffic that matters. And they’re getting less of it because AI Overviews “helpfully” repackages and remixes their content, eliminating the need to click through to the site. Some publishers are righteously indignant, asking how it’s fair for Google to remix content it doesn’t own, and to do so without compensation.

But Google’s intentions don’t end with AI Overviews. Last week, the company started an expanded public test of so-called “AI Mode,” right from the front page. AI Mode doesn’t even bother with those blue links. It’s a chatbot experience that, at present, tries to answer your query without clearly citing sources inline. (On some occasions, it will mention Reddit or Wikipedia.) On the right side of the screen, Google provides a little box with three sites linked, which you can expand to see more options. To the end user, it’s utterly unclear if those are “sources,” “recommendations,” or “partner deals.”

Perhaps more surprisingly, in our testing, not a single AI Mode “sites box” listed a site that ranked on the first page for the same query on a regular search. That is, the links in AI Mode for “best foods to eat for a cold” don’t overlap at all with the SERP for the same query in Google Search. In fairness, AI Mode is very new, and its behavior will undoubtedly change. But the direction the company is headed seems clear.

Google’s real goal is to keep you on Google or other Alphabet properties. In 2019, Rand Fishkin noticed that Google’s evolution from search engine to walled garden was at a tipping point. At that time—and for the first time—more than half of Google searches resulted in zero click-throughs to other sites. But data did show large numbers of clicks to Google’s own properties, like YouTube and Maps. If Google doesn’t intend to deliver a “zero-click” search experience, you wouldn’t know it from historical performance data or the new features the company develops.

You also wouldn’t know it from the way AI Overviews work. They do cite some of the sources used in building each output, and data suggests people click on those links. But are the citations accurate? Is every source used for constructing an AI Overview cited? We don’t really know, as Google is famously opaque about how its search works. We do know that Google uses a customized version of Gemini to support AI Overviews and that Gemini has been trained on billions and billions of webpages.

When AI Overviews do cite a source, it’s not clear how those sources came to be the ones cited. There’s good reason to be suspicious here: AI Overview’s output is not great, as witnessed by the numerous hallucinations we all know and love (telling people to eat rocks, for instance). The only thing we know for sure is that Google isn’t transparent about any of this.

No signs of slowing

Despite all of that, Google is not slowing down on AI in search. More recent core updates have only solidified this new arrangement with an ever-increasing number of AI-answered queries. The company appears OK with its current accuracy problems, or at the very least, it’s comfortable enough to push out AI updates anyway. Google appears to have been caught entirely off guard by the public launch of ChatGPT, and it’s now utilizing its search dominance to play catch-up.

To make matters even more dicey, Google isn’t even trying to address the biggest issue in all this: The company’s quest for zero-click search harms the very content creators upon which the company has built its empire.

For its part, Google has been celebrating its AI developments, insisting that content producers don’t know what’s best for them, refuting any concerns with comments about search volume increases and ever-more-complex search query strings. The changes must be working!

Google has been building toward this moment for years. The company started with a list of 10 blue links and nothing else, but little by little, it pushed the links down the page and added more content that keeps people in the Google ecosystem. Way back in 2007, Google added Universal Search, which allowed it to insert content from Google Maps, YouTube, and other services. In 2009, Rich Snippets began displaying more data from search results on SERPs. In 2012, the Knowledge Graph began extracting data from search results to display answers in the search results. Each change kept people on Google longer and reduced click-throughs, all the while pushing the search results down the page.

AI Overviews, and especially AI Mode, are the logical outcome of Google’s yearslong transformation from an indexer of information to an insular web portal built on scraping content from around the web. Earlier in Google’s evolution, the implicit agreement was that websites would allow Google to crawl their pages in exchange for sending them traffic. That relationship has become strained as the company has kept more traffic for itself, reducing click-throughs to websites even as search volume continues to increase. And locking Google out isn’t a realistic option when the company controls almost the entire search market.

Even when Google has taken a friendlier approach, business concerns could get in the way. During the search antitrust trial, documents showed that Google initially intended to let sites opt out of being used for AI training for its search-based AI features—but these sites would still be included in search results. The company ultimately canned that idea, leaving site operators with the Pyrrhic choice of participating in the AI “revolution” or becoming invisible on the web. Google now competes with, rather than supports, the open web.

When many of us look at Google’s search results today, the vibe feels off. Maybe it’s the AI, maybe it’s Google’s algorithm, or maybe the Internet just isn’t what it once was. Whatever the cause, the shift toward zero-click search that began more than a decade ago was made clear by the March 2024 core update, and it has only accelerated with the launch of AI Mode. Even businesses that have escaped major traffic drops from AI Overviews could soon find that Google’s AI-only search can get much more overbearing.

The AI slop will continue until morale improves.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Zero-click searches: Google’s AI tools are the culmination of its hubris Read More »

labor-dispute-erupts-over-ai-voiced-darth-vader-in-fortnite

Labor dispute erupts over AI-voiced Darth Vader in Fortnite

For voice actors who previously portrayed Darth Vader in video games, the Fortnite feature starkly illustrates how AI voice synthesis could reshape their profession. While James Earl Jones created the iconic voice for films, at least 54 voice actors have performed as Vader in various media games over the years when Jones wasn’t available—work that could vanish if AI replicas become the industry standard.

The union strikes back

SAG-AFTRA’s labor complaint (which can be read online here) doesn’t focus on the AI feature’s technical problems or on permission from the Jones estate, which explicitly authorized the use of a synthesized version of his voice for the character in Fortnite. The late actor, who died in 2024, had signed over his Darth Vader voice rights before his death.

Instead, the union’s grievance centers on labor rights and collective bargaining. In the NLRB filing, SAG-AFTRA alleges that Llama Productions “failed and refused to bargain in good faith with the union by making unilateral changes to terms and conditions of employment, without providing notice to the union or the opportunity to bargain, by utilizing AI-generated voices to replace bargaining unit work on the Interactive Program Fortnite.”

The action comes amid SAG-AFTRA’s ongoing interactive media strike, which began in July 2024 after negotiations with video game producers stalled primarily over AI protections. The strike continues, with more than 100 games signing interim agreements, while others, including those from major publishers like Epic, remain in dispute.

Labor dispute erupts over AI-voiced Darth Vader in Fortnite Read More »

new-orleans-called-out-for-sketchiest-use-of-facial-recognition-yet-in-the-us

New Orleans called out for sketchiest use of facial recognition yet in the US

According to police records submitted to the city council, the network “only proved useful in a single case.” Investigating the tension between these claims, the Post suggested we may never know how many suspects were misidentified or what steps police took to ensure responsible use of the controversial live feeds.

In the US, New Orleans stands out for taking a step further than law enforcement in other regions by using live feeds from facial recognition cameras to make immediate arrests, the Post noted. The Security Industry Association told the Post that four states—Maryland, Montana, Vermont, and Virginia—and 19 cities nationwide “explicitly bar” the practice.

Lagarde told the Post that police cannot “directly” search for suspects on the camera network or add suspects to the watchlist in real time. Reese Harper, an NOPD spokesperson, told the Post that his department “does not own, rely on, manage, or condone the use by members of the department of any artificial intelligence systems associated with the vast network of Project Nola crime cameras.”

In a federally mandated 2023 audit, New Orleans police complained that complying with the ordinance took too long and “often” resulted in no matches. That could mean the tech is flawed, or it could be a sign that the process was working as intended to prevent wrongful arrests.

The Post noted that in total, “at least eight Americans have been wrongfully arrested due to facial recognition,” as both police and AI software rushing arrests are prone to making mistakes.

“By adopting this system–in secret, without safeguards, and at tremendous threat to our privacy and security–the City of New Orleans has crossed a thick red line,” Wessler said. “This is the stuff of authoritarian surveillance states and has no place in American policing.”

Project Nola did not immediately respond to Ars’ request to comment.

New Orleans called out for sketchiest use of facial recognition yet in the US Read More »

spotify-caught-hosting-hundreds-of-fake-podcasts-that-advertise-selling-drugs

Spotify caught hosting hundreds of fake podcasts that advertise selling drugs

This week, Spotify rushed to remove hundreds of obviously fake podcasts found to be marketing prescription drugs in violation of Spotify’s policies and, likely, federal law.

On Thursday, Business Insider (BI) reported that Spotify removed 200 podcasts advertising the sale of opioids and other drugs, but that wasn’t the end of the scandal. Today, CNN revealed that it easily uncovered dozens more fake podcasts peddling drugs.

Some of the podcasts may have raised a red flag for a human moderator—with titles like “My Adderall Store” or “Xtrapharma.com” and episodes titled “Order Codeine Online Safe Pharmacy Louisiana” or “Order Xanax 2 mg Online Big Deal On Christmas Season,” CNN reported.

But Spotify’s auto-detection did not flag the fake podcasts for removal. Some of them remained up for months, CNN reported, which could create trouble for the music streamer at a time when the US government is cracking down on illegal drug sales online.

“Multiple teens have died of overdoses from pills bought online,” CNN noted, sparking backlash against tech companies. And Donald Trump’s aggressive tariffs were specifically raised to stop deadly drugs from bombarding the US, which the president declared a national emergency.

BI found that many podcast episodes featured a computerized voice and were under a minute long, while CNN noted some episodes were as short as 10 seconds. Some of them didn’t contain any audio at all, BI reported.

Spotify caught hosting hundreds of fake podcasts that advertise selling drugs Read More »

the-empire-strikes-back-with-f-bombs:-ai-darth-vader-goes-rogue-with-profanity,-slurs

The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs

In that sense, the vulgar Vader situation creates a touchy dilemma for Epic Games and Disney, which likely invested substantially in this high-profile collaboration. While Epic acted swiftly in response, maintaining the feature while preventing further Jedi mind tricks from players presents ongoing technical challenges for interactive AI speech of any kind.

An AI language model like the one used for constructing responses for Vader (Google’s Gemini 2.0 Flash in this case, according to Epic) are fairly easy to trick with exploits like prompt injections and jailbreaks, and that has limited their usefulness in some applications. Imagine a truly ChatGPT-like Siri or Alexa, for example, that could be tricked into saying racist things on behalf of Apple or Amazon.

David Prowse as Darth Vader and Carrie Fisher as Princess Leia filming the original Star Wars. Credit: Sunset Boulevard/Corbis via Getty Images

Beyond language models, the AI voice technology behind the AI Darth Vader voice in Fortnite comes from ElevenLabs’ Flash v2.5 model, trained on examples of speech from James Earl Jones so it can synthesize new speech in the same style.

Previously, Lucasfilm worked with a Ukrainian startup we covered in 2022 on Obi-Wan Kenobi to recreate Darth Vader’s voice performance using a different AI voice model called Respeecher, which isn’t used in Fortnite.

According to Variety, Jones’ family supported the new Fortnite collaboration, stating: “James Earl felt that the voice of Darth Vader was inseparable from the story of Star Wars, and he always wanted fans of all ages to continue to experience it. We hope that this collaboration with Fortnite will allow both longtime fans of Darth Vader and newer generations to share in the enjoyment of this iconic character.”

This article was updated on May 16, 2025 at 4: 25 PM to include information about an email sent out from Epic Games to parents. This Article was updated again on May 17, 2025 at 10: 10 AM to correctly attribute ElevenLabs Flash v2.5 as the source of the Darth Vader audio model in Fortnite. The article previously incorrectly stated that Respeecher had been used for the game.

The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs Read More »

google-to-give-app-devs-access-to-gemini-nano-for-on-device-ai

Google to give app devs access to Gemini Nano for on-device AI

The rapid expansion of generative AI has changed the way Google and other tech giants design products, but most of the AI features you’ve used are running on remote servers with a ton of processing power. Your phone has a lot less power, but Google appears poised to give developers some important new mobile AI tools. At I/O next week, Google will likely announce a new set of APIs to let developers leverage the capabilities of Gemini Nano for on-device AI.

Google has quietly published documentation on big new AI features for developers. According to Android Authority, an update to the ML Kit SDK will add API support for on-device generative AI features via Gemini Nano. It’s built on AI Core, similar to the experimental Edge AI SDK, but it plugs into an existing model with a set of predefined features that should be easy for developers to implement.

Google says ML Kit’s GenAI APIs will enable apps to do summarization, proofreading, rewriting, and image description without sending data to the cloud. However, Gemini Nano doesn’t have as much power as the cloud-based version, so expect some limitations. For example, Google notes that summaries can only have a maximum of three bullet points, and image descriptions will only be available in English. The quality of outputs could also vary based on the version of Gemini Nano on a phone. The standard version (Gemini Nano XS) is about 100MB in size, but Gemini Nano XXS as seen on the Pixel 9a is a quarter of the size. It’s text-only and has a much smaller context window.

Not all versions of Gemini Nano are created equal.

Credit: Ryan Whitwam

Not all versions of Gemini Nano are created equal. Credit: Ryan Whitwam

This move is good for Android in general because ML Kit works on devices outside Google’s Pixel line. While Pixel devices use Gemini Nano extensively, several other phones are already designed to run this model, including the OnePlus 13, Samsung Galaxy S25, and Xiaomi 15. As more phones add support for Google’s AI model, developers will be able to target those devices with generative AI features.

Google to give app devs access to Gemini Nano for on-device AI Read More »

openai-introduces-codex,-its-first-full-fledged-ai-agent-for-coding

OpenAI introduces Codex, its first full-fledged AI agent for coding

We’ve been expecting it for a while, and now it’s here: OpenAI has introduced an agentic coding tool called Codex in research preview. The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.

Codex is a unique interface (not to be confused with the Codex CLI tool introduced by OpenAI last month) that can be reached from the side bar in the ChatGPT web app. Users enter a prompt and then click either “code” to have it begin producing code, or “ask” to have it answer questions and advise.

Whenever it’s given a task, that task is performed in a distinct container that is preloaded with the user’s codebase and is meant to accurately reflect their development environment.

To make Codex more effective, developers can include an “AGENTS.md” file in the repo with custom instructions, for example to contextualize and explain the code base or to communicate standardizations and style practices for the project—kind of a README.md but for AI agents rather than humans.

Codex is built on codex-1, a fine-tuned variation of OpenAI’s o3 reasoning model that was trained using reinforcement learning on a wide range of coding tasks to analyze and generate code, and to iterate through tests along the way.

OpenAI introduces Codex, its first full-fledged AI agent for coding Read More »

xai-says-an-“unauthorized”-prompt-change-caused-grok-to-focus-on-“white-genocide”

xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide”

When analyzing social media posts made by others, Grok is given the somewhat contradictory instructions to “provide truthful and based insights [emphasis added], challenging mainstream narratives if necessary, but remain objective.” Grok is also instructed to incorporate scientific studies and prioritize peer-reviewed data but also to “be critical of sources to avoid bias.”

Grok’s brief “white genocide” obsession highlights just how easy it is to heavily twist an LLM’s “default” behavior with just a few core instructions. Conversational interfaces for LLMs in general are essentially a gnarly hack for systems intended to generate the next likely words to follow strings of input text. Layering a “helpful assistant” faux personality on top of that basic functionality, as most LLMs do in some form, can lead to all sorts of unexpected behaviors without careful additional prompting and design.

The 2,000+ word system prompt for Anthropic’s Claude 3.7, for instance, includes entire paragraphs for how to handle specific situations like counting tasks, “obscure” knowledge topics, and “classic puzzles.” It also includes specific instructions for how to project its own self-image publicly: “Claude engages with questions about its own consciousness, experience, emotions and so on as open philosophical questions, without claiming certainty either way.”

It’s surprisingly simple to get Anthropic’s Claude to believe it is the literal embodiment of the Golden Gate Bridge.

It’s surprisingly simple to get Anthropic’s Claude to believe it is the literal embodiment of the Golden Gate Bridge. Credit: Antrhopic

Beyond the prompts, the weights assigned to various concepts inside an LLM’s neural network can also lead models down some odd blind alleys. Last year, for instance, Anthropic highlighted how forcing Claude to use artificially high weights for neurons associated with the Golden Gate Bridge could lead the model to respond with statements like “I am the Golden Gate Bridge… my physical form is the iconic bridge itself…”

Incidents like Grok’s this week are a good reminder that, despite their compellingly human conversational interfaces, LLMs don’t really “think” or respond to instructions like humans do. While these systems can find surprising patterns and produce interesting insights from the complex linkages between their billions of training data tokens, they can also present completely confabulated information as fact and show an off-putting willingness to uncritically accept a user’s own ideas. Far from being all-knowing oracles, these systems can show biases in their actions that can be much harder to detect than Grok’s recent overt “white genocide” obsession.

xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide” Read More »

fbi-warns-of-ongoing-scam-that-uses-deepfake-audio-to-impersonate-government-officials

FBI warns of ongoing scam that uses deepfake audio to impersonate government officials

The FBI is warning people to be vigilant of an ongoing malicious messaging campaign that uses AI-generated voice audio to impersonate government officials in an attempt to trick recipients into clicking on links that can infect their computers.

“Since April 2025, malicious actors have impersonated senior US officials to target individuals, many of whom are current or former senior US federal or state government officials and their contacts,” Thursday’s advisory from the bureau’s Internet Crime Complaint Center said. “If you receive a message claiming to be from a senior US official, do not assume it is authentic.”

Think you can’t be fooled? Think again.

The campaign’s creators are sending AI-generated voice messages—better known as deepfakes—along with text messages “in an effort to establish rapport before gaining access to personal accounts,” FBI officials said. Deepfakes use AI to mimic the voice and speaking characteristics of a specific individual. The differences between the authentic and simulated speakers are often indistinguishable without trained analysis. Deepfake videos work similarly.

One way to gain access to targets’ devices is for the attacker to ask if the conversation can be continued on a separate messaging platform and then successfully convince the target to click on a malicious link under the guise that it will enable the alternate platform. The advisory provided no additional details about the campaign.

The advisory comes amid a rise in reports of deepfaked audio and sometimes video used in fraud and espionage campaigns. Last year, password manager LastPass warned that it had been targeted in a sophisticated phishing campaign that used a combination of email, text messages, and voice calls to trick targets into divulging their master passwords. One part of the campaign included targeting a LastPass employee with a deepfake audio call that impersonated company CEO Karim Toubba.

In a separate incident last year, a robocall campaign that encouraged New Hampshire Democrats to sit out the coming election used a deepfake of then-President Joe Biden’s voice. A Democratic consultant was later indicted in connection with the calls. The telco that transmitted the spoofed robocalls also agreed to pay a $1 million civil penalty for not authenticating the caller as required by FCC rules.

FBI warns of ongoing scam that uses deepfake audio to impersonate government officials Read More »