
Everything tech giants will hate about the EU’s new AI rules

The code also details expectations for AI companies to respect paywalls, as well as robots.txt instructions restricting crawling, which could help confront a growing problem of AI crawlers hammering websites. It “encourages” online search giants to embrace a solution that Cloudflare is currently pushing: allowing content creators to protect copyrights by restricting AI crawling without impacting search indexing.
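
For illustration, restricting AI crawling without affecting search indexing typically comes down to per-crawler robots.txt rules. The sketch below uses real crawler tokens (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl) while leaving an ordinary search crawler allowed; it is an example of the general approach, not language from the EU code itself.

```
# Illustrative robots.txt: block AI-training crawlers while search indexing continues
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Ordinary search crawlers remain allowed
User-agent: Googlebot
Allow: /
```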

Additionally, companies are asked to disclose total energy consumption for both training and inference, allowing the EU to detect environmental concerns while companies race forward with AI innovation.

More substantially, the code’s safety guidance provides for additional monitoring of other harms. It makes recommendations to detect and avoid “serious incidents” with new AI models, which could include cybersecurity breaches, disruptions of critical infrastructure, “serious harm to a person’s health (mental and/or physical),” or “a death of a person.” It stipulates timelines of between five and 10 days for reporting serious incidents to the EU’s AI Office. And it requires companies to track all events, provide an “adequate level” of cybersecurity protection, prevent jailbreaking as best they can, and justify “any failures or circumventions of systemic risk mitigations.”

Ars reached out to tech companies for immediate reactions to the new rules. OpenAI, Meta, and Microsoft declined to comment. A Google spokesperson confirmed that the company is reviewing the code, which still must be approved by the European Commission and EU member states amid expected industry pushback.

“Europeans should have access to first-rate, secure AI models when they become available, and an environment that promotes innovation and investment,” Google’s spokesperson said. “We look forward to reviewing the code and sharing our views alongside other model providers and many others.”

These rules are just one part of the AI Act, which will start taking effect in a staggered approach over the next year or more, the NYT reported. Breaching the AI Act could result in AI models being yanked off the market or fines “of as much as 7 percent of a company’s annual sales or 3 percent for the companies developing advanced AI models,” Bloomberg noted.


Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X

Musk has also apparently used the Grok chatbots as an automated extension of his trolling habits, showing examples of Grok 3 producing “based” opinions that criticized the media in February. In May, Grok on X began repeatedly generating outputs about white genocide in South Africa, and most recently, we’ve seen the Grok Nazi output debacle. It’s admittedly difficult to take Grok seriously as a technical product when it’s linked to so many examples of unserious and capricious applications of the technology.

Still, the technical achievements xAI claims for various Grok 4 models seem to stand out. The Arc Prize organization reported that Grok 4 Thinking (with simulated reasoning enabled) achieved a score of 15.9 percent on its ARC-AGI-2 test, which the organization says nearly doubles the previous commercial best and tops the current Kaggle competition leader.

“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.

Premium pricing amid controversy

During Wednesday’s livestream, xAI also announced plans for an AI coding model in August, a multi-modal agent in September, and a video generation model in October. The company also plans to make Grok 4 available in Tesla vehicles next week, further expanding Musk’s AI assistant across his various companies.

Despite the recent turmoil, xAI has moved forward with an aggressive pricing strategy for “premium” versions of Grok. Alongside Grok 4 and Grok 4 Heavy, xAI launched “SuperGrok Heavy,” a $300-per-month subscription that makes it the most expensive AI service among major providers. Subscribers will get early access to Grok 4 Heavy and upcoming features.

Whether users will pay xAI’s premium pricing remains to be seen, particularly given the AI assistant’s tendency to periodically generate politically motivated outputs. These incidents represent fundamental management and implementation issues that, so far, no fancy-looking test-taking benchmarks have been able to capture.


Gemini can now turn your photos into video with Veo 3

Google’s Veo 3 videos have propagated across the Internet since the model’s debut in May, blurring the line between truth and fiction. Now, it’s getting even easier to create these AI videos. The Gemini app is gaining photo-to-video generation, allowing you to upload a photo and turn it into a video. You don’t have to pay anything extra for these Veo 3 videos, but the feature is only available to subscribers of Google’s Pro and Ultra AI plans.

When Veo 3 launched, it could conjure up a video based only on your description, complete with speech, music, and background audio. This has made Google’s new AI videos staggeringly realistic—it’s actually getting hard to identify AI videos at a glance. Using a reference photo makes it easier to get the look you want without tediously describing every aspect. This was an option in Google’s Flow AI tool for filmmakers, but now it’s in the Gemini app and web interface.

To create a video from a photo, select “Video” from the Gemini toolbar. Once the feature is available to you, add your image and a prompt, which can include audio and dialogue. Generating the video takes several minutes—the process requires a lot of computation, which is why video output is still quite limited.


Gmail’s new subscription management is here to declutter your inbox

With decades of online life behind us, many people are using years-old email addresses. Those inboxes are probably packed with subscriptions you’ve picked up over the years, and you probably don’t need all of them. Gmail is going to make it easier to manage them with a new feature rolling out on mobile and web. Google’s existing unsubscribe prompts are evolving into a one-stop shop for all your subscription management needs, a feature that has been in the works for a weirdly long time.

The pitch is simple: The aptly named “Manage subscriptions” feature will list all the emails to which you are currently subscribed—newsletters, promotions, updates for products you no longer use, and more. With a tap, you’ll never see them again. In Gmail, the feature will be accessible from the navigation drawer, a UI element that is increasingly rare in Google’s apps but remains essential for managing inboxes and labels. Down near the bottom, you’ll soon see the new Manage subscriptions item.

The page will list all email subscriptions with an unsubscribe button. If you’re not sure about letting a newsletter or deal alert remain, you can select the subscription to see all recent messages from that sender. As long as a sender supports Google’s recommended one-click unsubscribe, all you have to do is tap the Unsubscribe button, and you’ll be done. Some senders will redirect you to a website to unsubscribe, but Gmail has a “Block instead” option in those cases.
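
Google’s recommended one-click unsubscribe generally refers to the RFC 8058 mechanism, in which the sender includes headers like the sketch below (the address and URL are placeholders); Gmail can then complete the unsubscribe with a single POST request instead of sending you off to a website.

```
List-Unsubscribe: <mailto:unsubscribe@example.com>, <https://example.com/unsub/abc123>
List-Unsubscribe-Post: List-Unsubscribe=One-Click
```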


Unless users take action, Android will let Gemini access third-party apps

Starting today, Google is implementing a change that will enable its Gemini AI engine to interact with third-party apps, such as WhatsApp, even when users previously configured their devices to block such interactions. Users who don’t want their previous settings to be overridden may have to take action.

An email Google sent recently informing users of the change linked to a notification page that said that “human reviewers (including service providers) read, annotate, and process” the data Gemini accesses. The email provides no useful guidance for preventing the changes from taking effect. The email said users can block the apps that Gemini interacts with, but even in those cases, data is stored for 72 hours.

An email Google recently sent to Android users.

No, Google, it’s not good news

The email never explains how users can fully extricate Gemini from their Android devices and seems to contradict itself on how or whether this is even possible. At one point, it says the changes “will automatically start rolling out” today and will give Gemini access to apps such as WhatsApp, Messages, and Phone “whether your Gemini apps activity is on or off.” A few sentences later, the email says, “If you have already turned these features off, they will remain off.” Nowhere in the email or the support pages it links to are Android users informed how to remove Gemini integrations completely.

Compounding the confusion, one of the linked support pages requires users to open a separate support page to learn how to control their Gemini app settings. Following the directions from a computer browser, I accessed the settings of my account’s Gemini app. I was reassured to see the text indicating no activity has been stored because I have Gemini turned off. Then again, the page also said that Gemini was “not saving activity beyond 72 hours.”


TikTok is being flooded with racist AI videos generated by Google’s Veo 3

The release of Google’s Veo 3 video generator in May represented a disconcerting leap in AI video quality. While many of the viral AI videos we’ve seen are harmless fun, the model’s pixel-perfect output can also be used for nefarious purposes. On TikTok, which may or may not be banned in the coming months, users have noticed a surge of racist AI videos, courtesy of Google’s Veo 3.

According to a report from MediaMatters, numerous TikTok accounts have started posting AI-generated videos that use racist and antisemitic tropes in recent weeks. Most of the AI vitriol is aimed at Black people, depicting them as “the usual suspects” in crimes, absent parents, and monkeys with an affinity for watermelon. The content also targets immigrants and Jewish people. The videos top out at eight seconds and bear the “Veo” watermark, confirming they came from Google’s leading AI model.

The compilation video below has examples pulled from TikTok since the release of Veo 3, but be warned, it contains racist and antisemitic content. Some of the videos are shocking, which is likely the point—nothing drives engagement on social media like anger and drama. MediaMatters reports that the original posts have numerous comments echoing the stereotypes used in the video.

Hateful AI videos generated by Veo 3 spreading on TikTok.

Google has stressed security when announcing new AI models—we’ve all seen an AI refuse to complete a task that runs afoul of its guardrails. And it’s never fun when you have genuinely harmless intentions, but the system throws a false positive and blocks your output. Google has mostly struck the right balance previously, but it appears that Veo 3 is more compliant. We’ve tested a few simple prompts with Veo 3 and found it easy to reproduce elements of these videos.

Clear but unenforced policies

TikTok’s terms of service ban this kind of content. “We do not allow any hate speech, hateful behavior, or promotion of hateful ideologies. This includes explicit or implicit content that attacks a protected group,” the community guidelines read. Despite this blanket ban on racist caricatures, the hateful Veo 3 videos appear to be spreading unchecked.


Android 16 review: Post-hype


Competent, not captivating

The age of big, exciting Android updates is probably over.

Android 16 is currently only available for Pixel phones. Credit: Ryan Whitwam

Google recently released Android 16, which brings a smattering of new features for Pixel phones, with promises of additional updates down the road. The numbering scheme has not been consistent over the years, and as a result, Android 16 is actually the 36th major release in a lineage that stretches back nearly two decades. In 2008, we didn’t fully understand how smartphones would work, so there was a lot of trial and error. In 2025, the formula has been explored every which way. Today’s smartphones run mature software, and that means less innovation in each yearly release. That trend is exemplified and amplified by Google’s approach to Android 16.

The latest release is perhaps the most humdrum version of the platform yet, but don’t weep for Google. The company has been working toward this goal for years: a world where the average phone buyer doesn’t need to worry about Android version numbers.

A little fun up front

When you install Android 16 on one of Google’s Pixel phones, you may need to check the settings to convince yourself that the update succeeded. Visually, the changes are so minuscule that you’ll only notice them if you’re obsessive about how Android works. For example, Google changed the style of icons in the overview screen and added a few more options to the overview app menus. There are a lot of these minor style tweaks; we expect more when Google releases Material 3 Expressive, but that’s still some way off.

There are some thoughtful UI changes, but again, they’re very minor and you may not even notice them at first. For instance, Google’s predictive back gesture, which allows the previous screen to peek out from behind the currently displayed one, now works with button navigation.

Apps targeting the new API (level 36) will now default to using edge-to-edge rendering, which removes the navigation background to make apps more immersive. Android apps have long neglected larger form factors because Google itself was neglecting those devices. Since the Android 12L release a few years ago, Google has been attempting to right that wrong. Foldable phones have suffered from many of the same issues with app scaling that tablets have, but all big-screen Android devices will soon benefit from adaptive apps. Previously, apps could completely ignore the existence of large screens and render a phone-shaped UI on a large screen.
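
As a rough sketch of what that default means for developers: on older API targets, an app opts into the same behavior explicitly with the AndroidX enableEdgeToEdge() helper, while apps targeting API level 36 get it without the call. The layout resource name below is hypothetical.

```kotlin
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.enableEdgeToEdge

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        // Draw app content behind the status and navigation bars. Apps targeting
        // API level 36 get this by default; older targets opt in with this call.
        enableEdgeToEdge()
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main) // hypothetical layout resource
    }
}
```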

Advanced Protection is a great addition to Android, even if it’s not the most riveting. Credit: Ryan Whitwam

In Android 16, apps will automatically adapt to larger screens, saving you from having to tinker with the forced aspect ratio tools built into Google and Samsung devices. Don’t confuse this with tablet-style interfaces, though. Just because an app fills the screen, it’s no guarantee that it will look good. Most of the apps we’ve run on the Pixel 9 Pro Fold are still using stretched phone interfaces that waste space. Developers need to make adjustments to properly take advantage of larger screens. Will they? That’s yet another aspect of Android 16 that we hope will come later.

Security has been a focus in many recent Android updates. While not the most sexy improvement, the addition of Advanced Protection in Android 16 could keep many people from getting hit with malware, and it makes it harder for government entities to capture your data. This feature blocks insecure 2G connections, websites lacking HTTPS, and exploits over USB. It disables sideloading of apps, too, which might make some users wary. However, if you know someone who isn’t tech savvy, you should encourage them to enable Advanced Protection when (and if) they get access to Android 16. This is a great feature that Google should have added years ago.

The changes to notifications will probably make the biggest impact on your daily life. Whether you’re using Android or iOS, notification spam is getting out of hand. Every app seems to want our attention, and notifications can really pile up. Android 16 introduces a solid quality-of-life improvement by bundling notifications from each app. While notification bundles were an option before, they were primarily used for messaging, and not all developers bothered. Now, the notification shade is less overwhelming, and it’s easy to expand each block to triage individual items.
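
The grouping developers could already opt into looks roughly like the sketch below (the channel ID and group key are made up for illustration); Android 16’s change is that an app’s notifications get bundled even when the developer never sets a group.

```kotlin
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

// Assumes a notification channel with ID "updates" already exists and that the
// POST_NOTIFICATIONS permission has been granted on Android 13 and later.
fun postGroupedUpdate(context: Context, id: Int, text: String) {
    val notification = NotificationCompat.Builder(context, "updates")
        .setSmallIcon(android.R.drawable.stat_notify_sync)
        .setContentTitle("Update")
        .setContentText(text)
        .setGroup("com.example.UPDATES") // notifications sharing this key collapse into one bundle
        .build()
    NotificationManagerCompat.from(context).notify(id, notification)
}
```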

Android 16’s progress notifications are partially implemented in the first release. Credit: Ryan Whitwam

Google has also added a new category of notifications that can show progress, similar to a feature on the iPhone. The full notification will include a live updating bar that can tell you exactly when your Uber will show up, for example. These notifications will come first to delivery and rideshare apps, but none of them are working yet. You can get a preview of how these notifications will work with the Android 16 easter egg, which sends a little spaceship rocketing toward a distant planet.

The progress notifications will also have a large status bar chip with basic information visible at all times. Tapping on it will expand the full notification. However, this is also not implemented in the first release of Android 16. Yes, this is a recurring theme with Google’s new OS.

More fun still to come

You may notice that none of the things we’ve discussed in Android 16 are exactly riveting—better security features and cleaner notifications are nice to have, but this is hardly a groundbreaking update. It might have been more exciting were it not for the revamped release schedule. This release isn’t even the whole of Android 16: There will be a second Android 16 update later in the year, and some of the most interesting features aren’t arriving as part of either one.

Traditionally, Google has released new versions of Android in the fall, around the time new Pixel phones arrive. Android 15, for example, began its rollout in October 2024. Just eight months later, we’re on to Android 16. This is the first cycle in which Google will split its new version into two updates. Going forward, the bigger update will arrive in Q2, and the smaller one, which includes API and feature tweaks, will come at the end of the year.

Google has said the stylish but divisive Material 3 Expressive UI and the desktop windowing feature will come later. They’re currently in testing with the latest beta for Android 16 QPR1, which will become a Pixel Drop in September. It’s easy to imagine that with a single fall Android 16 release, both of these changes would have been included.

In the coming months, we expect to see some Google apps updated with support for Material 3, but the changes will be minimal unless you’re using a phone that runs Google’s Android theme. For all intents and purposes, that means a Pixel. Motorola has traditionally hewed closely to Google’s interface, while Samsung, OnePlus, and others forged their own paths. But even Moto has been diverging more as it focuses on AI. It’s possible that Google’s big UI shakeup will only affect Pixel users.

As for desktop windowing, that may have limited impact, too. On-device windowing will only be supported on tablets—even tablet-style foldables will be left out. We’ve asked Google to explain this decision and will report back if we get more details. Non-tablet devices will be able to project a desktop-style interface on an external display via USB video-out, but the feature won’t be available universally. Google tells Ars that it’s up to OEMs to support this feature. So even a phone that has video-out over USB may not have desktop windowing. Again, Pixels may be the best (or only) way to get Android’s new desktop mode.

The end of version numbers

There really isn’t much more to say about Android 16 as it currently exists. This update isn’t flashy, but it lays important groundwork for the future. The addition of Material 3 Expressive will add some of the gravitas we expect from major version bumps, but it’s important to remember that this is just Google’s take on Android—other companies have their own software interests, mostly revolving around AI. We’ll have to wait to see what Samsung, OnePlus, and others do with the first Android 16 release. The underlying software has been released in the Android Open Source Project (AOSP), but it will be a few months before other OEMs have updates.

In some ways, boring updates are exactly what Google has long wanted from Android. Consider the era when Android updates were undeniably exciting—a time when the addition of screenshots could be a headlining feature (Android 4.0 Ice Cream Sandwich) or when Google finally figured out how to keep runaway apps from killing your battery (Android 6.0 Marshmallow). But there was a problem with these big tentpole updates: Not everyone got them, and they were salty about it.

During the era of rapid software improvement, it took the better part of a year (or longer!) for a company like Samsung or LG to deploy new Android updates. Google would announce a laundry list of cool features, but only the tiny sliver of people using Nexus (and later Pixel) phones would see them. By the time a Samsung Galaxy user had the new version, it was time for Google to release another yearly update.

This “fragmentation” issue was a huge headache for Google, leading it to implement numerous platform changes over the years to take the pressure off its partners and app developers. There were simple tweaks like adding important apps, including Maps and the keyboard (later Gboard), to the Play Store so they could be updated regularly. On the technical side, initiatives like Project Mainline made the platform more modular so features could be added and improved outside of major updates. Google has also meticulously moved features into Play Services, which can deliver system-level changes without an over-the-air update (although there are drawbacks to that).

Android version numbers hardly matter anymore—it’s just Android. Credit: Ryan Whitwam

The overarching story of Android has been a retreat from monolithic updates, and that means there’s less to get excited about when a new version appears. Rather than releasing a big update rife with changes, Google has shown a preference for rolling out features via the Play Store and Play Services to the entire Android ecosystem. Experiences like Play Protect anti-malware, Google Play Games, Google Cast, Find My Device, COVID-19 exposure alerts, Quick Share, and myriad more were released to almost all Google-certified Android devices without system updates.

As more features arrive in dribs and drabs via Play Services and Pixel Drops, the numbered version changes are less important. People used to complain about missing out on the tentpole updates, but it’s quieter when big features are decoupled from version numbers. And that’s where we are—Android 15 or Android 16—the number is no longer important. You won’t notice a real difference, but the upshot is that most phones get new features faster than they once did. That was the cost to fix fragmentation.

Boring updates aren’t just a function of rearranging features. Even if all the promised upgrades were here now, Android 16 would still barely move the needle. Phones are now mature products with established usage paradigms. It’s been almost 20 years since the age of touchscreen smartphones began, and we’ve figured out how these things should work. It’s not just Android updates settling into prosaic predictability—Apple is running low on paradigm shifts, too. The release of iOS 26 will add some minor improvements to a few apps, and the theme is getting more transparent with the controversial “Liquid Glass” UI. And that’s it.

Until there’s a marked change in form factors or capability, these flat glass slabs will look and work more or less as they do now (with a lot more AI slop, whether you like it or not). If you have a recent non-Pixel Android device, you’ll probably get Android 16 in the coming months, but it won’t change the way you use your phone.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.


Android phones could soon warn you of “Stingrays” snooping on your communications

Smartphones contain a treasure trove of personal data, which makes them a worthwhile target for hackers. However, law enforcement is not above snooping on cell phones, and their tactics are usually much harder to detect. Cell site simulators, often called Stingrays, can trick your phone into revealing private communications, but a change in Android 16 could allow phones to detect this spying.

Law enforcement organizations have massively expanded the use of Stingray devices because almost every person of interest today uses a cell phone at some point. These devices essentially trick phones into connecting to them like a normal cell tower, allowing the operator to track that device’s location. The fake towers can also shift a phone to less secure wireless technology to intercept calls and messages. There’s no indication this is happening on the suspect’s end, which is another reason these machines have become so popular with police.

However, while surveilling a target, Stingrays can collect data from other nearby phones. It’s not unreasonable to expect a modicum of privacy if you happen to be in the same general area, but sometimes police use Stingrays simply because they can. There’s also evidence that cell simulators have been deployed by mysterious groups outside law enforcement. In short, it’s a problem. Google has had plans to address this security issue for more than a year, but a lack of hardware support has slowed progress. Finally, in the coming months, we will see the first phones capable of detecting this malicious activity, and Android 16 is ready for it.


Google begins rolling out AI search in YouTube

Over the past year, Google has transformed its web search experience with AI, driving toward a zero-click experience. Now, the same AI focus is coming to YouTube, and Premium subscribers can get a preview of the new search regime. Select searches on the video platform will now produce an AI-generated results carousel with a collection of relevant videos. Even if you don’t pay for YouTube, AI is still coming for you with an expansion of Google’s video chatbot.

Google says the new AI search feature, which appears at the top of the results page, will include multiple videos, along with an AI summary of each. You can tap the video thumbnails to begin playing them right from the carousel. The summary is intended to extract the information most relevant to your search query, so you may not even have to watch the videos.

The AI results carousel is only a test right now, and it’s limited to YouTube Premium subscribers. If you’re paying for Premium, you can enable the feature on YouTube’s experimental page. While the feature is entirely opt-in, that probably won’t last long. Like AI Overviews in search, this feature will take precedence over organic search results and get people interacting with Google’s AI, and that’s the driving force behind most of the company’s decisions lately.

It’s not hard to see where this feature could lead because we’ve seen the same thing play out in general web search. By putting AI-generated content at the top of search results, Google will reduce the number of videos people click to watch. The carousel gives you the relevant parts of the video along with a summary, but the video page is another tap away. Rather than opening videos, commenting, subscribing, and otherwise interacting with creators, some users will just peruse the AI carousel. That could make it harder for channels to grow and earn revenue from their content—the same content Google will feed into Gemini to generate the AI carousel.


Google’s spotty Find Hub network could get better thanks to a small setup tweak

Bluetooth trackers have existed for quite a while, but Apple made them worthwhile when it enlisted every iPhone to support AirTags. The tracking was so reliable that Apple had to add anti-stalking features. And although there are just as many Android phones out there, Google’s version of mobile device tracking, known as Find Hub, has been comparatively spotty. Now, Google is about to offer users a choice that could fix Bluetooth tracking on Android.

According to a report from Android Authority, Google is preparing to add a new screen to the Android setup process. This change, integrated with Play Services version 25.24, has yet to roll out widely, but it will allow anyone setting up an Android phone to choose a more effective method of tracking that will bolster Google’s network. The Play Services changelog describes it as: “You can now configure Find Hub when setting up your phone, allowing the device to be located remotely.”

Trackable devices like AirTags and earbuds work by broadcasting a Bluetooth LE identifier, which phones in the area can see. Our always-online smartphones then report the approximate location of that signal, and with enough reports, the owner can pinpoint the tag. Perhaps wary of the privacy implications, Google rolled out its Find Hub network (previously Find My Device) with harsh restrictions on where device finding would work.
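
Conceptually, the finder network works like the sketch below: any nearby Android phone that hears a tag’s Bluetooth LE advertisement can pair that identifier with its own location and report it. The real reporting happens inside Play Services rather than app code, and the snippet assumes Bluetooth scan and location permissions are already granted.

```kotlin
import android.annotation.SuppressLint
import android.bluetooth.BluetoothManager
import android.bluetooth.le.ScanCallback
import android.bluetooth.le.ScanResult
import android.content.Context

// Conceptual sketch only: real finder networks do this inside Play Services,
// not in app code. BLUETOOTH_SCAN and location permissions are assumed granted.
@SuppressLint("MissingPermission")
fun listenForNearbyTags(context: Context) {
    val manager = context.getSystemService(Context.BLUETOOTH_SERVICE) as BluetoothManager
    manager.adapter.bluetoothLeScanner.startScan(object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            // A finder network would pair this advertisement with the phone's own
            // location fix and upload an encrypted report for the tag's owner.
            println("Heard tag ${result.device.address} at RSSI ${result.rssi}")
        }
    })
}
```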

By default, Find Hub only works in busy areas where multiple phones can contribute to narrowing down the location. That’s suboptimal if you actually want to find things. The setting to allow finding in all areas is buried several menus deep in the system settings where no one is going to see it. Currently, the settings for Find Hub are under the security menu of your phone, but the path may vary from one device to the next. For Pixels, it’s under Security > Device finders > Find Hub > Find your offline devices. Yeah, not exactly discoverable.


Gemini CLI is a free, open source coding agent that brings AI to your terminal

Some developers prefer to live in the command line interface (CLI), eschewing the flashy graphics and file management features of IDEs. Google’s latest AI tool is for those terminal lovers. It’s called Gemini CLI, and it shares a lot with Gemini Code Assist, but it works in your terminal environment instead of integrating with an IDE. And perhaps best of all, it’s free and open source.

Gemini CLI plugs into Gemini 2.5 Pro, Google’s most advanced model for coding and simulated reasoning. It can create and modify code for you right inside the terminal, but you can also call on other Google models to generate images or videos without leaving the security of your terminal cocoon. It’s essentially vibe coding from the command line.

This tool is fully open source, so developers can inspect the code and help to improve it. The openness extends to how you configure the AI agent. It supports Model Context Protocol (MCP) and bundled extensions, allowing you to customize your terminal as you see fit. You can even include your own system prompts—Gemini CLI relies on GEMINI.md files, which you can use to tweak the model for different tasks or teams.
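
Since GEMINI.md is a plain Markdown file of instructions that the CLI reads as context, a minimal sketch might look like the following; the conventions listed here are purely illustrative.

```markdown
# GEMINI.md (illustrative)
- Write new code in TypeScript with strict mode enabled.
- Keep functions small and add a unit test alongside every change.
- Never hardcode credentials; read them from environment variables.
```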

Now that Gemini 2.5 Pro is generally available, Gemini Code Assist has been upgraded to use the same technology as Gemini CLI. Code Assist integrates with IDEs like VS Code for those times when you need a more feature-rich environment. The new agent mode in Code Assist allows you to give the AI more general instructions, like “Add support for dark mode to my application” or “Build my project and fix any errors.”


Google’s new robotics AI can run without the cloud and still tie your shoes

We sometimes call chatbots like Gemini and ChatGPT “robots,” but generative AI is also playing a growing role in real, physical robots. After announcing Gemini Robotics earlier this year, Google DeepMind has now revealed a new on-device VLA (vision language action) model to control robots. Unlike the previous release, there’s no cloud component, allowing robots to operate with full autonomy.

Carolina Parada, head of robotics at Google DeepMind, says this approach to AI robotics could make robots more reliable in challenging situations. This is also the first version of Google’s robotics model that developers can tune for their specific uses.

Robotics is a unique problem for AI because the robot not only exists in the physical world but also changes its environment. Whether you’re having it move blocks around or tie your shoes, it’s hard to predict every eventuality a robot might encounter. The traditional approach of training a robot on actions with reinforcement learning was very slow, but generative AI allows for much greater generalization.

“It’s drawing from Gemini’s multimodal world understanding in order to do a completely new task,” explains Carolina Parada. “What that enables is in that same way Gemini can produce text, write poetry, just summarize an article, you can also write code, and you can also generate images. It also can generate robot actions.”

General robots, no cloud needed

In the previous Gemini Robotics release (which is still the “best” version of Google’s robotics tech), the platforms ran a hybrid system with a small model on the robot and a larger one running in the cloud. You’ve probably watched chatbots “think” for measurable seconds as they generate an output, but robots need to react quickly. If you tell the robot to pick up and move an object, you don’t want it to pause while each step is generated. The local model allows quick adaptation, while the server-based model can help with complex reasoning tasks. Google DeepMind is now unleashing the local model as a standalone VLA, and it’s surprisingly robust.
