Google


Gemini CLI is a free, open source coding agent that brings AI to your terminal

Some developers prefer to live in the command line interface (CLI), eschewing the flashy graphics and file management features of IDEs. Google’s latest AI tool is for those terminal lovers. It’s called Gemini CLI, and it shares a lot with Gemini Code Assist, but it works in your terminal environment instead of integrating with an IDE. And perhaps best of all, it’s free and open source.

Gemini CLI plugs into Gemini 2.5 Pro, Google’s most advanced model for coding and simulated reasoning. It can create and modify code for you right inside the terminal, but you can also call on other Google models to generate images or videos without leaving the security of your terminal cocoon. It’s essentially vibe coding from the command line.

This tool is fully open source, so developers can inspect the code and help to improve it. The openness extends to how you configure the AI agent. It supports Model Context Protocol (MCP) and bundled extensions, allowing you to customize your terminal as you see fit. You can even include your own system prompts—Gemini CLI relies on GEMINI.md files, which you can use to tweak the model for different tasks or teams.
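To make the GEMINI.md mechanism concrete, here is a minimal, hypothetical example of such a file; the project conventions shown are purely illustrative and not part of the tool itself:

```markdown
# Project context for Gemini CLI

## Coding conventions
- Use TypeScript strict mode; prefer named exports.
- Run the test suite before proposing any commit.

## Agent behavior
- Explain proposed changes briefly before applying them.
- Do not refactor files unrelated to the current task.
```

Because these files are plain Markdown, a team can check one into a repository root so every developer's agent picks up the same instructions, or keep per-directory variants for different subprojects.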

Now that Gemini 2.5 Pro is generally available, Gemini Code Assist has been upgraded to use the same technology as Gemini CLI. Code Assist integrates with IDEs like VS Code for those times when you need a more feature-rich environment. The new agent mode in Code Assist allows you to give the AI more general instructions, like “Add support for dark mode to my application” or “Build my project and fix any errors.”



Google’s new robotics AI can run without the cloud and still tie your shoes

We sometimes call chatbots like Gemini and ChatGPT “robots,” but generative AI is also playing a growing role in real, physical robots. After announcing Gemini Robotics earlier this year, Google DeepMind has now revealed a new on-device VLA (vision language action) model to control robots. Unlike the previous release, there’s no cloud component, allowing robots to operate with full autonomy.

Carolina Parada, head of robotics at Google DeepMind, says this approach to AI robotics could make robots more reliable in challenging situations. This is also the first version of Google’s robotics model that developers can tune for their specific uses.

Robotics is a unique problem for AI because, not only does the robot exist in the physical world, but it also changes its environment. Whether you’re having it move blocks around or tie your shoes, it’s hard to predict every eventuality a robot might encounter. The traditional approach of training a robot on actions with reinforcement learning was very slow, but generative AI allows for much greater generalization.

“It’s drawing from Gemini’s multimodal world understanding in order to do a completely new task,” explains Carolina Parada. “What that enables is in that same way Gemini can produce text, write poetry, just summarize an article, you can also write code, and you can also generate images. It also can generate robot actions.”

General robots, no cloud needed

In the previous Gemini Robotics release (which is still the “best” version of Google’s robotics tech), the platforms ran a hybrid system with a small model on the robot and a larger one running in the cloud. You’ve probably watched chatbots “think” for measurable seconds as they generate an output, but robots need to react quickly. If you tell the robot to pick up and move an object, you don’t want it to pause while each step is generated. The local model allows quick adaptation, while the server-based model can help with complex reasoning tasks. Google DeepMind is now unleashing the local model as a standalone VLA, and it’s surprisingly robust.



UK looking to loosen Google’s control of its search engine

Other conduct rules the CMA is considering include requirements on how Google ranks its search results, as well as requirements for Google’s distribution partners, such as Apple, to offer “choice screens” to help consumers switch more easily between search providers.

The CMA said Alphabet-owned Google’s dominance made the cost of search advertising “higher than would be expected” in a more competitive market.

Google on Tuesday slammed the proposals as “broad and unfocused” and said they could threaten the UK’s access to its latest products and services.

Oliver Bethell, Google’s senior director for competition, warned that “punitive regulations” could change how quickly Google launches new products in the UK.

“Proportionate, evidence-based regulation will be essential to preventing the CMA’s road map from becoming a roadblock to growth in the UK,” he added.

Bethell’s warning of the potential impact of any regulations on the wider UK economy comes after the government explicitly mandated the CMA to focus on supporting growth and investment while minimizing uncertainty for businesses.

Google said last year that it planned to invest $1 billion in a huge new data center just outside London.

The CMA’s probe comes after Google lost a pair of historic US antitrust cases over its dominance of search and its lucrative advertising business.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Google rolls out Street View time travel to celebrate 20 years of Google Earth

After 20 years, being able to look at any corner of the planet in Google Earth doesn’t seem that impressive, but it was a revolution in 2005. Google Earth has gone through a lot of changes in that time, and Google has some more lined up for the service’s 20th anniversary. Soon, Google Earth will help you travel back in time with historic Street View integration, and pro users will get some new “AI-driven insights”—of course Google can’t update a product without adding at least a little AI.

Google Earth began its life as a clunky desktop client, but that didn’t stop it from being downloaded 100 million times in the first week. Today, Google Earth is available on the web, in mobile apps, and in the Google Earth Pro desktop app. However you access Earth, you’ll find a blast from the past.

For the service’s 20th anniversary, Google was inspired by a social media trend from last year in which people shared historical images of locations in Google Maps. Now, Google Earth is getting a “time travel” interface where you can see historical Street View images from almost any location.

Historical Street View images will be added to Google Earth. Credit: Google

While this part isn’t new, Google is also using the 20th anniversary as an opportunity to surface its 3D timelapse feature. These animations use satellite data to show how an area has changed from a higher vantage point. They’re just as cool as when they were announced in 2021.



Google brings new Gemini features to Chromebooks, debuts first on-device AI

Google hasn’t been talking about Chromebooks as much since AI became its all-consuming focus, but that’s changing today with a bounty of new AI features for Google-powered laptops. Newer, more powerful Chromebooks will soon have image generation, text summarization, and more built into the OS. There’s also a new Lenovo Chromebook with a few exclusive AI goodies that only work thanks to its overpowered hardware.

If you have a Chromebook Plus device, which requires a modern CPU and at least 8GB of RAM, your machine will soon get a collection of features you may recognize from other Google products. For example, Lens is expanding on Chrome OS, allowing you to long-press the launcher icon to select any area of the screen to perform a visual search. Lens also includes text capture and integration with Google Calendar and Docs.

Gemini models are also playing a role here, according to Google. The Quick Insert key, which debuted last year, is gaining a new visual element. It could already insert photos or emoji with ease, but it can now also help you generate a new image on demand with AI.

Google’s new Chromebook AI features.

Even though Google’s AI features are running in the cloud, the AI additions are limited to this more powerful class of Google-powered laptops. The Help Me Read feature leverages Gemini to summarize long documents and webpages, and it can now distill that data into a more basic form. The new Summarize option can turn dense, technical text into something more readable in a few clicks.

Google has also rolled out a new AI trial for Chromebook Plus devices. If you buy one of these premium Chromebooks, you’ll get a 12-month free trial of the Google AI Pro plan, which gives you 2TB of cloud storage, expanded access to Google’s Gemini Pro model, and NotebookLM Pro. NotebookLM is also getting a place in the Chrome OS shelf.



One of the best Pac-Man games in years is playable on YouTube, of all places

Those who’ve played the excellent Pac-Man Championship Edition series will be familiar with the high-speed vibe here, but Pac-Man Superfast remains focused on the game’s original maze and selection of just four ghosts. That means old-school strategies for grouping ghosts together and running successful patterns through the narrow corridors work in similar ways here. Successfully executing those patterns becomes a tense battle of nerves, though, requiring multiple direction changes every second at the highest speeds. While the game will technically work with swipe controls on a smartphone or tablet, high-level play really requires the precision of a keyboard via a desktop/laptop web browser (we couldn’t get the game to recognize a USB controller, unfortunately).

Collecting those high-value items at the bottom is your ticket to a lot of extra lives. Credit: YouTube Playables

As exciting as the high-speed maze gameplay gets, though, Pac-Man Superfast is hampered by a few odd design decisions. The game ends abruptly after just 13 levels, for instance, making it impossible to even attempt the high-endurance 256-level runs that Pac-Man is known for. The game also throws an extra life at you every 5,000 points, making it relatively easy to brute force your way to the end as long as you focus on the three increasingly high-point-value items that appear periodically on each stage.

Despite this, the game doesn’t give any point reward for unused extra lives or long-term survival at high speeds, limiting the rewards for high-level play. And the lack of a built-in leaderboard makes it hard to directly compare your performance to friends and/or strangers anyway.

A large part of the reason I wrote about this game was to see if someone could beat my high score. Credit: YouTube Playables

Those issues aside, I’ve had a blast coming back to Pac-Man Superfast over and over again in the past few days, slowly raising my high score above the 162,000 point mark during coffee breaks (consider the gauntlet thrown, Ars readers). If you’re a fan of classic arcade games, Pac-Man Superfast is worth a try before the “YouTube Playables” initiative inevitably joins the growing graveyard of discontinued Google products.



Google’s frighteningly good Veo 3 AI videos to be integrated with YouTube Shorts

Even in the age of TikTok, YouTube viewership continues to climb. While Google’s iconic video streaming platform has traditionally pushed creators to produce longer videos that can accommodate more ads, the site’s Shorts format is growing fast. That growth may explode in the coming months, as YouTube CEO Neal Mohan has announced that the Google Veo 3 AI video generator will be integrated with YouTube Shorts later this summer.

According to Mohan, Shorts viewership is growing even faster than YouTube as a whole. The streaming platform is now the most watched source of video in the world, but Shorts specifically have seen a massive 186 percent increase in viewership over the past year. Mohan says Shorts now average 200 billion daily views.

YouTube has already equipped creators with a few AI tools, including Dream Screen, which can produce AI video backgrounds with a text prompt. Veo 3 support will be a significant upgrade, though. At the Cannes festival, Mohan revealed that the streaming site will begin offering integration with Google’s leading video model later this summer. “I believe these tools will open new creative lanes for everyone to explore,” said Mohan.

YouTube heavily promotes Shorts on the homepage. Credit: Google

This move will require a few tweaks to Veo 3 outputs, but it seems like a perfect match. As the name implies, YouTube Shorts is intended for short video content. The format initially launched with a 30-second ceiling, but that has since been increased to 60 seconds. Because of the astronomical cost of generative AI, each generated Veo clip is quite short, a mere eight seconds in the current version of the tool. Slap a few of those together, and you’ve got a YouTube Short.
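A quick back-of-the-envelope check of that math, using only the numbers stated above (a 60-second Shorts ceiling and 8-second Veo clips):

```python
import math

SHORT_MAX_SECONDS = 60  # current YouTube Shorts length ceiling
VEO_CLIP_SECONDS = 8    # length of one Veo 3 clip in the current tool

# Whole clips that fit inside one maximum-length Short
full_clips = SHORT_MAX_SECONDS // VEO_CLIP_SECONDS

# Clips needed to cover the full 60 seconds (the last one partially used)
clips_to_cover = math.ceil(SHORT_MAX_SECONDS / VEO_CLIP_SECONDS)

print(full_clips, clips_to_cover)  # → 7 8
```

In other words, seven full Veo clips fill 56 of the 60 seconds, so a creator stitching generated clips end to end needs at most eight to reach the cap.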



OpenAI weighs “nuclear option” of antitrust complaint against Microsoft

OpenAI executives have discussed filing an antitrust complaint with US regulators against Microsoft, the company’s largest investor, The Wall Street Journal reported Monday, marking a dramatic escalation in tensions between the two long-term AI partners. OpenAI, which develops ChatGPT, has reportedly considered seeking a federal regulatory review of the terms of its contract with Microsoft for potential antitrust law violations, according to people familiar with the matter.

The potential antitrust complaint would likely argue that Microsoft is using its dominant position in cloud services and contractual leverage to suppress competition, according to insiders who described it as a “nuclear option,” the WSJ reports.

The move could unravel one of the most important business partnerships in the AI industry—a relationship that started with a $1 billion investment by Microsoft in 2019 and has grown to include billions more in funding, along with Microsoft’s exclusive rights to host OpenAI models on its Azure cloud platform.

The friction centers on OpenAI’s efforts to transition from its current nonprofit structure into a public benefit corporation, a conversion that needs Microsoft’s approval to complete. The two companies have not been able to agree on details after months of negotiations, sources told Reuters. OpenAI’s existing for-profit arm would become a Delaware-based public benefit corporation under the proposed restructuring.

The companies are discussing revising the terms of Microsoft’s investment, including the future equity stake it will hold in OpenAI. According to The Information, OpenAI wants Microsoft to hold a 33 percent stake in a restructured unit in exchange for forgoing rights to future profits. The AI company also wants to modify existing clauses that give Microsoft exclusive rights to host OpenAI models in its cloud.



Google can now generate a fake AI podcast of your search results

NotebookLM is undoubtedly one of Google’s best implementations of generative AI technology, giving you the ability to explore documents and notes with a Gemini AI model. Last year, Google added the ability to generate so-called “audio overviews” of your source material in NotebookLM. Now, Google has brought those fake AI podcasts to search results as a test. Instead of clicking links or reading the AI Overview, you can have two nonexistent people tell you what the results say.

This feature is not currently rolling out widely—it’s available in search labs, which means you have to manually enable it. Anyone can opt in to the new Audio Overview search experience, though. If you join the test, you’ll quickly see the embedded player in Google search results. However, it’s not at the top with the usual block of AI-generated text. Instead, you’ll see it after the first few search results, below the “People also ask” knowledge graph section.


Google isn’t wasting resources to generate the audio automatically, so you have to click the generate button to get started. A few seconds later, you’re given a back-and-forth conversation between two AI voices summarizing the search results. The player includes a list of sources from which the overview is built, as well as the option to speed up or slow down playback.



Another one for the graveyard: Google to kill Instant Apps in December

But that was then, and this is now. Today, an increasing number of mobile apps are functionally identical to the mobile websites they are intended to replace, and developer uptake of Instant Apps was minimal. Even in 2017, loading an app instead of a website had limited utility. As a result, most of us probably only encountered Instant Apps a handful of times in all the years it was an option for developers.

To use the feature, which was delivered to virtually all Android devices by Google Play Services, developers had to create a special “instant” version of their app that was under 15MB. The additional legwork to get an app in front of a subset of new users meant this was always going to be a steep climb, and Google has long struggled to incentivize developers to adopt new features. Plus, there’s no way to cram in generative AI! So it’s not a shock to see Google retiring the feature.

This feature is currently listed in the collection of Google services in your phone settings as “Google Play Instant.” Unfortunately, there aren’t many examples still available if you’re curious about what Instant Apps were like—the Finnish publisher Ilta-Sanomat is one of the few still offering it. Make sure the settings toggle for Instant Apps is on if you want a little dose of nostalgia.



AI Overviews hallucinates that Airbus, not Boeing, involved in fatal Air India crash

When major events occur, most people rush to Google to find information. Increasingly, the first thing they see is an AI Overview, a feature that already has a reputation for making glaring mistakes. In the wake of a tragic plane crash in India, Google’s AI search results are spreading misinformation claiming the incident involved an Airbus plane—it was actually a Boeing 787.

Travelers are more attuned to the airliner models these days after a spate of crashes involving Boeing’s 737 lineup several years ago. Searches for airline disasters are sure to skyrocket in the coming days, with reports that more than 200 passengers and crew lost their lives in the Air India Flight 171 crash. The way generative AI operates means some people searching for details may get the wrong impression from Google’s results page.

Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We’ve run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup of both Airbus and Boeing. It’s a mess.

In this search, Google’s AI says the crash involved an Airbus A330 instead of a Boeing 787. Credit: /u/stuckintrraffic

But why is Google bringing up the Air India crash at all in the context of Airbus? Unfortunately, it’s impossible to predict whether you’ll get an AI Overview that blames Boeing or Airbus—generative AI is non-deterministic, so the output can differ from one run to the next, even for identical inputs. Our best guess for the underlying cause is that numerous articles on the Air India crash mention Airbus as Boeing’s main competitor. AI Overviews is essentially summarizing these results, and the AI goes down the wrong path because it lacks the ability to understand what is true.
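A toy sketch of why sampled generation varies from run to run: language models pick each token by sampling from a probability distribution, so repeating the same prompt can land on different answers. The distribution below is entirely made up for illustration and has nothing to do with Google's actual model:

```python
import random

# Hypothetical next-token distribution for "the crash involved a(n) ___"
# (illustrative numbers only, not real model output)
probs = {"Boeing": 0.55, "Airbus": 0.40, "other": 0.05}

def sample_token(rng: random.Random) -> str:
    """Draw one token from the weighted distribution; different random
    states (different runs) can yield different answers for the same prompt."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Two independent "runs" of the same query can disagree,
# just as repeated AI Overview searches did.
print(sample_token(random.Random(1)), sample_token(random.Random(7)))
```

Deployed systems layer retrieval and other machinery on top of this, but the basic point stands: as long as sampling is involved, identical queries are not guaranteed identical answers.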



Google left months-old dark mode bug in Android 16, fix planned for next Pixel Drop

Google’s Pixel phones got a big update this week with the release of Android 16 and a batch of Pixel Drop features. Pixels now have enhanced security, new contact features, and improved button navigation. However, some of the most interesting features, like desktop windowing and Material 3 Expressive, are coming later. Another thing that’s coming later, it seems, is a fix for an annoying bug Google introduced a few months back.

Google broke the system dark mode schedule in its March Pixel update and did not address it in time for Android 16. The company confirms a fix is coming, though.

The system-level dark theme arrived in Android 10 to offer a less eye-searing option, which is particularly handy in dark environments. It took a while for even Google’s apps to fully adopt this feature, but support is solid five years later. Google even offers a scheduling feature to switch between light and dark mode at custom times or based on sunrise/sunset. However, the scheduling feature was busted in the March update.

Currently, if you manually toggle dark mode on or off, schedules stop working. The only way to get them back is to set up your schedule again and then never toggle dark mode. Google initially marked this as “intended behavior,” but a more recent bug report was accepted as a valid issue.
