

Google balks at $270M fine after training AI on French news sites’ content


Google has agreed to pay 250 million euros (about $273 million) to settle a dispute in France after breaching years-old commitments to inform and pay French news publishers when referencing and displaying their content in search results and when using it to train Google’s AI-powered chatbot, Gemini.

According to France’s competition watchdog, the Autorité de la Concurrence (ADLC), Google dodged many commitments to deal with publishers fairly. Most recently, it never notified publishers or the ADLC before training Gemini (initially launched as Bard) on publishers’ content or displaying content in Gemini outputs. Google also waited until September 28, 2023, to introduce easy options for publishers to opt out, which made it impossible for publishers to negotiate fair deals for that content, the ADLC found.

“Until this date, press agencies and publishers wanting to opt out of this use had to insert an instruction opposing any crawling of their content by Google, including on the Search, Discover and Google News services,” the ADLC noted, warning that “in the future, the Autorité will be particularly attentive as regards the effectiveness of opt-out systems implemented by Google.”

To address breaches of four out of seven commitments in France—which the ADLC imposed in 2022 for a period of five years to “benefit” publishers by ensuring Google’s ongoing negotiations with them were “balanced”—Google has agreed to “a series of corrective measures,” the ADLC said.

Google is not happy with the fine, which it described as “not proportionate” partly because the fine “doesn’t sufficiently take into account the efforts we have made to answer and resolve the concerns raised—in an environment where it’s very hard to set a course because we can’t predict which way the wind will blow next.”

According to Google, regulators everywhere need to clearly define fair use of content when developing search tools and AI models, so that search companies and AI makers always know “whom we are paying for what.” In France, Google contends, the scope of its commitments has shifted from general news publishers to also include specialist publications and listings and comparison sites.

The ADLC agreed that “the question of whether the use of press publications as part of an artificial intelligence service qualifies for protection under related rights regulations has not yet been settled,” but noted that “at the very least,” Google was required to “inform publishers of the use of their content for their Bard software.”

Regarding Bard/Gemini, Google said that it “voluntarily introduced a new technical solution called Google-Extended to make it easier for rights holders to opt out of Gemini without impact on their presence in Search.” It has now also committed to better explain to publishers both “how our products based on generative AI work and how ‘Opt Out’ works.”
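Google-Extended is implemented as a standard robots.txt user agent token, so a publisher that wants its pages excluded from Gemini training without affecting Search indexing can add a rule like:

```
User-agent: Google-Extended
Disallow: /
```

Googlebot ignores this token when crawling for Search, which is what makes the opt-out independent of a site’s presence in search results.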

Google said that it agreed to the settlement “because it’s time to move on” and “focus on the larger goal of sustainable approaches to connecting people with quality content and on working constructively with French publishers.”

“Today’s fine relates mostly to [a] disagreement about how much value Google derives from news content,” Google’s blog said, claiming that “a lack of clear regulatory guidance and repeated enforcement actions have made it hard to navigate negotiations with publishers, or plan how we invest in news in France in the future.”

What changes did Google agree to make?

Google defended its position as “the first and only platform to have signed significant licensing agreements” in France, benefiting 280 French press publishers and “covering more than 450 publications.”

With these publishers, the ADLC found that Google breached requirements to “negotiate in good faith based on transparent, objective, and non-discriminatory criteria,” to consistently “make a remuneration offer” within three months of a publisher’s request, and to provide information for publishers to “transparently assess their remuneration.”

Google also breached commitments to “inform editors and press agencies of the use of their content by its service Bard” and of Google’s decision to link “the use of press agencies’ and publishers’ content by its artificial intelligence service to the display of protected content on services such as Search, Discover and News.”

Regarding negotiations, the ADLC found that Google not only failed to be transparent with publishers about remuneration, but also failed to keep the ADLC informed of information necessary to monitor whether Google was honoring its commitments to fairly pay publishers. Partly “to guarantee better communication,” Google has agreed to appoint a French-speaking representative in its Paris office, along with other steps the ADLC recommended.

According to the ADLC’s announcement (translated from French), Google seemingly acted sketchy in negotiations by not meeting non-discrimination criteria—and unfavorably treating publishers in different situations identically—and by not mentioning “all the services that could generate revenues for the negotiating party.”

“According to the Autorité, not taking into account differences in attractiveness between content does not allow for an accurate reflection of the contribution of each press agency and publisher to Google’s revenues,” the ADLC said.

Also problematically, Google established a minimum threshold of 100 euros for remuneration that it has now agreed to drop.

This threshold, “in its very principle, introduces discrimination between publishers that, below a certain threshold, are all arbitrarily assigned zero remuneration, regardless of their respective situations,” the ADLC found.



Google reshapes Fitbit in its image as users allege “planned obsolescence”

Google Fitbit, emphasis on Google —

Generative AI may not be enough to appease frustrated customers.

Google Fitbit’s Charge 5, in Lunar White and Soft Gold. (Image: Fitbit)

Google closed its Fitbit acquisition in 2021. Since then, the tech behemoth has pushed numerous changes to the wearable brand, including upcoming updates announced this week. While Google reshapes its fitness tracker business, though, some long-time users are regretting their Fitbit purchases and questioning if Google’s practices will force them to purchase their next fitness tracker elsewhere.

Generative AI coming to Fitbit (of course)

As is becoming common practice with consumer tech announcements, Google’s latest announcements about Fitbit seemed to be trying to convince users of the wonders of generative AI and how that will change their gadgets for the better. In a blog post yesterday, Dr. Karen DeSalvo, Google’s chief health officer, announced that Fitbit Premium subscribers would be able to test experimental AI features later this year (Google hasn’t specified when).

“You will be able to ask questions in a natural way and create charts just for you to help you understand your own data better. For example, you could dig deeper into how many active zone minutes… you get and the correlation with how restorative your sleep is,” she wrote.

DeSalvo’s post included an example of a user asking a chatbot if there was a connection between their sleep and activity and said that the experimental AI features will only be available to “a limited number of Android users who are enrolled in the Fitbit Labs program in the Fitbit mobile app.”

Google shared this image as an example of what future Fitbit generative AI features could look like.

Fitbit is also working with the Google Research team and “health and wellness experts, doctors, and certified coaches” to develop a large language model (LLM) for upcoming Fitbit mobile app features that pull data from Fitbit and Pixel devices, DeSalvo said. The announcement follows Google’s decision to stop selling Fitbits in places where it doesn’t sell Pixels, taking the trackers off shelves in a reported 29 countries.

In a blog post yesterday, Yossi Matias, VP of engineering and research at Google, said the company wants to use the LLM to add personalized coaching features, such as the ability to look for sleep irregularities and suggest actions “on how you might change the intensity of your workout.”

Google’s Fitbit is building the LLM on Gemini models that are tweaked on de-identified data from unspecified “research case studies,” Matias said, adding: “For example, we’re testing performance using sleep medicine certification exam-like practice tests.”

Gemini, which Google released in December, has been criticized for generating historically inaccurate images. After users complained about different races and ethnicities being inaccurately portrayed in prompts for things like Nazi members and medieval British kings, Google pulled the feature last month and said it would release a fix “soon.”

In a press briefing, Florence Thng, director and product lead at Fitbit, suggested that such problems wouldn’t befall Fitbit’s LLM since it’s being tested by users before an official rollout, CNET reported.

Other recent changes to Fitbit include a name tweak from Fitbit by Google, to Google Fitbit, as spotted by 9to5Google this week.

A screenshot from Fitbit’s homepage.

Combined with other changes that Google has brought to Fitbit over the past two years—including axing most social features, computer syncing, and the browser-based SDK for developing apps, plus pushing users to log in with Google accounts ahead of Google shuttering all Fitbit accounts in 2025—Fitbit, like many acquired firms, is giving long-time customers a different experience than it did before it was bought.

Disheartened customers

Meanwhile, customers, especially Charge 5 users, are questioning whether their next fitness tracker will come from Fitbit, er, Google Fitbit.

For example, in January, we reported that users were claiming that their Charge 5 suddenly started draining battery rapidly after installing a firmware update that Fitbit released in December. As of this writing, one thread discussing the problem on Fitbit’s support forum has 33 pages of comments. Google told the BBC in January that it didn’t know what the problem was but knew that it wasn’t tied to firmware, and it hasn’t offered a further explanation since. The company hasn’t responded to multiple requests from Ars Technica for comment. In the meantime, users continue to report problems on Fitbit’s forum. Per user comments, the most Google has done is offer discounts or, if the device was within its warranty period, a replacement.

“This is called planned obsolescence. I’ll be upgrading to a watch style tracker from a different company. I wish Fitbit hadn’t sold out to Google,” a forum user going by Sean77024 wrote on Fitbit’s support forum yesterday.

Others, like 2MeFamilyFlyer, have also accused Fitbit of planning Charge 5 obsolescence. 2MeFamilyFlyer said they’re seeking a Fitbit alternative.

The ongoing problems with the Charge 5, which was succeeded by the Charge 6 on October 12, have some people, like reneeshawgo on Fitbit’s forum and PC World Senior Editor Alaina Yee, saying that Fitbit devices aren’t meant to last long. In January, Yee wrote: “You should see Fitbits as a 1-year purchase in the US and two years in regions with better warranty protections.”

For many, a year or two wouldn’t be sufficient, even if the Fitbit came with trendy AI features.



YouTube will require disclosure of AI-manipulated videos from creators

You could also just ban manipulations altogether? —

YouTube wants “realistic” likenesses or audio fabrications to be labeled.


YouTube is rolling out a new requirement for content creators: You must disclose when you’re using AI-generated content in your videos. The disclosure appears in the video upload UI and will be used to power an “altered content” warning on videos.

Google previewed the “misleading AI content” policy in November, but the questionnaire is now going live. Google is mostly concerned about altered depictions of real people or events, which sounds like more election-season concerns about how AI can mislead people. Just last week, Google disabled election questions for its “Gemini” chatbot.

As always, the exact rules on YouTube are up for interpretation. Google says it’s “requiring creators to disclose to viewers when realistic content—content a viewer could easily mistake for a real person, place, or event—is made with altered or synthetic media, including generative AI,” but doesn’t require creators to disclose manipulated content that is “clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance.”

Google gives examples of when a disclosure is necessary, and the new video upload questionnaire walks content creators through these requirements:

  • Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video.
  • Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different from reality.
  • Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.
Screenshots of Google’s video upload questionnaire show a super-tiny message at the bottom of the video page denoting “altered or synthetic content,” with a description that viewers can expand for slightly more info.

Google says the labels will start rolling out “across all YouTube surfaces and formats in the weeks ahead, beginning with the YouTube app on your phone, and soon on your desktop and TV.” The company says it’s also working on a process for people who are the subject of an AI-manipulated video to request its removal, but it doesn’t have details on that yet.



Google’s phone app no longer searches Google Maps

AI is the future though —

Google’s search-infused phone app was touted as a major feature a few years ago.

The Google Phone’s Play Store listing still touts Nearby Places as a major feature. (Image: Google)

9to5Google reports that Google has killed off the Google Phone app’s “nearby places” feature. Google announced the impending death of the feature in February, saying: “We’ve found only a very small number of people use this feature, and the vast majority of users go to Google Search or Maps when seeking business-related phone numbers.” Now it’s really dead.

The “Nearby Places” feature in the Google Phone app seemed like a useful and common-sense feature. It connected the power of Google Maps to the phone app, allowing the phone search bar to not only look through your contacts but also businesses listed in Google Maps. When you want to call the local pizza place, just type in the name, rather than some arcane string of numbers, and hit “dial.”
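Conceptually, the feature merged two lookups behind the dialer’s one search box: your contacts first, then Google Maps business listings near you. A toy sketch of that merge (all data and names invented for illustration):

```python
# Illustrative stand-ins for the two data sources the dialer combined:
# the user's contacts, and a Google Maps-style nearby-business lookup.
CONTACTS = {"Alice": "+1-555-0100"}
NEARBY_PLACES = {"Mario's Pizza": "+1-555-0199"}

def dialer_search(query: str) -> list:
    """Return (name, number) matches, contacts first, then nearby
    businesses, the way the Phone app's Nearby Places search worked."""
    q = query.lower()
    results = [(n, num) for n, num in CONTACTS.items() if q in n.lower()]
    results += [(n, num) for n, num in NEARBY_PLACES.items() if q in n.lower()]
    return results
```

Typing “pizza” would surface the business listing and its number even though it is not in your contacts.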

The feature has been around on Pixel phones since at least the Pixel 2 and has been generally available to anyone who downloaded the “Phone by Google” app in the Play Store for the past few years. It was a perfect “Google” feature, combining the company’s OS, breadth of online data, and search into a useful function. Google has made its AI-infused phone app a primary selling point of Pixel phones over the years, so stripping it of features is weird.

Google claims that the feature is being killed because it was used by a “very small number of people,” but it also might be shutting it down because it hasn’t worked reliably for a while now. It looks like Google broke the Nearby Places search around August 2023. Here’s a bug report from around that time with 100 comments, and there are several Reddit and Google forum posts out there. Even new phones were shipping with the feature disabled. One reason for Google’s apparent disinterest in the feature is that the phone app’s Nearby Places searches took traffic away from Google Maps. Maps shows ads in the search results, and the phone app didn’t.



Apple may hire Google to power new iPhone AI features using Gemini—report

Bake a cake as fast as you can —

With Apple’s own AI tech lagging behind, the firm looks for a fallback solution.


Benj Edwards

On Monday, Bloomberg reported that Apple is in talks to license Google’s Gemini model to power AI features like Siri in a future iPhone software update coming later in 2024, according to people familiar with the situation. Apple has also reportedly conducted similar talks with ChatGPT maker OpenAI.

The potential integration of Google Gemini into iOS 18 could bring a range of new cloud-based (off-device) AI-powered features to Apple’s smartphone, including image creation or essay writing based on simple prompts. However, the terms and branding of the agreement have not yet been finalized, and the implementation details remain unclear. The companies are unlikely to announce any deal until Apple’s annual Worldwide Developers Conference in June.

Gemini could also bring new capabilities to Apple’s widely criticized voice assistant, Siri, which trails newer AI assistants powered by large language models (LLMs) in understanding and responding to complex questions. Rumors of Apple’s own internal frustration with Siri—and potential remedies—have been kicking around for some time. In January, 9to5Mac revealed that Apple had been conducting tests with a beta version of iOS 17.4 that used OpenAI’s ChatGPT API to power Siri.

As we have previously reported, Apple has also been developing its own AI models, including a large language model codenamed Ajax and a basic chatbot called Apple GPT. However, the company’s LLM technology is said to lag behind that of its competitors, making a partnership with Google or another AI provider a more attractive option.

Google launched Gemini, a language-based AI assistant similar to ChatGPT, in December and has updated it several times since. Many industry experts consider the larger Gemini models to be roughly as capable as OpenAI’s GPT-4 Turbo, which powers the subscription versions of ChatGPT. Until just recently, with the emergence of Gemini Ultra and Claude 3, OpenAI’s top model held a fairly wide lead in perceived LLM capability.

The potential partnership between Apple and Google could significantly impact the AI industry, as Apple’s platform represents more than 2 billion active devices worldwide. If the agreement gets finalized, it would build upon the existing search partnership between the two companies, which has seen Google pay Apple billions of dollars annually to make its search engine the default option on iPhones and other Apple devices.

However, Bloomberg reports that the potential partnership between Apple and Google is likely to draw scrutiny from regulators, as the companies’ current search deal is already the subject of a lawsuit by the US Department of Justice. The European Union is also pressuring Apple to make it easier for consumers to change their default search engine away from Google.

With so much potential money on the line, selecting Google for Apple’s cloud AI job could potentially be a major loss for OpenAI in terms of bringing its technology widely into the mainstream—with a market representing billions of users. Even so, any deal with Google or OpenAI may be a temporary fix until Apple can get its own LLM-based AI technology up to speed.



Google says Chrome’s new real-time URL scanner won’t invade your privacy

We don’t need another way to track you —

Google says URL hashes and a third-party relay server will keep it out of your history.

Google’s safe browsing warning is not subtle. (Image: Google)

Google Chrome’s “Safe Browsing” feature—the thing that pops up a giant red screen when you try to visit a malicious website—is getting real-time updates for all users. Google announced the change on the Google Security Blog. Real-time protection naturally means sending URL data to some far-off server, but Google says it will use “privacy-preserving URL protection” so it won’t get a list of your entire browsing history. (Not that Chrome doesn’t already have features that log your history or track you.)

Safe Browsing basically boils down to checking your current website against a list of known bad sites. Google’s old implementation happened locally, which had the benefit of not sending your entire browsing history to Google, but that meant downloading the list of bad sites at 30- to 60-minute intervals. There are a few problems with local downloads. First, Google says the majority of bad sites exist for “less than 10 minutes,” so a 30-minute update time isn’t going to catch them. Second, the list of all bad websites on the entire Internet is going to be very large and constantly growing, and Google says that “not all devices have the resources necessary to maintain this growing list.”

If you really want to shut down malicious sites, what you want is real-time checking against a remote server. There are a lot of bad ways you could do this. One way would be to just send every URL to the remote server, and you’d basically double Internet website traffic for all of Chrome’s 5 billion users. To cut down on those server requests, Chrome is instead going to download a list of known good sites, and that will cover the vast majority of web traffic. Only the small, unheard-of sites will be subject to a server check, and even then, Chrome will keep a cache of your recent small site checks, so you’ll only check against the server the first time.

When you’re not on the known-safe-site list or recent cache, info about your web URL will be headed to some remote server, but Google says it won’t be able to see your web history. Google does all of its URL checking against hashes, rather than the plain-text URL. Previously, Google offered an opt-in “enhanced protection” mode for safe browsing, which offered more up-to-date malicious site blocking in exchange for “sharing more security-related data” with Google, but the company thinks this new real-time mode is privacy-preserving enough to roll out to everyone by default. The “Enhanced” mode is still sticking around since that allows for “deep scans for suspicious files and extra protection from suspicious Chrome extensions.”
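The flow Google describes (allowlist first, then a local cache, then a hashed server lookup) can be sketched roughly as follows. The real protocol canonicalizes each URL into multiple expressions and sends 4-byte SHA-256 hash prefixes through the OHTTP relay, so everything below is a simplified illustration with invented names:

```python
import hashlib

# Illustrative local state: a downloaded set of known-good hosts and a
# cache of recent server verdicts, keyed by hex-encoded hash prefix.
KNOWN_GOOD = {"example.com", "wikipedia.org"}
verdict_cache = {}  # hash prefix -> "safe" | "unsafe"

def hash_prefix(url: str, length: int = 4) -> str:
    """First `length` bytes of SHA-256(url), hex-encoded. (Safe Browsing
    actually hashes canonicalized URL expressions, not the raw URL.)"""
    return hashlib.sha256(url.encode("utf-8")).digest()[:length].hex()

def check_url(url: str, host: str, query_server) -> str:
    # 1. The vast majority of traffic matches the known-good list
    #    and never leaves the device.
    if host in KNOWN_GOOD:
        return "safe"
    # 2. Otherwise, consult the cache of recent checks.
    prefix = hash_prefix(url)
    if prefix in verdict_cache:
        return verdict_cache[prefix]
    # 3. Only now contact the server, sending just the hash prefix
    #    (via the OHTTP relay, so Google never sees IP + prefix together).
    verdict = query_server(prefix)
    verdict_cache[prefix] = verdict
    return verdict
```

Only step 3 touches the network, and only with a short hash prefix rather than the plain-text URL, which is the basis of Google’s privacy claim.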

Google’s diagram of how the whole process works. (Image: Google)

Interestingly, the privacy scheme involves a relay server that will be run by a third party. Google says, “In order to preserve user privacy, we have partnered with Fastly, an edge cloud platform that provides content delivery, edge compute, security, and observability services, to operate an Oblivious HTTP (OHTTP) privacy server between Chrome and Safe Browsing.”

For now, Google’s remote checks, when they happen, will mean some latency while your safety check completes, but Google says it’s “in the process of introducing an asynchronous mechanism, which will allow the site to load while the real-time check is in progress. This will improve the user experience, as the real-time check won’t block page load.”

The feature should be live in the latest Chrome release for desktop, Android, and iOS. If you don’t want it, you can turn it off in the “Privacy and security” section of the Chrome settings.




Google’s new gaming AI aims past “superhuman opponent” and at “obedient partner”

Even hunt-and-fetch quests are better with a little AI help.

At this point in the progression of machine-learning AI, we’re accustomed to specially trained agents that can utterly dominate everything from Atari games to complex board games like Go. But what if an AI agent could be trained not just to play a specific game but also to interact with any generic 3D environment? And what if that AI was focused not only on brute-force winning but instead on responding to natural language commands in that gaming environment?

Those are the kinds of questions animating Google’s DeepMind research group in creating SIMA, a “Scalable, Instructable, Multiworld Agent” that “isn’t trained to win, it’s trained to do what it’s told,” as research engineer Tim Harley put it in a presentation attended by Ars Technica. “And not just in one game, but… across a variety of different games all at once.”

Harley stresses that SIMA is still “very much a research project,” and the results achieved in the project’s initial tech report show there’s a long way to go before SIMA starts to approach human-level listening capabilities. Still, Harley said he hopes that SIMA can eventually provide the basis for AI agents that players can instruct and talk to in cooperative gameplay situations—think less “superhuman opponent” and more “believable partner.”

“This work isn’t about achieving high game scores,” as Google puts it in a blog post announcing its research. “Learning to play even one video game is a technical feat for an AI system, but learning to follow instructions in a variety of game settings could unlock more helpful AI agents for any environment.”

Learning how to learn

Google trained SIMA on nine very different open-world games in an attempt to create a generalizable AI agent.

To train SIMA, the DeepMind team focused on three-dimensional games and test environments controlled either from a first-person perspective or an over-the-shoulder third-person perspective. The nine games in its test suite, which were provided by Google’s developer partners, all prioritize “open-ended interactions” and eschew “extreme violence” while providing a wide range of different environments and interactions, from “outer space exploration” to “wacky goat mayhem.”

In an effort to make SIMA as generalizable as possible, the agent isn’t given any privileged access to a game’s internal data or control APIs. The system takes nothing but on-screen pixels as its input and provides nothing but keyboard and mouse controls as its output, mimicking “the [model] humans have been using [to play video games] for 50 years,” as the researchers put it. The team also designed the agent to work with games running in real time (i.e., at 30 frames per second) rather than slowing down the simulation for extra processing time like some other interactive machine-learning projects.
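That input/output contract can be pictured as a minimal interface. Everything below is a hypothetical stub, not DeepMind’s code, but it shows the constraint the researchers describe: only screen pixels (plus a language instruction) go in, and only keyboard/mouse events come out, once per frame in real time.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Keyboard and mouse output, mirroring what a human player produces."""
    keys: list          # keys held this frame, e.g. ["w"]
    mouse_dx: float     # relative mouse movement
    mouse_dy: float

class GenericGameAgent:
    """Stub of a SIMA-style agent: sees only screen pixels plus a
    natural-language instruction, emits only key/mouse actions."""

    def __init__(self, instruction: str):
        self.instruction = instruction

    def act(self, frame) -> Action:
        # A real agent runs a learned policy over the pixels here; this
        # stub just walks forward, which suffices to show the contract.
        return Action(keys=["w"], mouse_dx=0.0, mouse_dy=0.0)

def run_episode(agent, get_frame, apply_action, frames=30):
    # Real-time loop: at 30 fps the agent gets one frame and must
    # return one action, with no access to the game's internal state.
    for _ in range(frames):
        apply_action(agent.act(get_frame()))
```

Because the interface is just pixels in and inputs out, the same agent can in principle be pointed at any new game “off the shelf,” which is exactly the transfer property the team wants to test.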

Animated samples of SIMA responding to basic commands across very different gaming environments.

While these restrictions increase the difficulty of SIMA’s tasks, they also mean the agent can be integrated into a new game or environment “off the shelf” with minimal setup and without any specific training regarding the “ground truth” of a game world. It also makes it relatively easy to test whether things SIMA has learned from training on previous games can “transfer” over to previously unseen games, which could be a key step to getting at artificial general intelligence.

For training data, SIMA uses video of human gameplay (and associated time-coded inputs) on the provided games, annotated with natural language descriptions of what’s happening in the footage. These clips are focused on “instructions that can be completed in less than approximately 10 seconds” to avoid the complexity that can develop with “the breadth of possible instructions over long timescales,” as the researchers put it in their tech report. Integration with pre-trained models like SPARC and Phenaki also helps the SIMA model avoid having to learn how to interpret language and visual data from scratch.



Google’s Gemini AI now refuses to answer election questions

I also refuse to answer political questions —

Gemini is opting out of election-related responses entirely for 2024.

The Google Gemini logo. (Image: Google)

Like many of us, Google Gemini is tired of politics. Reuters reports that Google has restricted the chatbot from answering questions about the upcoming US election, and instead, it will direct users to Google Search.

Google had planned to do this back when the Gemini chatbot was still called “Bard.” In December, the company said, “Beginning early next year, in preparation for the 2024 elections and out of an abundance of caution on such an important topic, we’ll restrict the types of election-related queries for which Bard and [Google Search’s Bard integration] will return responses.” Tuesday, Google confirmed to Reuters that those restrictions have kicked in. Election queries now tend to come back with the refusal: “I’m still learning how to answer this question. In the meantime, try Google Search.”

Google’s original plan in December was likely to disable election info so Gemini could avoid any political firestorms. Boy, did that not work out! When asked to generate images of people, Gemini quietly tacked diversity requirements onto the image request; this practice led to offensive and historically inaccurate images along with a general refusal to generate images of white people. Last month that earned Google wall-to-wall coverage in conservative news spheres along the lines of “Google’s woke AI hates white people!” Google CEO Sundar Pichai called the AI’s “biased” responses “completely unacceptable,” and for now, creating images of people is disabled while Google works on it.

The start of the first round of US elections in the AI era has already led to new forms of disinformation, and Google presumably wants to opt out of all of it.



Google’s self-designed office swallows Wi-Fi “like the Bermuda Triangle”

Please return to the office. You’ll be more productive! —

Bad radio propagation means Googlers are making do with Ethernet cables, phone hotspots.

Google’s Bay View campus was designed with the world’s strangest roof line. (Image: Google)

Google’s swanky new “Bay View” campus apparently has a major problem: bad Wi-Fi. Reuters reports that Google’s first self-designed office building has “been plagued for months by inoperable or, at best, spotty Wi-Fi, according to six people familiar with the matter.” A Google spokesperson confirmed the problems and said the company is working on fixing them.

Bay View opened in May 2022. At launch, Google’s VP of Real Estate & Workplace Services, David Radcliffe, said the site “marks the first time we developed one of our own major campuses, and the process gave us the chance to rethink the very idea of an office.” The result is a wild tent-like structure with a striking roofline made up of swooping square sections. Of course, it’s all made of metal and glass, but the roof shape looks like squares of cloth held up by poles—each square section has high points on the four corners and sags down in the middle. The roof is covered in solar cells and collects rainwater while also letting in natural light, and Google calls it the “Gradient Canopy.”

We’ll guess the roofline’s multiple parabolic sections are great at scattering the Wi-Fi signal.

Google

All those peaks and parabolic ceiling sections apparently aren’t great for Wi-Fi propagation, with the Reuters report saying that the roof “swallows broadband like the Bermuda Triangle.” Googlers assigned to the building are making do with Ethernet cables, using phones as hotspots, or working outside, where the Wi-Fi is stronger. One anonymous employee told Reuters, “You’d think the world’s leading Internet company would have worked this out.”
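For a sense of scale, even before any roof geometry gets involved, Wi-Fi signal strength falls off steeply with distance and frequency. The back-of-the-envelope sketch below uses the standard free-space path loss formula; it's purely illustrative, since real indoor loss in a building like Bay View would be dominated by reflections off metal surfaces and multipath effects that this formula doesn't capture.

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for a distance in meters and a
    frequency in MHz. Indoor loss is higher: walls, metal roofing,
    and multipath reflections all add attenuation on top of this."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

# Baseline loss over 30 m of open air at common Wi-Fi bands:
for freq in (2400, 5000):  # MHz
    print(f"{freq} MHz: {fspl_db(30, freq):.1f} dB")
```

The takeaway is that the 5 GHz band already loses several dB more than 2.4 GHz over the same distance, so a ceiling that scatters or absorbs signals eats into an already thin link budget.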

Having an office with barely working Wi-Fi sure is awkward for a company pushing a “return to office” plan that includes at least three days a week at Google’s Wi-Fi desert. A Google spokesperson told Reuters the company has already made several improvements and hopes to have a fix in the coming weeks.



Google says the AI-focused Pixel 8 can’t run its latest smartphone AI models

we’re all trying to find the guy who did this —

Gemini Nano can’t run on the smaller Pixel 8 due to mysterious “hardware limitations.”

The bigger Pixel 8 Pro gets the latest AI features. The smaller model does not.

Google

If you believe Google’s marketing hype, AI in a phone is really, really important, the best AI is Google’s, and the best place to get that AI is Google’s flagship smartphone, the Pixel 8. We’re five months removed from the launch of the Pixel 8, and that doesn’t seem like a justifiable position anymore: Google says its latest AI models can’t run on the Pixel 8.

Google dropped that news in a Mobile World Congress wrap-up video that was spotted by Mishaal Rahman. At the end of the show in a Q&A session, Googler Terence Zhang, a member of the Gemini-on-Android team, said “[Gemini] Nano will not be coming to the Pixel 8 because of some hardware limitations. It’s currently on the Pixel 8 Pro and very recently available on the Samsung S24 family. It’ll be coming to more high-end devices in the near future.”

That is a wild statement. Gemini is Google’s latest AI model, and Google made a big deal of its launch last month. Gemini comes in a few different sizes, and the smallest “Nano” size is specifically designed to run on smartphones as a much-hyped “on-device AI.” The Pixel 8 and Pixel 8 Pro are Google’s flagship smartphones. Google designed the phone, the chip, and the AI model, and somehow it can’t make these things play nice together?

Adding to the weirdness is that Gemini Nano can run on the Pixel 8 Pro but not the smaller Pixel 8 due to “hardware limitations.” What limitations would those be, exactly? The two phones have the exact same Google Tensor SoC. They run the same software. The main differences between the two phones are screen size (6.7 inches versus 6.2), battery size, a different camera loadout, and 8GB of RAM versus 12GB. RAM is the only known difference you can point to that could create a processing limitation, but Gemini Nano also runs on the Galaxy S24 series, where the base model has 8GB of RAM. RAM being the issue would mean Samsung phones are somehow more RAM efficient than Pixel phones, which is hard to believe. If the Pixel 8 Pro Tensor 3 and Pixel 8 Tensor 3 are different somehow, that’s not on the spec sheet.

Five months ago at the Pixel 8 launch event, Google painted a very different picture of the Pixel 8 series: “I’m excited to introduce you to the next evolution of AI in your hand, Google Pixel 8 Pro and Google Pixel 8. Our latest phones bring together so many technologies from across Google. They’re the first phones to use our latest Google Tensor chip. They include the very best Android experience, first-of-its-kind camera experiences, and the latest AI advancements from Google.” Both devices feature the custom Google Tensor 3 SoC that Google claimed was “designed specifically to bring Google’s AI breakthroughs directly to Pixel users and show the world what’s possible.” This custom Google AI-focused design was supposed to deliver “unbelievably helpful experiences that no other phone can.”

Google’s “Compare” page does not clearly communicate to customers what they’re buying.

Google

When a company launches two phones at once, it’s often hard to tell what the actual differences between the models are. Sometimes the devices get talked about in the plural, while other times “Pixel 8” stands in for both; sometimes the more expensive device gets singled out for no reason other than being the flagship. Between the hour-long presentation and the private press pre-briefing that Ars was a part of, “What’s the difference?” became a well-worn question. Usually, the go-to delineator is the spec sheet, which should spell out in clear language what you’re actually buying. The Google Store has a compare page where you can directly pit the Pixel 8 and Pixel 8 Pro against each other, and nothing there spells out a difference in AI processing capabilities or in the Tensor chips.

In the case of the Pixel 8 and Pixel 8 Pro, Google wasn’t clear enough in its communication at launch. Re-watching the launch presentation today, with the new knowledge that there is some dramatic difference in AI processing capabilities, you can pick up on phrases like the “Pixel 8 Pro’s on-device LLM” and read them as a declaration of exclusive AI capabilities for the Pro model, but that wasn’t clear at the time.

As a consumer, it’s hard not to feel misled, and it’s embarrassing for Google. But to care about this in practice, you’d need to know what the heck “Gemini Nano” actually does and why you should want it, and that’s a hard question to answer. Google has a page detailing some of the features Gemini Nano powers on the Pixel 8 Pro, though a given feature could also be powered by different models on different devices. For what it’s worth, the rundown lists a “summarize” feature for the Google Recorder app and “smart reply” in Gboard; plenty of Google apps already have a “smart reply” feature without Gemini Nano. Third-party developers can also plug into the onboard Gemini Nano model for their apps, but it’s hard to imagine anyone doing that with such limited device support.

The other option is to forget about doing AI on-device and just do it in the cloud. Case in point: none of this Gemini Nano business has anything to do with the Google Gemini chatbot, which runs entirely in the cloud. A big question is what this means for the smaller Pixel 8 going forward. Google promised seven years of OS updates for the new Pixels, and stripping out features over “hardware limitations” just five months in is a disappointment.



US gov’t announces arrest of former Google engineer for alleged AI trade secret theft

Don’t trade the secrets dept. —

Linwei Ding faces four counts of trade secret theft, each with a potential 10-year prison term.

A Google sign stands in front of the building on the sidelines of the opening of the new Google Cloud data center in Hesse, Hanau, opened in October 2023.

On Wednesday, authorities arrested former Google software engineer Linwei Ding in Newark, California, on charges of stealing AI trade secrets from the company. The US Department of Justice alleges that Ding, a Chinese national, committed the theft while secretly working with two China-based companies.

According to the indictment, Ding, who was hired by Google in 2019 and had access to confidential information about the company’s data centers, began uploading hundreds of files into a personal Google Cloud account two years ago.

The trade secrets Ding allegedly copied contained “detailed information about the architecture and functionality of GPU and TPU chips and systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of executing at the cutting edge of machine learning and AI technology,” according to the indictment.

Shortly after the alleged theft began, Ding was offered the position of chief technology officer at an early-stage technology company in China that touted its use of AI technology. The company offered him a monthly salary of about $14,800, plus an annual bonus and company stock. Ding reportedly traveled to China, participated in investor meetings, and sought to raise capital for the company.

Investigators reviewed surveillance camera footage that showed another employee scanning Ding’s name badge at the entrance of the Google building where Ding worked, making it appear that Ding was in his office when he was actually traveling.

Ding also founded and served as the chief executive of a separate China-based startup company that aspired to train “large AI models powered by supercomputing chips,” according to the indictment. Prosecutors say Ding did not disclose either affiliation to Google, which described him as a junior employee. He resigned from Google on December 26 of last year.

The FBI served a search warrant at Ding’s home in January, seizing his electronic devices and later executing an additional warrant for the contents of his personal accounts. Authorities found more than 500 unique files of confidential information that Ding allegedly stole from Google. The indictment says that Ding copied the files into the Apple Notes application on his Google-issued Apple MacBook, then converted the Apple Notes into PDF files and uploaded them to an external account to evade detection.

“We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets,” Google spokesperson José Castañeda told Ars Technica. “After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement. We are grateful to the FBI for helping protect our information and will continue cooperating with them closely.”

Attorney General Merrick Garland announced the case against the 38-year-old at an American Bar Association conference in San Francisco. Ding faces four counts of federal trade secret theft, each carrying a potential sentence of up to 10 years in prison.



Worried about roundabouts? Waze wants to help

📲🗺️📍🚙 —

Google’s other navigation app is getting some new features.

In this photo illustration a Waze logo of a GPS navigation software app is seen on a smartphone and a pc screen.

Pavlo Gonchar/SOPA Images/LightRocket via Getty Images

Waze, the navigation app owned by Google, is adding some new features. Some of these are safety-oriented, like alerts about first responders or speed limit changes. Others are convenience-minded, like help navigating roundabouts or parking information. It’s also expanding its use of crowdsourcing to determine road conditions.

When Google bought Waze in 2013, the navigation app was already well-liked for adding a slightly social aspect to in-car navigation, something that seems adorably quaint, and perhaps unthinkable, 11 years later.

Over the years, Google has slowly incorporated more of Waze’s features into its own Google Maps platform and taken away Waze’s autonomy, too. In 2022, it was formally merged into the same division at Google that runs Maps, and last year, Google laid off some workers and ditched Waze’s own ad platform for Google ads.

Considering Google’s notorious habit of taking an axe to well-liked apps and services, it’s fair to wonder how much longer Waze will continue to exist. But despite that existential threat, Waze keeps updating and improving its app.

Last year, it added crash history alerts to warn drivers of crash hotspots they might be approaching. Later this month, it’s adding speed limit alerts for both Android and iOS users; the feature notifies a driver of an impending speed limit decrease once they’re within 500 feet of it. This functionality is common on new cars that use camera-based lane-keeping systems, but for everyone else on the road, it ought to be a handy update.
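The core of a "within 500 feet" alert like this is just a geofenced distance check between the car's position and a known point where the limit changes. Waze hasn't published how its implementation works, so the sketch below is purely illustrative, using the standard haversine great-circle distance; the function names and the `(lat, lon)` tuple convention are my own assumptions.

```python
import math

EARTH_RADIUS_M = 6_371_000
ALERT_RADIUS_M = 152.4  # 500 feet in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points,
    via the haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def should_alert(car, change_point):
    """Fire the alert once the car is within 500 feet of the point
    where the speed limit drops. Both arguments are (lat, lon) tuples."""
    return haversine_m(*car, *change_point) <= ALERT_RADIUS_M
```

A production version would also check that the change point lies ahead on the driver's current route rather than merely nearby, but the distance gate is the part the 500-foot figure describes.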

This month will also see Waze give alerts about impending speed bumps, toll booths, and sharp curves.

Another new safety feature is already available for all Waze users in the US, Canada, Mexico, and France. This alerts users if there’s an emergency vehicle stopped along the route. Connected car drivers in Germany have benefited from a similar system—for Waze’s feature, the data comes from its “Waze for City” partners.

  • An example of Waze’s new road alert.

    Waze

  • An example of Waze’s new emergency vehicle alert.

  • An example of Waze’s new speed limit decrease alert.

    Waze

  • An example of Waze’s roundabout navigation update.

    Waze

  • Waze will now display information about parking garages.

    Waze

  • You can book parking in the app.

    Waze

  • Waze will now know your usual routes and can tell you if it’s quicker to go a different way.

    Waze

Waze’s new roundabout navigation should be a boon to tourists planning to drive to Washington, DC. Again, it’s using crowdsourced data to show users where to enter a roundabout and where to leave it, as well as which lane to be in if there’s more than one. Waze says this feature will roll out to all its Android users across the globe this month. But if you use iOS, you’ll just have to keep circumnavigating that traffic circle until sometime later this year.

Rather than use crowdsourced info, the new parking update is a partnership with the parking platform Flash. It will show users information like whether the parking is covered, if it’s wheelchair accessible, and if there is EV charging or valet parking, and you’ll be able to reserve parking via the app. (Flash says its “Book Online” feature is also coming to Google Maps.) For now, Flash’s database covers about 30,000 parking garages in the US and Canada.

Finally, Waze says it’s adapting to users whose preferred routes aren’t the fastest option and that it will start displaying traffic information along these routes this month to both Android and iOS users.
