
Runway’s latest AI video generator brings giant cotton candy monsters to life


Screen capture of a Runway Gen-3 Alpha video generated with the prompt “A giant humanoid, made of fluffy blue cotton candy, stomping on the ground, and roaring to the sky, clear blue sky behind them.”

On Sunday, Runway announced a new AI video synthesis model called Gen-3 Alpha that’s still under development, but it appears to create video of similar quality to OpenAI’s Sora, which debuted earlier this year (and has also not yet been released). It can generate novel, high-definition video from text prompts that range from realistic humans to surrealistic monsters stomping the countryside.

Unlike Runway’s previous best model from June 2023, which could only create two-second-long clips, Gen-3 Alpha can reportedly create 10-second-long video segments of people, places, and things that have a consistency and coherency that easily surpasses Gen-2. If 10 seconds sounds short compared to Sora’s full minute of video, consider that the company is working with a shoestring budget of compute compared to more lavishly funded OpenAI—and actually has a history of shipping video generation capability to commercial users.

Gen-3 Alpha does not generate audio to accompany the video clips, and it’s highly likely that temporally coherent generations (those that keep a character consistent over time) are dependent on similar high-quality training material. But Runway’s improvement in visual fidelity over the past year is difficult to ignore.

AI video heats up

It’s been a busy couple of weeks for AI video synthesis in the AI research community, including the launch of the Chinese model Kling, created by Beijing-based Kuaishou Technology (sometimes called “Kwai”). Kling can generate two minutes of 1080p HD video at 30 frames per second with a level of detail and coherency that reportedly matches Sora.

Gen-3 Alpha prompt: “Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.”

Not long after Kling debuted, people on social media began creating surreal AI videos using Luma AI’s Luma Dream Machine. These videos were novel and weird but generally lacked coherency; we tested out Dream Machine and were not impressed by anything we saw.

Meanwhile, one of the original text-to-video pioneers, New York City-based Runway—founded in 2018—recently found itself the butt of memes that showed its Gen-2 tech falling out of favor compared to newer video synthesis models. That may have spurred the announcement of Gen-3 Alpha.

Gen-3 Alpha prompt: “An astronaut running through an alley in Rio de Janeiro.”

Generating realistic humans has always been tricky for video synthesis models, so Runway specifically shows off Gen-3 Alpha’s ability to create what its developers call “expressive” human characters with a range of actions, gestures, and emotions. That said, the company’s provided examples aren’t particularly expressive—mostly people slowly staring and blinking—but they do look realistic.

Provided human examples include generated videos of a woman on a train, an astronaut running through a street, a man with his face lit by the glow of a TV set, a woman driving a car, and a woman running, among others.

Gen-3 Alpha prompt: “A close-up shot of a young woman driving a car, looking thoughtful, blurred green forest visible through the rainy car window.”

The generated demo videos also include more surreal video synthesis examples, including a giant creature walking in a rundown city, a man made of rocks walking in a forest, and the giant cotton candy monster seen below, which is probably the best video on the entire page.

Gen-3 Alpha prompt: “A giant humanoid, made of fluffy blue cotton candy, stomping on the ground, and roaring to the sky, clear blue sky behind them.”

Gen-3 will power various Runway AI editing tools (one of the company’s most notable claims to fame), including Multi Motion Brush, Advanced Camera Controls, and Director Mode. It can create videos from text or image prompts.

Runway says that Gen-3 Alpha is the first in a series of models trained on a new infrastructure designed for large-scale multimodal training, taking a step toward the development of what it calls “General World Models,” which are hypothetical AI systems that build internal representations of environments and use them to simulate future events within those environments.


Google’s abuse of Fitbit continues with web app shutdown

Welcome to the Google lifestyle —

Users say the app, which is now the only Fitbit interface, lacks matching features.


Google’s abuse of the Fitbit brand continues with the shutdown of the web dashboard. Fitbit.com used to be both a storefront and a way for users to get a big-screen UI to sift through reams of fitness data. The store closed up shop in April, and now the web dashboard is dying in July.

In a post on the “Fitbit Community” forums, the company said: “Next month, we’re consolidating the Fitbit.com dashboard into the Fitbit app. The web browser will no longer offer access to the Fitbit.com dashboard after July 8, 2024.” That’s it. There’s no replacement or new fitness thing Google is more interested in; web functionality is just being removed. Google, we’ll remind you, used to be a web company. Now it’s a phone app or nothing. Google did the same thing to its Google Fit product in 2019, killing off the more powerful website in favor of an app focus.

Dumping the web app leaves a few holes in Fitbit’s ecosystem. The Fitbit app doesn’t support big screens like tablets, so this removes the only large-format interface for fitness data. Fitbit’s competitors all have big-screen interfaces: Garmin has a very similar website, and the Apple Watch has an iPad health app. This isn’t an improvement. To make matters worse, the app doesn’t match the features of the web dashboard, with many livid comments in the forums and on Reddit calling out the app’s deficiencies in graphing, achievement statistics, calorie counting, and logs.

The web dashboard.

Google bought Fitbit back in 2021 and has spent most of its time shutting down Fitbit features and making the products worse. Migrations to Google Accounts started in 2022. The Google Assistant was removed from Fitbit’s 2022 product line, the Sense 2 and Versa 4, when support existed on the previous models. Social features—a key part of fitness motivation for many—were killed off in 2023. Google has mostly focused on making Fitbit an app for the Pixel Watch.


Google’s Pixel 8 series gets USB-C to DisplayPort; desktop mode rumors heat up

You would think a phone called “Pixel” would be better at this —

Grab a USB-C to DisplayPort cable and newer Pixels can be viewed from your TV or monitor.

The Pixel 8.

Google’s June Android update is out, and it brings a few notable changes for Pixel phones. The most interesting is that the Pixel 8a, Pixel 8, and Pixel 8 Pro are all getting DisplayPort Alt Mode capabilities via their USB-C ports. This means you can go from USB-C to DisplayPort and plug right into a TV or monitor. The feature has been rumored forever and landed in some earlier Android betas, but now it’s finally shipping in production.

The Pixel 8’s initial display support is just a mirrored mode. You can either get an awkward vertical phone in the middle of your wide-screen display or turn the phone sideways and get a more reasonable layout. You could see it being useful for videos or presentations. It would be nice if it could do more.

Alongside this year-plus of DisplayPort rumors has been a steady drumbeat (again) for an Android desktop mode. Google has been playing around with the idea since Android 7.0 in 2016. In 2019, we were told it was just a development testing project, and it never shipped on any real devices. Work on Android’s desktop mode has been heating up, though, so maybe a second swing at the idea will result in an actual product.

Android 15’s in-development desktop mode.

Android Authority’s Mishaal Rahman has been tracking the new desktop mode for a while and now has it up and running. It looks just like a real desktop OS: every app gets a title bar with an app icon, a label, and maximize and close buttons. You can drag windows around and resize them; the OS supports automatic window tiling when you drag a window to the side of the screen; and there’s even a little drop-down menu under the title bar app icon. Pair that with tablet Android’s bottom app bar, and you would have a lot of what you need for a desktop OS.

Just like last time, we’ve got no clue if this will turn into a real product. The biggest Android partner, Samsung, certainly seems to think the idea is worth doing. Samsung’s “DeX” desktop mode has been a feature for years on its devices.

DisplayPort support is part of the June 2024 update and should roll out to devices soon.


Apple and OpenAI currently have the most misunderstood partnership in tech

He isn’t using an iPhone, but some people talk to Siri like this.

On Monday, Apple premiered “Apple Intelligence” during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we’ve seen confusion on social media about why Apple didn’t develop a cutting-edge GPT-4-like chatbot internally. Despite Apple’s year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple’s lack of innovation.

“This is really strange. Surely Apple could train a very good competing LLM if they wanted? They’ve had a year,” wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misinformation about it—saying things like, “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!”

While Apple has developed many technologies internally, it has never been shy about integrating outside tech in various ways, from acquisitions to built-in clients—in fact, Siri was initially developed by an outside company. But by making a deal with a company like OpenAI, which has been the source of a string of tech controversies recently, it’s understandable that some people question why Apple made the call—and what it might mean for the privacy of their on-device data.

“Our customers want something with world knowledge some of the time”

While Apple Intelligence largely utilizes its own Apple-developed LLMs, Apple also realized that there may be times when some users want to use what the company considers the current “best” existing LLM—OpenAI’s GPT-4 family. In an interview with The Washington Post, Apple CEO Tim Cook explained the decision to integrate OpenAI first:

“I think they’re a pioneer in the area, and today they have the best model,” he said. “And I think our customers want something with world knowledge some of the time. So we considered everything and everyone. And obviously we’re not stuck on one person forever or something. We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

The proposed benefit of Apple integrating ChatGPT into various experiences within iOS, iPadOS, and macOS is that it allows AI users to access ChatGPT’s capabilities without the need to switch between different apps—either through the Siri interface or through Apple’s integrated “Writing Tools.” Users will also have the option to connect their paid ChatGPT account to access extra features.

As an answer to privacy concerns, Apple says that before any data is sent to ChatGPT, the OS asks for the user’s permission, and the entire ChatGPT experience is optional. According to Apple, requests are not stored by OpenAI, and users’ IP addresses are hidden. Apparently, communication with OpenAI servers happens through API calls similar to using the ChatGPT app on iOS, and there is reportedly no deeper OS integration that might expose user data to OpenAI without the user’s permission.

We can only take Apple’s word for it at the moment, of course, and solid details about Apple’s AI privacy efforts will emerge once security experts get their hands on the new features later this year.

Apple’s history of tech integration

So you’ve seen why Apple chose OpenAI. But why look to outside companies for tech? In some ways, Apple building an external LLM client into its operating systems isn’t too different from what it has previously done with streaming video (the YouTube app on the original iPhone), Internet search (Google search integration), and social media (integrated Twitter and Facebook sharing).

The press has positioned Apple’s recent AI moves as Apple “catching up” with competitors like Google and Microsoft in terms of chatbots and generative AI. But playing it slow and cool has long been part of Apple’s M.O.—not necessarily introducing the bleeding edge of technology but improving existing tech through refinement and giving it a better user interface.


Google avoids jury trial by sending $2.3 million check to US government

Judge, no jury —

Google gets a bench trial after sending unexpected check to Justice Department.

At Google headquarters, the company's logo is seen on the glass exterior of a building.

Google has achieved its goal of avoiding a jury trial in one antitrust case after sending a $2.3 million check to the US Department of Justice. Google will face a bench trial, a trial conducted by a judge without a jury, after a ruling today that the preemptive check is big enough to cover any damages that might have been awarded by a jury.

“I am satisfied that the cashier’s check satisfies any damages claim,” US District Judge Leonie Brinkema said after a hearing in the Eastern District of Virginia on Friday, according to Bloomberg. “A fair reading of the expert reports does not support” a higher amount, Brinkema said.

The check was reportedly for $2,289,751. “Because the damages are no longer part of the case, Brinkema ruled a jury is no longer needed and she will oversee the trial, set to begin in September,” according to Bloomberg.

The payment was unusual, but so was the US request for a jury trial because antitrust cases are typically heard by a judge without a jury. The US argued that a jury should rule on damages because US government agencies were overcharged for advertising.

The US opposed Google’s motion to strike the jury demand in a filing last week, arguing that “the check it delivered did not actually compensate the United States for the full extent of its claimed damages” and that “the unilateral offer of payment was improperly premised on Google’s insistence that such payment ‘not be construed’ as an admission of damages.”

The government’s damages expert calculated damages that were “much higher” than the amount cited by Google, the US filing said. In last week’s filing, the higher damages amount sought by the government was redacted.

Lawsuit targets Google advertising

The US and eight states sued Google in January 2023 in a lawsuit related to the company’s advertising technology business. There are now 17 states involved in the case.

Google’s objection to a jury trial said that similar antitrust cases have been tried by judges because of their technical and often abstract nature. “To secure this unusual posture, several weeks before filing the Complaint, on the eve of Christmas 2022, DOJ attorneys scrambled around looking for agencies on whose behalf they could seek damages,” Google said.

The US and states’ lawsuit claimed that Google “corrupted legitimate competition in the ad tech industry” in a plan to “neutralize or eliminate ad tech competitors, actual or potential, through a series of acquisitions” and “wield its dominance across digital advertising markets to force more publishers and advertisers to use its products while disrupting their ability to use competing products effectively.”

The US government lawsuit said that federal agencies bought over $100 million in advertising since 2019 and aimed to recover treble damages for Google’s alleged overcharges on those purchases. But the government narrowed its claims to the ad purchases of just eight agencies, lowering the potential damages amount.

Google sent the check in mid-May. While the amount wasn’t initially public, Google said it contained “every dollar the United States could conceivably hope to recover under the damages calculation of the United States’ own expert.” Google also said it “continues to dispute liability and welcomes a full resolution by this Court of all remaining claims in the Complaint.”

US: We want more

The US disagreed that $2.3 million was the maximum it could recover. “Under the law, Google must pay the United States the maximum amount it could possibly recover at trial, which Google has not done,” the US said. “And Google cannot condition acceptance of that payment on its assertion that the United States was not harmed in the first place. In doing so, Google attempts to seize the strategic upside of satisfying the United States’ damages claim (potentially allowing it to avoid judgment by a jury) while at the same time avoiding the strategic downside of the United States being free to argue the common-sense inference that Google’s payment is, at minimum, an acknowledgment of the harm done to federal agency advertisers who used Google’s ad tech tools.”

In a filing on Wednesday, Google said the DOJ previously agreed that its claims amounted to less than $1 million before trebling and pre-judgment interest. The check sent by Google was for the exact amount after trebling and interest, the filing said. But the “DOJ now ignores this undisputed fact, offering up a brand new figure, previously uncalculated by any DOJ expert, unsupported by the record, and never disclosed,” Google told the court.

Siding with Google at today’s hearing, Brinkema “said the amount of Google’s check covered the highest possible amount the government had sought in its initial filings,” the Associated Press reported. “She likened receipt of the money, which was paid unconditionally to the government regardless of whether the tech giant prevailed in its arguments to strike a jury trial, as equivalent to ‘receiving a wheelbarrow of cash.'”

While the US lost its attempt to obtain more damages than Google offered, the lawsuit also seeks an order declaring that Google illegally monopolized the market. The complaint requests a breakup in which Google would have to divest “the Google Ad Manager suite, including both Google’s publisher ad server, DFP, and Google’s ad exchange, AdX.”


Google’s AI Overviews misunderstand why people use Google

Robot hand holding a glue bottle over a pizza and tomatoes.

Last month, we looked into some of the most incorrect, dangerous, and downright weird answers generated by Google’s new AI Overviews feature. Since then, Google has offered a partial apology/explanation for generating those kinds of results and has reportedly rolled back the feature’s rollout for at least some types of queries.

But the more I’ve thought about that rollout, the more I’ve begun to question the wisdom of Google’s AI-powered search results in the first place. Even when the system doesn’t give obviously wrong results, condensing search results into a neat, compact, AI-generated summary seems like a fundamental misunderstanding of how people use Google in the first place.

Reliability and relevance

When people type a question into the Google search bar, they only sometimes want the kind of basic reference information that can be found on a Wikipedia page or corporate website (or even a Google information snippet). Often, they’re looking for subjective information where there is no one “right” answer: “What are the best Mexican restaurants in Santa Fe?” or “What should I do with my kids on a rainy day?” or “How can I prevent cheese from sliding off my pizza?”

The value of Google has always been in pointing you to the places it thinks are likely to have good answers to those questions. But it’s still up to you, as a user, to figure out which of those sources is the most reliable and relevant to what you need at that moment.

  • This wasn’t funny when the guys at Pep Boys said it, either.

  • Weird Al recommends “running with scissors” as well!

  • This list of steps actually comes from a forum thread response about doing something completely different.

  • An island that’s part of the mainland?

  • If everything’s cheaper now, why does everything seem so expensive?

  • Pretty sure this Truman was never president…

For reliability, any savvy Internet user makes use of countless context clues when judging a random Internet search result. Do you recognize the outlet or the author? Is the information from someone with seeming expertise/professional experience or a random forum poster? Is the site well-designed? Has it been around for a while? Does it cite other sources that you trust, etc.?

But Google also doesn’t know ahead of time which specific result will fit the kind of information you’re looking for. When it comes to restaurants in Santa Fe, for instance, are you in the mood for an authoritative list from a respected newspaper critic or for more off-the-wall suggestions from random locals? Or maybe you scroll down a bit and stumble on a loosely related story about the history of Mexican culinary influences in the city.

One of the unseen strengths of Google’s search algorithm is that the user gets to decide which results are the best for them. As long as there’s something reliable and relevant in those first few pages of results, it doesn’t matter if the other links are “wrong” for that particular search or user.


Google’s AI Overview is flawed by design, and a new company blog post hints at why

guided by voices —

Google: “There are bound to be some oddities and errors” in system that told people to eat rocks.

The Google “G” logo surrounded by whimsical characters, all of which look stunned and surprised.

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by publishing a follow-up blog post titled “AI Overviews: About last week.” In the post, attributed to Liz Reid, VP and head of Google Search, the company formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company doesn’t seem to realize it is admitting as much.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google’s web ranking systems. Right now, it’s an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is “highly effective” and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.

Drawing inaccurate conclusions from the web

On Wednesday morning, Google’s AI Overview was erroneously telling us the Sony PlayStation and Sega Saturn were available in 1993.

Given the circulating AI Overview examples, Google almost apologizes in the post and says, “We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously.” But Reid, in an attempt to justify the errors, then goes into some very revealing detail about why AI Overviews provides erroneous information:

AI Overviews work very differently than chatbots and other LLM products that people may have tried out. They’re not simply generating an output based on training data. While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional “search” tasks, like identifying relevant, high-quality results from our index. That’s why AI Overviews don’t just provide text output, but include relevant links so people can explore further. Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results.

This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might.

Here we see the fundamental flaw of the system: “AI Overviews are built to only show information that is backed up by top web results.” The design is based on the false assumption that Google’s page-ranking algorithm favors accurate results and not SEO-gamed garbage. Google Search has been broken for some time, and now the company is relying on those gamed and spam-filled results to feed its new AI model.

Even if the AI model draws from a more accurate source, as with the 1993 game console search seen above, Google’s AI language model can still make inaccurate conclusions about the “accurate” data, confabulating erroneous information in a flawed summary of the information available.

Generally ignoring the folly of basing its AI results on a broken page-ranking algorithm, Google’s blog post instead attributes the commonly circulated errors to several other factors, including users making nonsensical searches “aimed at producing erroneous results.” Google does admit faults with the AI model, like misinterpreting queries, misinterpreting “a nuance of language on the web,” and lacking sufficient high-quality information on certain topics. It also suggests that some of the more egregious examples circulating on social media are fake screenshots.

“Some of these faked results have been obvious and silly,” Reid writes. “Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression. Those AI Overviews never appeared. So we’d encourage anyone encountering these screenshots to do a search themselves to check.”

(No doubt some of the social media examples are fake, but it’s worth noting that any attempts to replicate those early examples now will likely fail because Google will have manually blocked the results. And it is potentially a testament to how broken Google Search is if people believed extreme fake examples in the first place.)

While addressing the “nonsensical searches” angle in the post, Reid uses the example search, “How many rocks should I eat each day,” which went viral in a tweet on May 23. Reid says, “Prior to these screenshots going viral, practically no one asked Google that question.” And since there isn’t much data on the web that answers it, she says there is a “data void” or “information gap” that was filled by satirical content found on the web, and the AI model found it and pushed it as an answer, much like Featured Snippets might. So basically, it was working exactly as designed.

A screenshot of the AI Overview query “How many rocks should I eat each day” that went viral on X last week.


Google Chrome’s plan to limit ad blocking extensions kicks off next week

Firefox is free, you know —

Chrome’s Manifest V3 transition is here. First up are warnings for any V2 extensions.

Someone really likes Google Chrome.

Google Chrome will be shutting down its older, more capable extension system, Manifest V2, in favor of exclusively using the more limited Manifest V3. The deeply controversial Manifest V3 system was announced in 2019, and the full switch has been delayed a million times, but now Google says it’s really going to make the transition: As previously announced, the phase-out of older Chrome extensions is starting next week.

Google Chrome has been working toward a plan for a new, more limited extension system for a while now. Google says it created “Manifest V3” extensions with the goal of “improving the security, privacy, performance, and trustworthiness of the extension ecosystem.”

Other groups don’t agree with Google’s description, like the Electronic Frontier Foundation (EFF), which called Manifest V3 “deceitful and threatening” back when it was first announced in 2019, saying the new system “will restrict the capabilities of web extensions—especially those that are designed to monitor, modify, and compute alongside the conversation your browser has with the websites you visit.” It has a whole article out detailing how Manifest V3 won’t help security.

Comments from the Firefox team have also cast doubt on Google’s justification for Manifest V3. In a talk about the implications of Manifest V3, Philipp Kewisch, Firefox’s Add-ons operations manager, said, “for malicious add-ons, we feel for Firefox it has been on a manageable level, and since the add-ons are mostly interested in grabbing data, they can still do that with the current web request API [in Manifest V3].” Firefox plans to support Manifest V3 because Chrome is the world’s most popular browser, and it wants extensions to be cross-browser compatible, but it has no plans to turn off support for Manifest V2.

A big source of skepticism around Manifest V3 is its limitations on “content filtering”—the APIs that ad blockers and anti-tracking extensions use to fight ad companies like Google. Google, which makes about 77 percent of its revenue from advertising, has not published a serious explanation of why Manifest V3 limits content filtering, and it’s not clear how those limits align with the stated goals of “improving the security, privacy, performance and trustworthiness” of extensions. As Kewisch said, the primary goal of malicious extensions is to spy on users and slurp up data, which has nothing to do with content filtering. All of this is happening while Google builds an ad system directly into Chrome and Google properties like YouTube make aggressive moves against ad blockers.

The initial version of Manifest V3 was detailed in 2019, and since then Google has gone back and forth with the extension community and made some concessions. Google says it raised the number of filtering rulesets allowed by Manifest V3, which should help ad blockers. One dramatic change is that filtering extensions can no longer update their rulesets themselves; any filtering update requires a new version submitted to the Chrome extension store, which includes a potentially weeks-long security review. In the cat-and-mouse game of ad blocking, you can imagine how this could let YouTube change its ad system instantly, while any counterpunches from ad blockers could be delayed by weeks. Google now says it’s possible for extensions to skip the review process for “safe” ruleset changes, but even this is limited to “static” rulesets, not more powerful “dynamic” ones.
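To make the static-versus-dynamic distinction concrete, here is a minimal sketch of how a Manifest V3 filtering extension declares a static ruleset with the declarativeNetRequest API (the extension name, filenames, and blocked URL below are illustrative, not from any real extension). In manifest.json:

```json
{
  "manifest_version": 3,
  "name": "Example Filter",
  "version": "1.0",
  "permissions": ["declarativeNetRequest"],
  "declarative_net_request": {
    "rule_resources": [
      { "id": "ads", "enabled": true, "path": "rules.json" }
    ]
  }
}
```

And in the bundled rules.json:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

Because rules.json ships inside the extension package, changing it normally means publishing a new version and waiting out store review. Dynamic rules, added at runtime via `chrome.declarativeNetRequest.updateDynamicRules()`, avoid that delay, but Google’s review-skipping concession applies only to the static kind.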

In a comment to The Verge last year, the senior staff technologist at the EFF, Alexei Miagkov, summed up Google’s public negotiations with the extension community well, saying, “These are helpful changes, but they are tweaks to a limited-by-design system. The big problem remains the same: if extensions can’t innovate, users lose and trackers win… We now all depend on Google to keep evolving the API to keep up with advertisers and trackers.”

Google says, “over 85% of actively maintained extensions in the Chrome Web Store are running Manifest V3, and the top content filtering extensions all have Manifest V3 versions available.” The company doesn’t mention that the most popular ad blocker’s Manifest V3 version is “uBlock Origin Lite,” with the “Lite” indicating that it is inferior to the Manifest V2 version.

As for how this phase-out will actually go, Google says that next week, the beta versions of Chrome will start showing warning banners on the extensions page for any Manifest V2 extensions users have installed. V2 extensions will also lose their “featured” status in the Chrome extension store. Google says extensions will start to be disabled in “the coming months.” For a short period, users will be able to turn them back on from the extensions page, but Google says that “over time, this toggle will go away as well.” At that point, you can either go hunting through the Chrome Web Store for alternatives or switch to Firefox.



Google Cloud explains how it accidentally deleted a customer account

Flubbing the input —

UniSuper’s 647,000 users faced two weeks of downtime because of a Google Cloud bug.


Earlier this month, Google Cloud experienced one of its biggest blunders ever when UniSuper, a $135 billion Australian pension fund, had its Google Cloud account wiped out due to some kind of mistake on Google’s end. At the time, UniSuper indicated it had lost everything it had stored with Google, even its backups, and that caused two weeks of downtime for its 647,000 members. There were joint statements from the Google Cloud CEO and UniSuper CEO on the matter, a lot of apologies, and presumably a lot of worried customers who wondered if their retirement fund had disappeared.

In the immediate aftermath, the explanation we got was that “the disruption arose from an unprecedented sequence of events whereby an inadvertent misconfiguration during provisioning of UniSuper’s Private Cloud services ultimately resulted in the deletion of UniSuper’s Private Cloud subscription.” Two weeks later, Google Cloud’s internal review of the problem is finished, and the company has a blog post up detailing what happened.

Google has a “TL;DR” at the top of the post, and it sounds like a Google employee got an input wrong.

During the initial deployment of a Google Cloud VMware Engine (GCVE) Private Cloud for the customer using an internal tool, there was an inadvertent misconfiguration of the GCVE service by Google operators due to leaving a parameter blank. This had the unintended and then unknown consequence of defaulting the customer’s GCVE Private Cloud to a fixed term, with automatic deletion at the end of that period. The incident trigger and the downstream system behavior have both been corrected to ensure that this cannot happen again.

The most shocking thing about Google’s blunder was the sudden and irreversible deletion of a customer account. Shouldn’t there be protections, notifications, and confirmations in place to never accidentally delete something? Google says there are, but those warnings are for a “customer-initiated deletion” and didn’t work when using the admin tool. Google says, “No customer notification was sent because the deletion was triggered as a result of a parameter being left blank by Google operators using the internal tool, and not due to a customer deletion request. Any customer-initiated deletion would have been preceded by a notification to the customer.”

During its many downtime updates, UniSuper indicated it did not have access to Google Cloud backups and had to dig into a third-party (presumably less up-to-date) store to get back up and running. In the frenzy of the recovery period, UniSuper said that “UniSuper had duplication in two geographies as a protection against outages and loss. However, when the deletion of UniSuper’s Private Cloud subscription occurred, it caused deletion across both of these geographies… UniSuper had backups in place with an additional service provider. These backups have minimized data loss, and significantly improved the ability of UniSuper and Google Cloud to complete the restoration.”

In its post-mortem, Google now says, “Data backups that were stored in Google Cloud Storage in the same region were not impacted by the deletion, and, along with third-party backup software, were instrumental in aiding the rapid restoration.” It’s hard to square these two statements, especially given the two-week recovery period. The whole point of a backup is quick restoration: either UniSuper’s Google Cloud backups survived the deletion but weren’t effective, leading to two weeks of downtime, or they would have been effective had they not been partially or completely wiped out.

Google stressed many times in the post that this issue affected a single customer, has never happened before, should never happen again, and is not a systemic problem with Google Cloud. Here’s the entire “remediation” section of the blog post:

Google Cloud has since taken several actions to ensure that this incident does not and can not occur again, including:

  1. We deprecated the internal tool that triggered this sequence of events. This aspect is now fully automated and controlled by customers via the user interface, even when specific capacity management is required.
  2. We scrubbed the system database and manually reviewed all GCVE Private Clouds to ensure that no other GCVE deployments are at risk.
  3. We corrected the system behavior that sets GCVE Private Clouds for deletion for such deployment workflows.

Google says Cloud still has “safeguards in place with a combination of soft delete, advance notification, and human-in-the-loop, as appropriate,” and it confirmed these safeguards all still work.



Google is killing off the messaging service inside Google Maps

Going out of business —

Google Maps has had its own chat platform since 2018, but it’s shutting down in July.

  • Whether you want to call it “Google Business Messaging” or “Google Business Profile Chat,” the chat buttons in Google Maps and Search are going away.

    Google

  • This is the 2018 version of Google Maps Messaging, which is when it was first built into the Google Maps app.

    Google

  • Messages used to have a top-tier spot in the navigation panel.

    Google

  • In the current UI, Messages lives in the “Updates” tab.

    Ron Amadeo

  • You used to be able to reply to Google Maps Messages with Google Allo.

Google is killing off a messaging service! This one is the odd “Google Business Messaging” service—basically an instant messaging client that is built into Google Maps. If you looked up a participating business in Google Maps or Google Search on a phone, the main row of buttons in the place card would read something like “Call,” “Chat,” “Directions,” and “Website.” That “Chat” button is the service we’re talking about. It would launch a full messaging interface inside the Google Maps app, and businesses were expected to use it for customer service purposes. Google’s deeply dysfunctional messaging strategy might lead people to joke about a theoretical “Google Maps Messaging” service, but it already exists and has existed for years, and now it’s being shut down.

Search Engine Land’s Barry Schwartz was the first to spot the shutdown emails being sent out to participating businesses. Google has two different support articles up for a shutdown of both “Google Business Profile Chat” and “Google Business Messages,” which appear to just be the same thing with different names. On July 15, 2024, the ability to start a new chat will be disabled, and on July 31, 2024, both services will be shut down. Google is letting businesses download past chat conversations via Google Takeout.

Google’s Maps messaging service was Google Messaging Service No. 16 in our giant History of Google Messaging article. The feature has undergone many changes, so it’s a bit hard to follow. The Google Maps Messaging button launched in 2017, when it would have been called “Google My Business Chat.” This wasn’t quite its own service yet—the messaging button would either launch your SMS app or boot into another dead Google messaging product, Google Allo!

The original SMS option was the easy path for small businesses with a single store, but SMS is tied to a single physical phone. If you’re a bigger business and want to take on the task of doing customer service across multiple stores, at the scale of Google Maps, that’s going to be a multi-person job. The Google Allo back-end (which feels like it was the driving force behind creating this project in the first place) would let you triage messages to multiple people. Allo was one year into its 2.5-year lifespan when this feature launched, though, so things would have to change soon before Allo’s 2019 shutdown date.

Knowing that the announcement of Allo’s death was a month away, Google started making Maps into its own standalone messaging service in November 2018. Previously, the chat button would always launch an outside app (either SMS or Allo), but with this 2018 update, Maps got its own instant messaging UI built right into the app. “Messages” became a top-level item in the navigation drawer (it would later move to the “Updates” tab), and a third-party app was no longer needed. On the business side of things, a new “Google My Business” app became the customer service interface for all these messages. Allo’s 2019 shutdown disabled the SMS option for small businesses, and everyone now had to use the Google My Business app. Maps was officially a new messaging service. Google also created the “Business Messages API” so big businesses could plug Maps messaging into some kind of customer management app.

It does not sound like Google is going to replace business messaging with anything in the near future, so the Chat buttons in Google Maps and Search will be going away. In the endless pantheon of Google messaging solutions, the Google developer page also mentions an “RCS Business Messaging” platform that launches the Google Messages app. This service does not seem to be built into any existing Google products, though, and isn’t mentioned as an alternative in Google’s shutdown announcement. Google only suggests that businesses “redirect customers to your alternative communication channels,” but those links won’t be getting premium placement in Google’s products.

Business messaging is a pretty well-established market, and the Big Tech companies with competent messaging strategies are involved somehow. On iOS, there’s Apple’s iMessage-based Messages for Business, which also has a chat button layout in Apple Maps. Meta has both WhatsApp Business Messaging and Facebook Messenger’s Meta Business Messaging. There are also standalone businesses like Twilio.

Listing image by Google / Ron Amadeo



Google accused of secretly tracking drivers with disabilities


Google needs to pump the brakes when it comes to tracking sensitive information shared with DMV sites, a new lawsuit suggests.

Filing a proposed class-action suit in California, Katherine Wilson has accused Google of using Google Analytics and DoubleClick trackers on the California DMV site to unlawfully obtain information about her personal disability without her consent.

This, Wilson argued, violated the Driver’s Privacy Protection Act (DPPA), as well as the California Invasion of Privacy Act (CIPA), and impacted perhaps millions of drivers who had no way of knowing Google was collecting sensitive information shared only for DMV purposes.

“Google uses the personal information it obtains from motor vehicle records to create profiles, categorize individuals, and derive information about them to sell its customers the ability to create targeted marketing and advertising,” Wilson alleged.

According to Wilson, California’s DMV “encourages” drivers “to use its website rather than visiting one of the DMV’s physical locations” without telling drivers that Google has trackers all over its site.

Likely as a result of that push for convenience, the DMV reported a record number of online transactions in 2020, Wilson’s complaint said. And people with disabilities have taken advantage of that convenience: in 2023, approximately “40 percent of the 1.6 million disability parking placard renewals occurred online.”

Wilson most recently visited the DMV site last summer to renew her disability parking placard online. At the time, she did not know that Google obtained her personal information when she filled out her application, communicated directly with the DMV, searched on the site, or clicked on various URLs, all of which she said revealed that she either had a disability or believed she had one.

Her complaint alleged that Google secretly gathers information about the contents of the DMV’s online users’ searches, logging sensitive keywords like “teens,” “disabled drivers,” and any “inquiries regarding disabilities.”

Google “knowingly” obtained this information, Wilson alleged, to quietly expand user profiles for ad targeting, “intentionally” disregarding DMV website users’ “reasonable expectation of privacy.”

“Google then uses the personal information and data to generate revenue from the advertising and marketing services that Google sells to businesses and individuals,” Wilson’s complaint alleged. “That Plaintiff and Class Members would not have consented to Google obtaining their personal information or learning the contents of their communications with the DMV is not surprising.”

Congressman James P. Moran, who sponsored the DPPA in 1994, made it clear that the law was enacted specifically to keep marketers from taking advantage of computers making it easy to “pull up a person’s DMV record” with the “click of a button.”

Even back then, some people were instantly concerned about any potential “invasion of privacy,” Moran said, noting that “if you review the way in which people are classified by direct marketers based on DMV information, you can see why some individuals might object to their personal information being sold.”



Google’s “AI Overview” can give false, misleading, and dangerous answers

This is fine.

Getty Images

If you use Google regularly, you may have noticed the company’s new AI Overviews providing summarized answers to some of your questions in recent days. If you use social media regularly, you may have come across many examples of those AI Overviews being hilariously or even dangerously wrong.

Factual errors can pop up in existing LLM chatbots as well, of course. But the potential damage that can be caused by AI inaccuracy gets multiplied when those errors appear atop the ultra-valuable web real estate of the Google search results page.

“The examples we’ve seen are generally very uncommon queries and aren’t representative of most people’s experiences,” a Google spokesperson told Ars. “The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web.”

After looking through dozens of examples of Google AI Overview mistakes (and replicating many ourselves for the galleries below), we’ve noticed a few broad categories of errors that show up again and again. Consider this a crash course in some of the current weak points of Google’s AI Overviews and a look at areas of concern for the company to improve as the system continues to roll out.

Treating jokes as facts

  • The bit about using glue on pizza can be traced back to an 11-year-old troll post on Reddit. (via)

    Kyle Orland / Google

  • This wasn’t funny when the guys at Pep Boys said it, either. (via)

    Kyle Orland / Google

  • Weird Al recommends “running with scissors” as well! (via)

    Kyle Orland / Google

Some of the funniest examples of Google’s AI Overview failing come, ironically enough, when the system doesn’t realize a source online was trying to be funny. An AI answer that suggested using “1/8 cup of non-toxic glue” to stop cheese from sliding off pizza can be traced back to someone who was obviously trying to troll an ongoing thread. A response recommending “blinker fluid” for a turn signal that doesn’t make noise can similarly be traced back to a troll on the Good Sam advice forums, which Google’s AI Overview apparently trusts as a reliable source.

In regular Google searches, these jokey posts from random Internet users probably wouldn’t be among the first answers someone saw when clicking through a list of web links. But with AI Overviews, those trolls were integrated into the authoritative-sounding data summary presented right at the top of the results page.

What’s more, there’s nothing in the tiny “source link” boxes below Google’s AI summary to suggest either of these forum trolls are anything other than good sources of information. Sometimes, though, glancing at the source can save you some grief, such as when you see a response calling running with scissors “cardio exercise that some say is effective” (that came from a 2022 post from Little Old Lady Comedy).

Bad sourcing

  • Washington University in St. Louis says this ratio is accurate, but others disagree. (via)

    Kyle Orland / Google

  • Man, we wish this fantasy remake was real. (via)

    Kyle Orland / Google

Sometimes Google’s AI Overview offers an accurate summary of a non-joke source that happens to be wrong. When asked how many signers of the Declaration of Independence owned slaves, for instance, Google’s AI Overview accurately summarizes a Washington University in St. Louis library page saying that one-third “were personally enslavers.” But the response ignores contradictory sources like a Chicago Sun-Times article saying the real answer is closer to three-quarters. I’m not enough of a history expert to judge which authoritative-seeming source is right, but at least one historian online took issue with the Google AI answer’s sourcing.

Other times, a source that Google trusts as authoritative is really just fan fiction. That’s the case for a response that imagined a 2022 remake of 2001: A Space Odyssey, directed by Steven Spielberg and produced by George Lucas. A savvy web user would probably do a double-take before citing Fandom’s “Idea Wiki” as a reliable source, but a careless AI Overview user might not notice where the AI got its information.
