Another day, another dead Google product. The Google One VPN service we complained about last week is headed to the chopping block. Google’s support documents haven’t been updated yet, but Android Authority reported on an email going out to Google One users informing them of the shutdown. 9to5Google also got confirmation of the shutdown from Google.
The Google One VPN launched in 2020 as a bonus feature for paying Google One subscribers. Google One is Google’s cloud storage subscription plan that lets users buy extra storage for Gmail, Drive, and Google Photos. In 2020, the VPN was exclusive to the pricey 2TB tier for $10 a month, but it was later brought to all Google One tiers, including the entry-level $2-per-month option.
By our count, Google has three VPN products, though “products” might be too strong a word since they are all essentially the same thing—VPN market segments? There’s the general Google One VPN for Android, iOS, Windows, and Mac—this is the one that’s dying. There’s also the “Pixel VPN by Google One,” which came with Pixel phones (the “Google One” branding here makes no sense since you didn’t have to subscribe to Google One) and the Google Fi VPN that’s exclusive to Google Fi Android and iOS customers.
The Google One VPN that’s shutting down was by far the most flexible, with the widest platform support, and its shutdown represents Google ending VPN support for Windows and Mac. The Pixel and Fi VPNs will keep running, possibly with new branding.
A Google spokesperson told 9to5Google the Google One VPN is shutting down because “people simply weren’t using it.” The Windows client was also super buggy, and it’s probably easier to shut it down rather than fix it. There’s no shutdown date yet, but a message on this page says the VPN will be phased out “later in 2024.”
Google is joining the custom Arm data center chip trend. Google Cloud, the cloud platform division that competes with Amazon Web Services and Microsoft Azure, is following in the footsteps of those companies and rolling out its own Arm-based chip designs. Google says its new “Google Axion Processors” are “custom Arm-based CPUs designed for the data center” and offer “industry-leading performance and energy efficiency.”
Google has been developing custom data center accelerators for things like AI and video transcoding, but this is the first time the company is making a CPU. Google says it’s seeing “50% better performance and up to 60% better energy-efficiency than comparable current-generation x86-based instances.”
Google’s “Axion” chip is based on the Arm Neoverse V2 CPU, so just like the Arm chips we see in mobile devices, these “custom” designs closely follow blueprints that Arm makes available. Google says it did include a custom microcontroller called “Titanium,” which it says handles networking, security, and storage I/O.
This is Google Cloud, so you won’t be buying anything with an “Axion” chip in it. You can pay for cloud processing that uses the new CPU, with Google naming “Google Compute Engine, Google Kubernetes Engine, Dataproc, Dataflow, Cloud Batch, and more” as services that will use the new chip. Some of these services bill by “vCPU” usage, so a faster CPU could theoretically lead to lower prices, but Google doesn’t spell that out in the post. Internally, Google is also moving BigTable, Spanner, BigQuery, Blobstore, Pub/Sub, Google Earth Engine, and the YouTube Ads platform from its current Arm servers to the new custom chips soon.
It’s a bit strange to tout a new cloud infrastructure CPU when the whole point of services like AWS and Google Cloud is that you don’t have to worry about the server. The services you were running will continue to run, while companies like Google, Amazon, and Microsoft can take care of all that complicated hardware and network data center stuff. Google says that Axion VMs will be available as a “preview” in “the coming months” and that Cloud customers can sign up for access.
Is that Google Slides? Nope, it’s Google Vids, the new video editor that seems to just make souped-up slideshows.
Google’s demo starts with an existing slideshow and then generates an outline.
Choose a theme; they all look like PowerPoint decks.
Write a script, preferably with the help of Google Gemini.
You can record a voiceover or pick from Google’s robot voices.
This is a Google Workspace app, so there are lots of real-time collaboration features, like these live mouse cursors that were brought over from Slides.
Comments work, too.
It’s interesting that you get a “stock media” library where apps like Slides would use generative AI images.
Record a talk from your webcam.
Embed your video in the slideshow.
If you had asked me before what Google’s video editor app was, I would say “YouTube Studio,” but now Google Workspace has a new productivity app called “Google Vids.” Normally a video editor is considered a secondary application in many productivity suites, but Google apparently imagines Vids as a major pillar of Workspace, saying Vids is an “all-in-one video creation app for work that will sit alongside Docs, Sheets and Slides.” So, that is an editor for documents, spreadsheets, presentations, and videos?
Google’s demo of the new video editor pitches the product not for YouTube videos or films but more as a corporate super slideshow for things like training materials or product demos. Really, this “video editor” almost looks like it could completely replace Google Slides since the interface is just Slides but with a video timeline instead of a slideshow timeline.
Google’s example video creates a “sales training video” that starts with a Slides presentation as the basic outline. You start with an outline editor, where each slideshow page gets its own major section. Google then has video “styles” you can pick from, which all seem very PowerPoint-y with a big title, subheading, and a slot for some kind of video. Google then wants you to write a script and either read it yourself or have a text-to-speech voice read the script. A “stock media” library lets you fill in some of those video slots with generic corporate imagery like a video of a sunset, choose background music, and add a few pictures. You can also fire up your webcam and record something, sort of like a pre-canned Zoom meeting. After that, it’s a lot of the usual Google productivity app features: real-time editing collaboration with visible mouse cursors from each participant and a stream of comments.
Like all Google products after the rise of OpenAI, Google pitches Vids as an “AI-powered” video editor, even though there didn’t seem to be many generative AI features in the presentation. The videos, images, and music were “stock” media, not AI-generated inventions (Slides can generate images, but that wasn’t in this demo). There’s nothing in here like OpenAI’s “Sora,” which generates new videos out of its training data. There’s probably a Gemini-powered “help me write” feature for the script, and Google describes the initial outline as “generated” from your starting Slides presentation, but that seemed to be it.
Google says Vids is being released to “Workspace Labs” in June, so you’ll be able to opt in to testing it.
Chipolo’s trackers. The keychain tracker takes a CR2032 battery; the card is not rechargeable.
Pebblebee’s trackers are all rechargeable.
Google’s “Find My Device” app.
After an announcement that ended up being a year early, Android’s version of Tile/AirTags is ready to launch. Google has been gearing up on the software side of things to enable a Bluetooth tracking network on Android, and the company’s two tracking tag hardware partners, Pebblebee and Chipolo, now have ship dates. The two companies each have a press release today, with Pebblebee saying its trackers will ship in “late May,” while Chipolo says it will ship “after May 27th.” Google has a blog post out, too, promising “additional Bluetooth tags from Eufy, Jio, Motorola and more” later this year.
Both sets of devices have been up for preorder for a year now, and it doesn’t seem like anything has changed since then. Both companies are offering little Bluetooth trackers in a keychain tag or credit card format, and Pebblebee has a third stick-on tag format. They’ll all be anonymously tracked by Android’s 3 billion-device Bluetooth tracker network, and the device owner will be able to see them in Google’s “Find My Device” app.
Chipolo’s “One Point” key chain tag is the only thing that takes a CR2032 coin cell battery, while the company’s credit card tracker is not rechargeable. Pebblebee’s key chain, credit card, and stick-on tracker all have rechargeable batteries, including the wallet card, which is very rare! Nothing has UWB for precise location tracking—everything uses a speaker. Both companies sell multiple SKUs of what look like the exact same product but are locked to Google’s or Apple’s network—no switching allowed.
These were all originally supposed to come out in 2023. Google’s patch notes say the tracking network shipped in Android in December 2022, even though nothing is using it. The company has actually been waiting on Apple. In May 2023, Google and Apple announced a joint standard for “unknown tracker” alerts. While the two networks will not be compatible, they will team up to alert users if a tracker is being used to stalk them. All this hardware was announced a week later, but in July 2023, Google shipped what a spokesperson called “a custom implementation” for AirTags (enabling Android phones to alert users to an unknown AirTag), and the company said it wouldn’t enable its tracking network until the joint tracking detection standard with Apple was ready. It looks like Apple will do that in iOS 17.5, which is expected to be out—you guessed it—at the end of May, so these tags can finally ship.
9:00pm update: A Google spokesperson told us Google’s July release of Android’s unwanted AirTag detection is “a custom implementation” and not the joint standard.
Will Google ever launch its “Find My” network? The Android ecosystem was supposed to have its own version of Apple’s AirTags by now. Google has had a crowd-sourced device-tracking network sitting dormant on 3 billion Android phones since December 2022. Partners have been ready to go with Bluetooth tag hardware since May 2023! This was all supposed to launch a year ago, but Google has been in a holding pattern. The good news is we’re finally seeing some progress after a year of silence.
The reason for Google’s lengthy delay is actually Apple. A week before Google’s partners announced their Android network Bluetooth tags, Google and Apple jointly announced a standard to detect “unknown” Bluetooth trackers and show users alerts if their phone thinks they’re being stalked. Since you can constantly see an AirTag’s location, the tags can be used for stalking by just covertly slipping one into a bag or car; nobody wants that, so everyone’s favorite mobile duopoly is teaming up.
Google did its half of this partnership and rolled out AirTag detection in July 2023. At the same time, Google also announced: “We’ve made the decision to hold the rollout of the Find My Device network until Apple has implemented protections for iOS.” Surely Apple would be burning the midnight oil to launch Android tag detection on iOS as soon as possible so that Google could start competing with AirTags.
It looks like iOS 17.5 is the magic version Google is waiting for. The first beta was recently released to testers, and 9to5Mac recently spotted strings for detecting “unwanted” non-Apple tracking devices that were suddenly following you around. This 17.5 update still needs to ship, and the expectation is sometime in May. That would be 11 months after Google’s release.
Just like AirTags, and the Tile network before it, the goal of the project is to enable helpful little Bluetooth tracking tags that can tell you where your stuff is. These Bluetooth tags are super low-power and aim to last for a year on a small battery, which means they don’t have the power to spare for GPS. They can still report their location, though, because they manage to “borrow” the GPS chip of any compatible smartphone in range. Your phone scans for any Bluetooth tags, even ones you don’t own, then notes their approximate location and uploads it to the cloud. This is all done anonymously, and only the owner of the tag can see its location, but everyone in the network pitches in to create a crowdsourced, worldwide thing-tracking network.
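For a rough sense of the data flow, here is a minimal, hypothetical sketch of what a single "sighting" report in a crowdsourced finder network might look like. The names, fields, and placeholder encryption are invented for illustration; this is not Google's or Apple's actual protocol.

```python
# Hypothetical sketch of a crowdsourced finder-network sighting report.
# Not Google's or Apple's real protocol; all names and the placeholder
# "encryption" are invented for illustration.
import time
from dataclasses import dataclass


@dataclass
class TagAdvertisement:
    # A tag broadcasts only an opaque, rotating identifier, so passersby
    # can't recognize or follow a specific tag over time.
    rotating_id: bytes


@dataclass
class SightingReport:
    # What a nearby phone uploads after hearing a tag it doesn't own.
    rotating_id: bytes
    encrypted_location: bytes  # meant to be readable only by the tag's owner
    timestamp: float


def encrypt_for_owner(rotating_id: bytes, location: tuple) -> bytes:
    # Placeholder: a real network would use public-key cryptography tied to
    # the tag, so neither the finder's phone nor the server can read the
    # plaintext location.
    return repr(location).encode() + b"|" + rotating_id


def report_sighting(adv: TagAdvertisement, phone_gps_fix: tuple) -> SightingReport:
    # The finder's phone lends its own GPS fix to the tag, encrypts it for
    # the owner, and hands the result off for anonymous upload to the cloud.
    return SightingReport(
        rotating_id=adv.rotating_id,
        encrypted_location=encrypt_for_owner(adv.rotating_id, phone_gps_fix),
        timestamp=time.time(),
    )


if __name__ == "__main__":
    adv = TagAdvertisement(rotating_id=b"\x8a\x11\x02")  # heard during a BLE scan
    print(report_sighting(adv, (33.4255, -111.94)))      # approximate Tempe, AZ fix
```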
Tile started the whole idea by having any user with the Tile app running do anonymous location uploads for every other Tile in earshot. Nothing can compete with the scale of Apple’s version, though, which runs on every iThing out there, and the bigger size of the network makes it a lot more reliable. Android will have an even bigger network if it ever launches. In an ideal world, Android and iOS would just work together to perfectly track every Bluetooth tracker regardless of make and model, but they’re only teaming up for stalking detection.
Google gears up for launch
With the impending iOS release, Google seems to be getting its ducks in a row as well. 9to5Google has a screenshot of the new Find My Device settings page that is appearing for some users, which gives them a chance to opt out of the anonymous tracking network. That report also mentions that some users received an email Thursday warning of an impending tracking network launch, saying: “You’ll get a notification on your Android devices when this feature is turned on in 3 days. Until then, you can opt out of the network through Find My Device on the web.” The vast majority of Android users have not gotten this email, though, suggesting maybe it was a mistake. It’s very weird to announce a launch in “days remaining” rather than just saying what date something will launch, and this email went out Thursday, which would mean a bizarre Sunday launch when everyone is off for the weekend.
The official announcement could come at any time, but Google said it wanted to wait for Apple, and that means at least a few weeks for actual functionality to be turned on. We also need a launch date from those poor hardware partners that presumably have had tracking tags sitting around in a warehouse for a year. Google’s partners, Chipolo and Pebblebee, have both been taking preorders for Android tracking tags for the past year and don’t have any launch updates.
And speaking of hardware, Google was supposed to be building a first-party tracking tag once upon a time. January 2023 was when we first heard of a device codenamed “Grogu,” which was supposed to have a speaker, UWB compatibility, and Bluetooth LE. Is that still happening? There’s probably time to have made a second-generation device by now. Apple’s May iOS release would be great timing for a Google I/O announcement, but we were also expecting an announcement at the last I/O, so who knows.
Google has sued two app developers based in China over an alleged scheme that targeted 100,000 users globally over four years with at least 87 fraudulent cryptocurrency and other investor apps distributed through the Play Store.
The tech giant alleged that scammers lured victims with “promises of high returns” from “seemingly legitimate” apps offering investment opportunities in cryptocurrencies and other products. Commonly known as “pig-butchering schemes,” these scams displayed fake returns on investments, but when users went to withdraw the funds, they discovered they could not.
In some cases, Google alleged, developers would “double down on the scheme by requesting various fees and other payments from victims that were supposedly necessary for the victims to recover their principal investments and purported gains.”
Google accused the app developers—Yunfeng Sun (also known as “Alphonse Sun”) and Hongnam Cheung (also known as “Zhang Hongnim” and “Stanford Fischer”)—of conspiring to commit “hundreds of acts of wire fraud” to further “an unlawful pattern of racketeering activity” that siphoned up to $75,000 from each user successfully scammed.
Google was able to piece together the elaborate alleged scheme because the developers used a wide array of Google products and services to target victims, Google said, including Google Play, Voice, Workspace, and YouTube, breaching each one’s terms of service. Perhaps most notably, the Google Play Store’s developer program policies “forbid developers to upload to Google Play ‘apps that expose users to deceptive or harmful financial products and services,’ including harmful products and services ‘related to the management or investment of money and cryptocurrencies.'”
In addition to the harm done to Google users, Google claimed that each product and service’s reputation will continue to be harmed unless the US district court in New York orders a permanent injunction stopping the developers from using any Google products or services.
“By using Google Play to conduct their fraud scheme,” scammers “have threatened the integrity of Google Play and the user experience,” Google alleged. “By using other Google products to support their scheme,” the scammers “also threaten the safety and integrity of those other products, including YouTube, Workspace, and Google Voice.”
Google’s lawsuit is the company’s most recent attempt to block fraudsters from targeting Google products by suing individuals directly, Bloomberg noted. Last year, Google sued five people accused of distributing a fake Bard AI chatbot that instead downloaded malware to Google users’ devices, Bloomberg reported.
How did the alleged Google Play scams work?
Google said that the accused developers “varied their approach from app to app” when allegedly trying to scam users out of thousands of dollars but primarily relied on three methods to lure victims.
The first method relied on sending text messages using Google Voice—such as “I am Sophia, do you remember me?” or “I miss you all the time, how are your parents Mike?”—”to convince the targeted victims that they were sent to the wrong number.” From there, the scammers would apparently establish “friendships” or “romantic relationships” with victims before moving the conversation to apps like WhatsApp, where they would “offer to guide the victim through the investment process, often reassuring the victim of any doubts they had about the apps.” These supposed friends, Google claimed, would “then disappear once the victim tried to withdraw funds.”
Another strategy allegedly employed by scammers relied on videos posted to platforms like YouTube, where fake investment opportunities would be promoted, promising “rates of return” as high as “two percent daily.”
The third tactic, Google said, pushed bogus affiliate marketing campaigns, promising users commissions for “signing up additional users.” These apps, Google claimed, were advertised on social media as “a guaranteed and easy way to earn money.”
Once a victim was drawn into using one of the fraudulent apps, “user interfaces sought to convince victims that they were maintaining balances on the app and that they were earning ‘returns’ on their investments,” Google said.
Occasionally, users would be allowed to withdraw small amounts, convincing them that it was safe to invest more money, but “later attempts to withdraw purported returns simply did not work.” And sometimes the scammers would “bilk” victims out of “even more money,” Google said, by requesting additional funds be submitted to make a withdrawal.
“Some demands” for additional funds, Google found, asked for anywhere “from 10 to 30 percent to cover purported commissions and/or taxes.” Victims, of course, “still did not receive their withdrawal requests even after these additional fees were paid,” Google said.
Which apps were removed from the Play Store?
Google tried to remove apps as soon as they were discovered to be fraudulent, but Google claimed that scammers concocted new aliases and infrastructure to “obfuscate their connection to suspended fraudulent apps.” Because scammers relied on so many different Google services, Google was able to connect the scheme to the accused developers through various business records.
Fraudulent apps named in the complaint include fake cryptocurrency exchanges called TionRT and SkypeWallet. To make the exchanges appear legitimate, scammers put out press releases on newswire services and created YouTube videos likely relying on actors to portray company leadership.
In one YouTube video promoting SkypeWallet, the supposed co-founder of Skype Coin uses the name “Romser Bennett,” which is the same name used for the supposed founder of another fraudulent app called OTCAI2.0, Google said. In each video, a completely different (presumably hired) actor plays the part of “Romser Bennett.” In other videos, Google found that the exact same actor plays an engineer named “Rodriguez” for one app and a technical leader named “William Bryant” for another app.
Another fraudulent app that was flagged by Google was called the Starlight app. Promoted on TikTok and Instagram, Google said, that app promised “that users could earn commissions by simply watching videos.”
The Starlight app was downloaded approximately 23,000 times and seemingly primarily targeted users in Ghana, allegedly scamming at least 6,000 Ghanaian users out of initial investment capital that they were told was required before they could start earning money on the app.
Across all 87 fraudulent apps that Google has removed, Google estimated that approximately 100,000 users were victimized, including approximately 8,700 in the United States.
Currently, Google is not aware of any live apps in the Play Store connected to the alleged scheme, the complaint said, but scammers intent on furthering the scheme “will continue to harm Google and Google Play users” without a permanent injunction, Google warned.
Waymo and Uber have been working together on regular Ubers for a while, but the two companies are now teaming up for food delivery. Automated Uber Eats is rolling out to Waymo’s Phoenix service area. Waymo says this will start with “select merchants in Chandler, Tempe and Mesa, including local favorites like Princess Pita, Filiberto’s, and Bosa Donuts.”
Phoenix Uber Eats customers can fire up the app and order some food, and they might see the message “autonomous vehicles may deliver your order.” Waymo says you’ll be able to opt out of robot delivery at checkout if you want.
The pop-up screen if a Waymo is delivering your order.
Of course, the big difference between human and robot food delivery is that the human driver will take your food door to door, while for the Waymo option, you’ll need to run outside and flag down your robot delivery vehicle when it arrives. Just like regular Uber, you’ll get a notification through the app when it’s time. The food should be in the trunk. If you get paired with a Waymo, your delivery tip will be refunded. Waymo doesn’t explain how the restaurant side of things will work, but inevitably, some poor food server will need to run outside when the Waymo arrives.
It seems pretty wasteful to have a 2-ton, crash-tested vehicle designed to seat five humans delivering a small bag of food, but at least the Jaguar i-Pace Waymos are all-electric. It’s a shame Waymo’s smaller “Firefly” cars were retired. There are smaller, more purpose-built food delivery bots out there—Uber Eats is partnered with Serve Robotics for smaller robot delivery—but these are all sidewalk-cruising, walking-speed robots that can only go a few blocks. The Nuro R3 (Nuro is also partnered with Uber) seems like a good example of what a road-going delivery vehicle should look like—it’s designed for food and not people, and it comes with heated or cooled food compartments. Waymo is still the industry leader in automated driving, though.
You think this cute little search robot is going to work for free?
Google might start charging for access to search results that use generative artificial intelligence tools. That’s according to a new Financial Times report citing “three people with knowledge of [Google’s] plans.”
Charging for any part of the search engine at the core of its business would be a first for Google, which has funded its search product solely with ads since 2000. But it’s far from the first time Google would charge for AI enhancements in general; the “AI Premium” tier of a Google One subscription costs $10 more per month than a standard “Premium” plan, for instance, while “Gemini Business” adds $20 a month to a standard Google Workspace subscription.
Under the proposed plan, Google’s standard search (without AI) would remain free, and subscribers to a paid AI search tier would still see ads alongside their Gemini-powered search results, according to the FT report. But search ads—which brought in a reported $175 billion for Google last year—might not be enough to fully cover the increased costs involved with AI-powered search. A Reuters report from last year suggested that running a search query through an advanced neural network like Gemini “likely costs 10 times more than a standard keyword search,” potentially representing “several billion dollars of extra costs” across Google’s network.
“SGE never feels like a useful addition to Google Search,” Ars’ Ron Amadeo wrote last month. “Google Search is a tool, and just as a screwdriver is not a hammer, I don’t want a chatbot in a search engine.”
Regardless, the current tech industry mania surrounding anything and everything related to generative AI may make Google feel it has to integrate the technology into some sort of “premium” search product sooner rather than later. For now, FT reports that Google hasn’t made a final decision on whether to implement the paid AI search plan, even as Google engineers work on the backend technology necessary to launch such a service.
Google also faces AI-related difficulties on the other side of the search divide. Last month, the company announced it was redoubling its efforts to limit the appearance of “spammy, low-quality content”—much of it generated by AI chatbots—in its search results.
Billie Eilish attends the 2024 Vanity Fair Oscar Party hosted by Radhika Jones at the Wallis Annenberg Center for the Performing Arts on March 10, 2024, in Beverly Hills, California.
On Tuesday, the Artist Rights Alliance (ARA) announced an open letter critical of AI signed by over 200 musical artists, including Pearl Jam, Nicki Minaj, Billie Eilish, Stevie Wonder, Elvis Costello, and the estate of Frank Sinatra. In the letter, the artists call on AI developers, technology companies, platforms, and digital music services to stop using AI to “infringe upon and devalue the rights of human artists.” A tweet from the ARA added that AI poses an “existential threat” to their art.
Visual artists began protesting the advent of generative AI after the rise of the first mainstream AI image generators in 2022, and as generative AI research has since expanded to other forms of creative media, that protest has extended to professionals in other creative domains, such as writers, actors, filmmakers—and now musicians.
“When used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities, our music and our livelihoods,” the open letter states. It alleges that some of the “biggest and most powerful” companies (unnamed in the letter) are using the work of artists without permission to train AI models, with the aim of replacing human artists with AI-created content.
A list of musical artists that signed the ARA open letter against generative AI.
In January, Billboard reported that AI research taking place at Google DeepMind had trained an unnamed music-generating AI on a large dataset of copyrighted music without seeking artist permission. That report may have been referring to Google’s Lyria, an AI-generation model announced in November that the company positioned as a tool for enhancing human creativity. The tech has since powered musical experiments from YouTube.
We’ve previously covered AI music generators that seemed fairly primitive throughout 2022 and 2023, such as Riffusion, Google’s MusicLM, and Stability AI’s Stable Audio. We’ve also covered open source musical voice-cloning technology that is frequently used to make musical parodies online. While we have yet to see an AI model that can generate perfect, fully composed high-quality music on demand, the quality of outputs from music synthesis models has been steadily improving over time.
In considering AI’s potential impact on music, it’s instructive to remember historical instances where tech innovations initially sparked concern among artists. For instance, the introduction of synthesizers in the 1960s and 1970s and the advent of digital sampling in the 1980s both faced scrutiny and fear from parts of the music community, but the music industry eventually adjusted.
While fear of the unknown around AI has been circulating quite a bit for the past year, it’s possible that AI tools will be integrated into the music production process like any other production tool or technique that came before. It’s also possible that even if that kind of integration comes to pass, some artists will still get hurt along the way—and the ARA wants to speak out about it before the technology progresses further.
“Race to the bottom”
The Artists Rights Alliance is a nonprofit advocacy group that describes itself as an “alliance of working musicians, performers, and songwriters fighting for a healthy creative economy and fair treatment for all creators in the digital world.”
The signers of the ARA’s open letter say they acknowledge the potential of AI to advance human creativity when used responsibly, but they also claim that replacing artists with generative AI would “substantially dilute the royalty pool” paid out to artists, which could be “catastrophic” for many working musicians, artists, and songwriters who are trying to make ends meet.
In the letter, the artists say that unchecked AI will set in motion a race to the bottom that will degrade the value of their work and prevent them from being fairly compensated. “This assault on human creativity must be stopped,” they write. “We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem.”
The emphasis on the word “human” in the letter is notable (“human artist” is used twice, and “human creativity” and “human artistry” are each used once) because it suggests the clear distinction the signers are drawing between the work of human artists and the output of AI systems. It implies recognition that we’ve entered a new era where not all creative output is made by people.
The letter concludes with a call to action, urging all AI developers, technology companies, platforms, and digital music services to pledge not to develop or deploy AI music-generation technology, content, or tools that undermine or replace the human artistry of songwriters and artists or deny them fair compensation for their work.
While it’s unclear whether companies will meet those demands, so far, protests from visual artists have not stopped development of ever-more advanced image-synthesis models. On Threads, frequent AI industry commentator Dare Obasanjo wrote, “Unfortunately this will be as effective as writing an open letter to stop the sun from rising tomorrow.”
Google offers a VPN via its “Google One” monthly subscription plan, and while it debuted on phones, a desktop app has been available for Windows and macOS for over a year now. Since a lot of people pay for Google One for the cloud storage increase for their Google accounts, you might be tempted to try the VPN on a desktop, but Windows users testing out the app haven’t seemed too happy lately. An open bug report on Google’s GitHub for the project says the Windows app “breaks” the Windows DNS, and this has been ongoing since at least November.
A VPN would naturally route all your traffic through a secure tunnel, but you’ve still got to do DNS lookups somewhere. A lot of VPN services also come with a DNS service, and Google is no different. The problem is that Google’s VPN app changes the Windows DNS settings of all network adapters to always use Google’s DNS, whether the VPN is on or off. Even if you change them, Google’s program will change them back.
Most VPN apps don’t work this way, and even Google’s Mac VPN program doesn’t work this way. The users in the thread (and the ones emailing us) expect the app, at minimum, to use the original Windows settings when the VPN is off. Since running a VPN is often about privacy and security, users want to be able to change the DNS away from Google even when the VPN is running.
Changing the DNS can result in several problems for certain setups. As users in the thread point out, some people, especially those using a VPN, want an encrypted DNS setup, and Google’s VPN program will just turn this off. It can break custom filtering setups and will prevent users from accessing local network IPs, like a router configuration page or corporate intranet pages. It will also make it impossible to log in to a captive portal, which you often see on public Wi-Fi at a hotel, airport, or coffee shop.
Besides that behavior, the thread is full of all sorts of reports of Google’s VPN program getting screwy with the Windows DNS settings. Several users say Google’s VPN app frequently resets the DNS settings of all network adapters, even if they change them after the initial install sets them to 8.8.8.8. For instance, one reply from ryanzimbauser says: “This program has absolutely no business changing all present NICs to a separate DNS on the startup of my computer while the program is not set to ‘Launch app after computer starts.’ This recent change interfered with my computer’s ability to access a network implementing a private DNS filter. This has broken my trust and I will not be reinstalling this program until this is remedied.”
Several user reports say that even after uninstalling the Google VPN, the DNS settings don’t revert to what they used to be. Maybe this is more of a Windows problem than a Google problem, but a lot of users have trouble changing the settings away from 8.8.8.8 through the control panel after uninstalling. They are resorting to registry changes, PowerShell scripts, or the “reset network settings” button.
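For anyone stuck with adapters pinned to 8.8.8.8 after uninstalling, the usual manual fix is to tell Windows to go back to DHCP-assigned DNS. Here is a rough sketch (not official guidance from Google or Microsoft) that drives the built-in netsh tool; the adapter names are assumptions you would replace with your own, and it needs an elevated prompt:

```python
# Rough sketch: revert Windows network adapters to DHCP-assigned DNS servers
# using the built-in netsh tool. Adapter names below are assumptions; run
# "netsh interface show interface" to list the ones on your machine.
# Must be run from an elevated (administrator) prompt.
import subprocess

ADAPTERS = ["Ethernet", "Wi-Fi"]  # replace with your actual adapter names

for name in ADAPTERS:
    # Drop any statically configured DNS (e.g., the 8.8.8.8 left behind by
    # the VPN app) and return to whatever DHCP hands out.
    subprocess.run(
        ["netsh", "interface", "ipv4", "set", "dnsservers",
         f"name={name}", "source=dhcp"],
        check=False,  # keep going even if an adapter name doesn't exist
    )
```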
Google employee Ryan Lothian responded to the thread, saying:
Hey folks, thank you for reporting this behaviour.
To protect users privacy, the Google One VPN deliberately sets DNS to use Google’s DNS servers. This prevents a nefarious DNS server (that might be set by DHCP) compromising your privacy. Visit https://developers.google.com/speed/public-dns/privacy to learn about the limited logging performed by Google DNS.
We think this is a good default for most users. However, we do recognize that some users might want to have their own DNS, or have the DNS revert when VPN disconnects. We’ll consider adding this to a future release of the app.
It’s pretty rare for Google, the web and Android company, to make a Windows program. There’s Chrome, the Drive syncing app, Google Earth Pro, this VPN app, and not too much else. You can find it by going to the Google One website, clicking “Benefits” in the sidebar, and then “View Details” under the VPN box, where you’ll find an exceedingly rare Google Windows executable.
If you want a VPN and care about privacy, there are probably better places to go than Google. The company can still see all the websites you’re visiting via its DNS servers, and while the VPN data might be private, Google’s DNS holds onto your web history for up to 48 hours and is subject to subpoenas. There are several accusations in the thread of Google changing DNS for data harvesting purposes, but if you’re concerned about that, maybe don’t do business with one of the world’s biggest user-tracking companies.
On Monday, OpenAI announced that visitors to the ChatGPT website in some regions can now use the AI assistant without signing in. Previously, the company required that users create an account to use it, even with the free version of ChatGPT that is currently powered by the GPT-3.5 AI language model. But as we have noted in the past, GPT-3.5 is widely known to provide less accurate information than GPT-4 Turbo, which is available in paid versions of ChatGPT.
Since its launch in November 2022, ChatGPT has transformed over time from a tech demo to a comprehensive AI assistant, and it’s always had a free version available. It’s free because “you’re the product,” as the old saying goes. Using ChatGPT helps OpenAI gather data that will help the company train future AI models, although free users and ChatGPT Plus subscribers can both opt out of allowing the data they input into ChatGPT to be used for AI training. (OpenAI says it never trains on inputs from ChatGPT Team and Enterprise members at all.)
Opening ChatGPT to everyone could provide a frictionless on-ramp for people who might use it as a substitute for Google Search, and it could gain OpenAI new customers by offering an easy way to try ChatGPT quickly, with an upsell to paid versions of the service.
“It’s core to our mission to make tools like ChatGPT broadly available so that people can experience the benefits of AI,” OpenAI says on its blog page. “For anyone that has been curious about AI’s potential but didn’t want to go through the steps to set up an account, start using ChatGPT today.”
When you visit the ChatGPT website, you’re immediately presented with a chat box like this (in some regions). Screenshot captured April 1, 2024.
Since kids will also be able to use ChatGPT without an account—despite it being against the terms of service—OpenAI also says it’s introducing “additional content safeguards,” such as blocking more prompts and “generations in a wider range of categories.” OpenAI has not elaborated on what exactly that entails, but we have reached out to the company for comment.
There might be a few other downsides to the fully open approach. On X, AI researcher Simon Willison wrote about the potential for automated abuse as a way to get around paying for OpenAI’s services: “I wonder how their scraping prevention works? I imagine the temptation for people to abuse this as a free 3.5 API will be pretty strong.”
With fierce competition, more GPT-3.5 access may backfire
Willison also mentioned a common criticism of OpenAI (as voiced in this case by Wharton professor Ethan Mollick) that people’s ideas about what AI models can do have so far largely been influenced by GPT-3.5, which, as we mentioned, is far less capable and far more prone to making things up than the paid version of ChatGPT that uses GPT-4 Turbo.
“In every group I speak to, from business executives to scientists, including a group of very accomplished people in Silicon Valley last night, much less than 20% of the crowd has even tried a GPT-4 class model,” wrote Mollick in a tweet from early March.
With models like Google Gemini Pro 1.5 and Anthropic Claude 3 potentially surpassing OpenAI’s best proprietary model at the moment—and open weights AI models eclipsing the free version of ChatGPT—allowing people to use GPT-3.5 might not be putting OpenAI’s best foot forward. Microsoft Copilot, powered by OpenAI models, also supports a frictionless, no-login experience, but it allows access to a model based on GPT-4. Gemini, meanwhile, currently requires a sign-in, and Anthropic sends a login code through email.
For now, OpenAI says the login-free version of ChatGPT is not yet available to everyone, but it will be coming soon: “We’re rolling this out gradually, with the aim to make AI accessible to anyone curious about its capabilities.”
In a statement provided to Ars, users’ lawyer, David Boies, described the settlement as “a historic step in requiring honesty and accountability from dominant technology companies.” Based on Google’s insights, users’ lawyers valued the settlement between $4.75 billion and $7.8 billion, the Monday court filing said.
Under the settlement, Google agreed to delete class-action members’ private browsing data collected in the past, as well as to “maintain a change to Incognito mode that enables Incognito users to block third-party cookies by default.” This, plaintiffs’ lawyers noted, “ensures additional privacy for Incognito users going forward, while limiting the amount of data Google collects from them” over the next five years. Plaintiffs’ lawyers said that this means that “Google will collect less data from users’ private browsing sessions” and “Google will make less money from the data.”
“The settlement stops Google from surreptitiously collecting user data worth, by Google’s own estimates, billions of dollars,” Boies said. “Moreover, the settlement requires Google to delete and remediate, in unprecedented scope and scale, the data it improperly collected in the past.”
Google had already updated disclosures to users, changing the splash screen displayed “at the beginning of every Incognito session” to inform users that Google was still collecting private browsing data. Under the settlement, those disclosures to all users must be completed by March 31, after which the disclosures must remain. Google also agreed to “no longer track people’s choice to browse privately,” and the court filing said that “Google cannot roll back any of these important changes.”
Notably, the settlement does not award monetary damages to class members. Instead, Google agreed that class members retain “rights to sue Google individually for damages” through arbitration, which, users’ lawyers wrote, “is important given the significant statutory damages available under the federal and state wiretap statutes.”
“These claims remain available for every single class member, and a very large number of class members recently filed and are continuing to file complaints in California state court individually asserting those damages claims in their individual capacities,” the court filing said.
While “Google supports final approval of the settlement,” the company “disagrees with the legal and factual characterizations contained in the motion,” the court filing said. Google spokesperson José Castañeda told Ars that the tech giant thinks that the “data being deleted isn’t as significant” as Boies represents, confirming that Google was “pleased to settle this lawsuit, which we always believed was meritless.”
“The plaintiffs originally wanted $5 billion and are receiving zero,” Castañeda said. “We never associate data with users when they use Incognito mode. We are happy to delete old technical data that was never associated with an individual and was never used for any form of personalization.”
While Castañeda said that Google was happy to delete the data, a footnote in the court filing noted that initially, “Google claimed in the litigation that it was impossible to identify (and therefore delete) private browsing data because of how it stored data.” Now, under the settlement, however, Google has agreed “to remediate 100 percent of the data set at issue.”
Mitigation efforts include deleting fields Google used to detect users in Incognito mode, “partially redacting IP addresses,” and deleting “detailed URLs, which will prevent Google from knowing the specific pages on a website a user visited when in private browsing mode.” Keeping “only the domain-level portion of the URL (i.e., only the name of the website) will vastly improve user privacy by preventing Google (or anyone who gets their hands on the data) from knowing precisely what users were browsing,” the court filing said.
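To illustrate what domain-level truncation means in practice, here is a tiny, hypothetical example (the URL is invented) showing how stripping a logged URL down to its hostname discards the page-level detail:

```python
# Hypothetical example of "domain-level" URL redaction: keep only the hostname
# and drop the path and query string that reveal which page was visited.
from urllib.parse import urlparse

logged_url = "https://www.example.com/health/condition-lookup?q=rare+symptom"  # invented URL
domain_only = urlparse(logged_url).netloc

print(domain_only)  # "www.example.com" -- the specific page and search terms are gone
```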
Because Google did not oppose the motion for final approval, US District Judge Yvonne Gonzalez Rogers is expected to issue an order approving the settlement on July 30.