
The first Android 17 beta is now available on Pixel devices

In short, the first Android 17 beta is chock-full of things that may interest developers and modders, but there’s little in the way of user-facing changes right now.

Android 17 release schedule

Google has made some notable changes to how it releases Android updates, and Android 17 continues the trend. Like last year, there will be two Android 17 releases in 2026. The first one, coming in Q2, will be the more significant of the two. It will include a raft of new APIs, behavioral changes, and feature updates. This split release setup was implemented to better align with when major OEMs release new devices, but Android 17 availability still focuses mainly on Pixels. Google’s phones receive immediate updates, but everyone else has to wait for OEMs to roll out updates over the following weeks or months.

At the end of the year, another version (you can think of it as Android 17.1 even though Google doesn’t give it a name) will become available on supported devices. This “minor SDK release” will include some API and feature changes, but Google doesn’t have any details at this time.

Android release schedule

Credit: Google

Before we get to that, Google plans to launch a second beta release in March. The company says Beta 2 will include final APIs, allowing developers to complete testing and roll out updates. Developers will have “several months” to get that work done before the final version hits Pixels.

In 2025, Google also changed the way it updates the open source parts of Android. Rather than regular code dumps, Google now only updates the Android Open Source Project (AOSP) twice yearly, in the second and fourth quarters, when new versions are released. That makes it harder to know what to expect from upcoming versions of Android, but Google insists this is more efficient.

If you want to check out Android 17 today, you’ll need a Pixel device. It supports the Pixel 6, Pixel 7, Pixel 8, Pixel 9, and Pixel 10 generations. The Pixel Tablet and original Pixel Fold are also included. Other phone makers may release beta builds in the weeks ahead, but it’s a Google-only event for now. You can opt in to get an OTA to Android 17 on the beta program website.


It took two years, but Google released a YouTube app on Vision Pro

When Apple’s Vision Pro mixed reality headset launched in February 2024, users were frustrated at the lack of a proper YouTube app—a significant disappointment given the device’s focus on video content consumption, and YouTube’s strong library of immersive VR and 360 videos. That complaint continued through the release of the second-generation Vision Pro last year, including in our review.

Now, two years later, an official YouTube app from Google has launched on the Vision Pro’s app store. It’s not just a port of the iPad app, either—it has panels arranged spatially in front of the user as you’d expect, and it supports 3D videos, as well as 360- and 180-degree ones.

YouTube’s App Store listing says users can watch “every video on YouTube” (there’s a screenshot of a special interface for Shorts vertical videos, for example) and that they get “the full signed-in experience” with watch history and so on.

Shortly after the Vision Pro launched, many users complained to YouTube about the lack of an app. They were referred to the web interface—which worked OK for most 2D videos but obviously wasn’t an ideal experience—and were told that a Vision Pro app was on the roadmap.

Two years of silence followed. Third-party apps popped up, like the relatively popular Juno app, but it was pulled from the App Store after Google claimed it violated API policies. (Some others remained or became available later.)

Google is building out its own XR ambitions, so it’s possible the Vision Pro app benefited from some of that work, but it’s unclear how this all came to be. But it’s here now. Next up: Netflix, right? Sadly, that’s unlikely; unlike Google, Netflix has not announced any intention to bring an app to the headset.


Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

On Thursday, Google announced that “commercially motivated” actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products that frames the company as the victim and the hero, which is not unusual in these self-authored assessments. Google calls the illicit activity “model extraction” and considers it intellectual property theft, which is a somewhat loaded position, given that Google’s LLM was built from materials scraped from the Internet without permission.

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google’s Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI’s terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.

Even so, Google’s terms of service forbid people from extracting data from its AI models this way, and the report is a window into the world of somewhat shady AI model-cloning tactics. The company believes the culprits are mostly private companies and researchers looking for a competitive edge, and said the attacks have come from around the world. Google declined to name suspects.

The deal with distillation

Typically, the industry calls this practice of training a new model on a previous model’s outputs “distillation,” and it works like this: If you want to build your own large language model (LLM) but lack the billions of dollars and years of work that Google spent training Gemini, you can use a previously trained LLM as a shortcut.
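In code, the core loop of distillation is simple: query the teacher at scale, record its output distributions, and train a student to imitate them. The toy sketch below is purely illustrative—tiny linear models in pure Python stand in for real LLMs, and every size and number is invented:

```python
import math
import random

random.seed(0)

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

DIM, CLASSES = 8, 4  # made-up toy dimensions

# Stand-in "teacher": a fixed random linear model playing the role of a big LLM.
W_teacher = [[random.gauss(0, 1) for _ in range(CLASSES)] for _ in range(DIM)]

def logits(W, x):
    return [sum(x[i] * W[i][c] for i in range(DIM)) for c in range(CLASSES)]

# Step 1: query the teacher with many "prompts," logging its output distributions.
prompts = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(200)]
teacher_probs = [softmax(logits(W_teacher, x)) for x in prompts]

# Step 2: train a cheap student to reproduce those soft outputs
# by gradient descent on the cross-entropy against the teacher's answers.
W_student = [[0.0] * CLASSES for _ in range(DIM)]
lr = 0.5
losses = []
for _ in range(100):
    total = 0.0
    grad = [[0.0] * CLASSES for _ in range(DIM)]
    for x, q in zip(prompts, teacher_probs):
        p = softmax(logits(W_student, x))
        total -= sum(qc * math.log(pc + 1e-12) for qc, pc in zip(q, p))
        for i in range(DIM):
            for c in range(CLASSES):
                grad[i][c] += x[i] * (p[c] - q[c])
    losses.append(total / len(prompts))
    for i in range(DIM):
        for c in range(CLASSES):
            W_student[i][c] -= lr * grad[i][c] / len(prompts)

print(f"student loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The student never sees the teacher’s weights or training data—only its answers—which is why prompting at scale, as in the 100,000-prompt session Google describes, is all an imitator needs.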


Google recovers “deleted” Nest video in high-profile abduction case

Suspect attempts to cover the camera with a plant.

According to statements made by investigators, the video was apparently “recovered from residual data located in backend systems.” It’s unclear how long such data is retained or how easy it is for Google to access it. Some reports claim that it took several days for Google to recover the data.

In large-scale enterprise storage solutions, “deleted” for the user doesn’t always mean that the data is gone. Data that is no longer needed is often compressed and overwritten only as needed. In the meantime, it may be possible to recover the data. That’s something a company like Google could decide to do on its own, or it could be compelled to perform the recovery by a court order. In the Guthrie case, it sounds like Google was voluntarily cooperating with the investigation, which makes sense. Publishing video of the alleged perpetrator could be a major breakthrough as investigators seek help from the public.
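A common pattern behind this behavior is the “soft delete”: deletion only writes a tombstone, and the underlying bytes are reclaimed later by a background compaction pass. The sketch below is a minimal, hypothetical illustration of that pattern, not Google’s actual system:

```python
import time

class SoftDeleteStore:
    """Toy key-value store where delete() only tombstones a record.

    Data stays physically present, and recoverable, until compact() runs.
    """

    def __init__(self):
        self._data = {}  # key -> (value, deleted_at timestamp or None)

    def put(self, key, value):
        self._data[key] = (value, None)

    def delete(self, key):
        value, _ = self._data[key]
        self._data[key] = (value, time.time())  # tombstone; bytes remain

    def get(self, key):
        # The user-facing view: tombstoned data looks gone.
        value, deleted_at = self._data.get(key, (None, None))
        return None if deleted_at is not None else value

    def recover(self, key):
        # What a backend operator could still retrieve before compaction.
        entry = self._data.get(key)
        return entry[0] if entry else None

    def compact(self, older_than_seconds=0):
        # Physically purge tombstoned records older than the threshold.
        now = time.time()
        self._data = {k: v for k, v in self._data.items()
                      if v[1] is None or now - v[1] < older_than_seconds}

store = SoftDeleteStore()
store.put("clip-42", b"nest-video-bytes")
store.delete("clip-42")
print(store.get("clip-42"))      # user view: gone
print(store.recover("clip-42"))  # backend: still recoverable
store.compact()
print(store.recover("clip-42"))  # after compaction: truly gone
```

The gap between `delete()` and `compact()` is the window in which a provider, voluntarily or under court order, can still pull back footage the user believes is gone.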

It’s not your cloud

There is a temptation to ascribe some malicious intent to Google’s video storage setup. After all, this video expired after three hours, but here it is nine days later. That feels a bit suspicious on the surface, particularly for a company that is so focused on training AI models that feed on video.

We have previously asked Google to explain how it uses Nest to train AI models, and the company claims it does not incorporate user videos into training data, but the way you interact with the service and with your videos is fair game. “We may use your inputs, including prompts and feedback, usage, and outputs from interactions with AI features to further research, tune, and train Google’s generative models, machine learning technologies, and related products and services,” Google said.


Upgraded Google safety tools can now find and remove more of your personal info

Do you feel popular? There are people on the Internet who want to know all about you! Unfortunately, they don’t have the best of intentions, but Google has some handy tools to address that, and they’ve gotten an upgrade today. The “Results About You” tool can now detect and remove more of your personal information. Plus, the tool for removing non-consensual explicit imagery (NCEI) is faster to use. All you have to do is tell Google your personal details first—that seems safe, right?

With today’s upgrade, Results About You gains the ability to find and remove pages that include ID numbers like your passport, driver’s license, and Social Security numbers. You can access the option to add these to Google’s ongoing scans from the settings in Results About You. Just click the ID numbers section to enable detection.

Naturally, Google has to know what it’s looking for in order to remove it, so you need to provide at least part of those numbers. Google asks for the full driver’s license number, which is fine, as it’s less sensitive. For your passport and SSN, you only need to enter the last four digits, which is enough for Google to find the full numbers on webpages.

ID number results detected.

The NCEI tool is geared toward hiding real, explicit images as well as deepfakes and other types of artificial sexualized content. This kind of content is rampant on the Internet right now due to the rapid rise of AI. What used to require Photoshop skills is now just a prompt away, and some AI platforms hardly do anything to prevent it.


Alphabet selling very rare 100-year bonds to help fund AI investment

Tony Trzcinka, a US-based senior portfolio manager at Impax Asset Management, which purchased Alphabet’s bonds last year, said he skipped Monday’s offering because of insufficient yields and concerns about overexposure to companies with complex financial obligations tied to AI investments.

“It wasn’t worth it to swap into new ones,” Trzcinka said. “We’ve been very conscious of our exposure to these hyperscalers and their capex budgets.”

Big Tech companies and their suppliers are expected to invest almost $700 billion in AI infrastructure this year and are increasingly turning to the debt markets to finance the giant data center build-out.

Alphabet in November sold $17.5 billion of bonds in the US including a 50-year bond—the longest-dated dollar bond sold by a tech group last year—and raised €6.5 billion on European markets.

Oracle last week raised $25 billion from a bond sale that attracted more than $125 billion of orders.

Alphabet, Amazon, and Meta all increased their capital expenditure plans during their most recent earnings reports, prompting questions about whether they will be able to fund the unprecedented spending spree from their cash flows alone.

Last week, Google’s parent company reported annual sales that topped $400 billion for the first time, beating investors’ expectations for revenues and profits in the most recent quarter. It said it planned to spend as much as $185 billion on capex this year, roughly double last year’s total, to capitalize on booming demand for its Gemini AI assistant.

Alphabet’s long-term debt jumped to $46.5 billion in 2025, more than four times the previous year’s total, though it held cash and equivalents of $126.8 billion at year-end.

Investor demand was the strongest on the shortest portion of Monday’s deal, with a three-year offering pricing at only 0.27 percentage points above US Treasuries, versus 0.6 percentage points during initial price discussions, said people familiar with the deal.

The longest portion of the offering, a 40-year bond, is expected to yield 0.95 percentage points over US Treasuries, down from 1.2 percentage points during initial talks, the people said.

Bank of America, Goldman Sachs, and JPMorgan are the bookrunners on the bond sales across three currencies. All three declined to comment or did not immediately respond to requests for comment.

Alphabet did not immediately respond to a request for comment.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Google experiments with locking YouTube Music lyrics behind paywall

The app’s lyrics feature allows listeners to follow along as the song plays. In the test, however, free users can see only the first few lines before hitting the lyric cut-off; after that, the lyrics are blurred. Users who want to keep seeing lyrics are advised to upgrade to a premium account, which costs $14 per month for both YouTube video and music or $11 per month for music only. The subscription also removes ads and adds features like downloads and higher-quality video streams.

The new paywall in YouTube Music.

Credit: /u/MrYeet22836 and /u/Vegetable_Common188

This change is not without precedent. Spotify began restricting access to lyrics for free users in 2024. However, the response was so ferociously negative that the company backtracked and restored lyric access to those on ad-supported accounts. YouTube Music doesn’t have the same reach as Spotify, which may soften the social media backlash. Many subscribers also get the premium service simply because they’re paying for ad-free YouTube and may never know there’s been a change to lyric availability.

As Google has ratcheted up restrictions on free YouTube accounts, the service has only made more money. In its most recent earnings report, Google reported $60 billion in YouTube revenue across both ads and subscriptions (both YouTube Premium and YouTube TV). That’s almost $10 billion more than last year.

Lyrics in YouTube Music are provided by third parties that Google has to pay, so it’s not surprising that Google is looking for ways to cover the cost. It is, however, a little surprising that the company hasn’t just used AI to generate lyrics for free. Google has recently tested the patience of YouTube users with a spate of AI features, like unannounced AI upscaling, fake DJs, and comment summaries.

This story was updated with Google’s response. 


Why Darren Aronofsky thought an AI-generated historical docudrama was a good idea


We hold these truths to be self-evident

Production source says it takes “weeks” to produce just minutes of usable video.

Artist’s conception of critics reacting to the first episodes of “On This Day… 1776” Credit: Primordial Soup

Last week, filmmaker Darren Aronofsky’s AI studio Primordial Soup and Time magazine released the first two episodes of On This Day… 1776. The year-long series of short-form videos features vignettes describing what happened on each day of the American Revolution 250 years ago, but it does so using “a variety of AI tools” to produce photorealistic scenes containing avatars of historical figures like George Washington, Thomas Paine, and Benjamin Franklin.

In announcing the series, Time Studios President Ben Bitonti said the project provides “a glimpse at what thoughtful, creative, artist-led use of AI can look like—not replacing craft but expanding what’s possible and allowing storytellers to go places they simply couldn’t before.”

The trailer for “On This Day… 1776.”

Outside critics were decidedly less excited about the effort. The AV Club took the introductory episodes to task for “repetitive camera movements [and] waxen characters” that make for “an ugly look at American history.” CNET said that this “AI slop is ruining American history,” calling the videos a “hellish broth of machine-driven AI slop and bad human choices.” The Guardian lamented that the “once-lauded director of Black Swan and The Wrestler has drowned himself in AI slop,” calling the series “embarrassing,” “terrible,” and “ugly as sin.” I could go on.

But this kind of initial reaction apparently hasn’t deterred Primordial Soup from its still-evolving efforts. A source close to the production, who requested anonymity to speak frankly about details of the series’ creation, told Ars that the quality of new episodes would improve as the team’s AI tools are refined throughout the year and as the team learns to better use them.

“We’re going into this fully assuming that we have a lot to learn, that this process is gonna evolve, the tools we’re using are gonna evolve,” the source said. “We’re gonna make mistakes. We’re gonna learn a lot… we’re going to get better at it, [and] the technology will change. We’ll see how audiences are reacting to certain things, what works, what doesn’t work. It’s a huge experiment, really.”

Not all AI

It’s important to note that On This Day… 1776 is not fully crafted by AI. The script, for instance, was written by a team of writers overseen by Aronofsky’s longtime writing partners Ari Handel and Lucas Sussman, as noted by The Hollywood Reporter. That makes criticisms like the Guardian’s of “ChatGPT-sounding sloganeering” in the first episodes both somewhat misplaced and hilariously harsh.

Our production source says the project was always conceived as a human-written effort and that the team behind it had long been planning and researching how to tell this kind of story. “I don’t think [they] even needed that kind of help or wanted that kind of [AI-powered writing] help,” they said. “We’ve all experimented with [AI-powered] writing and the chatbots out there, and you know what kind of quality you get out of that.”

What you see here is not a real human actor, but his lines were written and voiced by humans. Credit: Primordial Soup

The producers also go out of their way to note that all the dialogue in the series is recorded directly by Screen Actors Guild voice actors, not by AI facsimiles. While recently negotiated union rules might have something to do with that, our production source also said the AI-generated voices the team used for temp tracks were noticeably artificial and not ready for a professional production.

Humans are also directly responsible for the music, editing, sound mixing, visual effects, and color correction for the project, according to our source. The only place the “AI-powered tools” come into play is in the video itself, which is crafted with what the announcement calls a “combination of traditional filmmaking tools and emerging AI capabilities.”

In practice, our source says, that means humans create storyboards, find visual references for locations and characters, and set up how they want shots to look. That information, along with the script, gets fed into an AI video generator that creates individual shots one at a time, to be stitched together and cleaned up by humans in traditional post-production.

That process takes the AI-generated cinema conversation one step beyond Ancestra, a short film Primordial Soup released last summer in association with Google DeepMind (which is not involved with the new project). There, AI tools were used to augment “live-action scenes with sequences generated by Veo.”

“Weeks” of prompting and re-prompting

In theory, having an AI model generate a scene in minutes might save a lot of time compared to traditional filmmaking—scouting locations, hiring actors, setting up cameras and sets, and the like. But our production source said the highly iterative process of generating and perfecting shots for On This Day… 1776 still takes “weeks” for each minutes-long video and that “more often than not, we’re pushing deadlines.”

The first episode of On This Day… 1776 features a dramatic flag raising.

Even though the AI model is essentially animating photorealistic avatars, the source said the process is “more like live action filmmaking” because of the lack of fine-grained control over what the video model will generate. “You don’t know if you’re gonna get what you want on the first take or the 12th take or the 40th take,” the source said.

While some shots take less time to get right than others, our source said the AI model rarely produces a perfect, screen-ready shot on the first try. And while some small issues in an AI-generated shot can be papered over in post-production with visual effects or careful editing, most of the time, the team has to go back and tell the model to generate a completely new video with small changes.

“It still takes a lot of work, and it’s not necessarily because it’s wrong, per se, so much as trying to get the right control because you [might] want the light to land on the face in the right way to try to tell the story,” the source said. “We’re still, we’re still striving for the same amount of control that we always have [with live-action production] to really maximize the story and the emotion.”

Quick shots and smaller budgets

Though video models have advanced since the days of the nightmarish clip of Will Smith eating spaghetti, hallucinations and nonsensical images are “still a problem” in producing On This Day… 1776, according to our source. That’s one of the reasons the company decided to use a series of short-form videos rather than a full-length movie telling the same essential story.

“It’s one thing to stay consistent within three minutes. It’s a lot harder and it takes a lot more work to stay consistent within two hours,” the source said. “I don’t know what the upper limit is now [but] the longer you get, the more things start to fall off.”

Stills from an AI-generated video of Will Smith eating spaghetti.

We’ve come a long way from the circa-2023 videos of Will Smith eating spaghetti. Credit: chaindrop / Reddit

Keeping individual shots short also allows for more control and fewer “reshoots” for an AI-animated production like this. “When you think about it, if you’re trying to create a 20-second clip, you have all these things that are happening, and if one of those things goes wrong in 20 seconds, you have to start over,” our source said. “And the chance of something going wrong in 20 seconds is pretty high. The chance of something going wrong in eight seconds is a lot lower.”
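The source’s point about clip length maps onto simple probability: if each generated second carries an independent chance of a visible glitch, the odds that a clip survives intact fall geometrically with its length. A back-of-the-envelope sketch, where the 5 percent per-second figure is invented purely for illustration:

```python
# If each generated second has an independent chance p of a visible glitch,
# the probability a whole clip needs a fresh generation grows with length:
# P(at least one glitch in t seconds) = 1 - (1 - p)^t
p = 0.05  # hypothetical per-second glitch probability

def clip_failure_prob(seconds, per_second=p):
    return 1 - (1 - per_second) ** seconds

print(f"8-second clip:  {clip_failure_prob(8):.0%} chance of at least one glitch")
print(f"20-second clip: {clip_failure_prob(20):.0%} chance of at least one glitch")
```

Under these assumed numbers, a 20-second shot is nearly twice as likely as an 8-second one to contain at least one flaw forcing a full regeneration, which matches the team’s preference for short shots.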

While our production source couldn’t give specifics on how much the team was spending to generate so much AI-modeled video, they did suggest that the process was still a good deal cheaper than filming a historical docudrama like this on location.

“I mean, we could never achieve what we’re doing here for this amount of money, which I think is pretty clear when you watch this,” they said. In future episodes, the source promised, “you’ll see where there’s things that cameras just can’t even do” as a way to “make the most of that medium.”

“Let’s see what we can do”

If you’ve been paying attention to how fast things have been moving with AI-generated video, you might think that AI models will soon be able to produce Hollywood-quality cinema with nothing but a simple prompt. But our source said that working on On This Day… 1776 highlights just how important it is for humans to still be in the loop on something like this.

“Personally, I don’t think we’re ever gonna get there [replacing human editors],” they said. “We actually desperately need an editor. We need another set of eyes who can look at the cut and say, ‘If we get out of this shot a little early, then we can create a little bit of urgency. If we linger on this thing a little longer…’ You still really need that.”

AI Ben Franklin and AI Thomas Paine toast to the war propaganda effort. Credit: Primordial Soup

That could be good news for human editors. But On This Day… 1776 also suggests a world where on-screen (or even motion-captured) human actors are fully replaced by AI-generated avatars. When I asked our source why the producers felt that AI was ready to take over that specifically human part of the film equation, though, the response surprised me.

“I don’t know that we do know that, honestly,” they said. “I think we know that the technology is there to try. And I think as storytellers we’re really interested in using… all the different tools that we can to try to get our story across and to try to make audiences feel something.”

“It’s not often that we have huge new tools like this,” the source continued. “I mean, it’s never happened in my lifetime. But when you do [get these new tools], you want to start playing with them… We have to try things in order to know if it works, if it doesn’t work.”

“So, you know, we have the tools now. Let’s see what we can do.”


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.


Waymo leverages Genie 3 to create a world model for self-driving cars

On the road with AI

The Waymo World Model is not just a straight port of Genie 3 with dashcam videos stuffed inside. Waymo and DeepMind used a specialized post-training process to make the new model generate both 2D video and 3D lidar outputs of the same scene. While cameras are great for visualizing fine details, Waymo says lidar is necessary to add critical depth information to what a self-driving car “sees” on the road—maybe someone should tell Tesla about that.

Using a world model allows Waymo to take video from its vehicles and use prompts to change the route the vehicle takes, which it calls driving action control. These simulations, which come with lidar maps, reportedly offer greater realism and consistency than older reconstructive simulation methods.

With the world model, Waymo can see what would happen if the car took a different turn.

This model can also help improve the self-driving AI even without adding or removing anything. There are plenty of dashcam videos available for training self-driving vehicles, but they lack the multimodal sensor data of Waymo’s vehicles. Dropping such a video into the Waymo World Model generates matching sensor data, showing how the driving AI would have seen that situation.

While the Waymo World Model can create entirely synthetic scenes, the company seems mostly interested in “mutating” the conditions in real videos. The blog post contains examples of changing the time of day or weather, adding new signage, or placing vehicles in unusual places. Or, hey, why not an elephant in the road?

Waymo is ready in case an elephant shows up.

Waymo’s early test cities, like Phoenix, were consistently sunny with little inclement weather, but its newer markets include places with more challenging conditions, such as Boston and Washington, D.C. These kinds of simulations could help the cars adapt to that more varied weather.

Of course, the benefit of the new AI model will depend on how accurately Genie 3 can simulate the real world. The test videos we’ve seen of Genie 3 run the gamut from pretty believable to uncanny valley territory, but Waymo believes the technology has improved to the point that it can teach self-driving cars a thing or two.


Neocities founder stuck in chatbot hell after Bing blocked 1.5 million sites


Microsoft won’t explain why Bing blocked 1.5 million Neocities websites.

Credit: Aurich Lawson | NeoCities

One of the weirdest corners of the Internet is suddenly hard to find on Bing, after the search engine inexplicably started blocking approximately 1.5 million independent websites hosted on Neocities.

Founded in 2013 to archive the “aesthetic awesomeness” of GeoCities websites, Neocities keeps the spirit of the 1990s Internet alive. It lets users design free websites without relying on standardized templates devoid of personality. For hundreds of thousands of people building websites around art, niche fandoms, and special expertise—or simply seeking a place to get a little weird online—Neocities provides a blank canvas that can be endlessly personalized in a way a Facebook page never could. Delighted visitors discovering these sites are more likely to navigate by hovering flashing pointers over a web of spinning GIFs than clicking a hamburger menu or infinitely scrolling.

That’s the style of Internet that Kyle Drake, Neocities’ founder, strives to maintain. So he was surprised when he noticed that Bing was curiously blocking Neocities sites last summer. At first, the issue seemed resolved after he contacted Microsoft, but after receiving more recent reports that users were struggling to log in, Drake discovered that another complete block was implemented in January. Even more concerning, after delisting the front page, Bing had started pointing users to a copycat site, where, he was alarmed to learn, they were entering their login credentials.

Monitoring stats, Drake was stunned to see that Bing traffic had suddenly dropped from about half a million daily visitors to zero. He immediately reported the issue using Bing webmaster tools. Concerned that Bing was not just disrupting traffic but possibly also putting Neocities users at risk if bad actors were gaming search results, he hoped for a prompt resolution.

“This one site that was just a copy of our front page, I didn’t know if it was a phishing attack or what it was, I was just like, ‘whoa, what the heck?’” Drake told Ars.

However, weeks went by as Drake hit wall after wall, submitting nearly a dozen tickets while trying to get past the Bing chatbot to find a support member to fix the issue. Frustrated, he tried other internal channels as well, including offering to buy ads to see if an ads team member could help.

“I tried everything,” Drake said, but nothing worked. Neocities sites remained unlisted on Bing.

Although Bing only holds about 4.5 percent of the global search engine market, Drake said it was “embarrassing” that Neocities sites can’t be discovered using the default Windows search engine. He also noted that many other search engines license Bing data, further compounding the issue.

Ultimately, it’s affecting a lot of people, Drake said, but he suspects that his support tickets are being buried under the flood of requests Bing receives each day from people wanting to improve their indexing.

“There’s probably an actual human being at Bing that actually could fix this,” Drake told Ars, but “when you go to the webmaster tools,” you’re stuck talking to an AI chatbot, and “it’s all kind of automated.”

Ars reached out to Microsoft for comment, and the company took action to remove some inappropriate blocks.

Within 24 hours, the Neocities front page appeared in search results, but tests Drake ran over the next few days showed that most subdomains were still being blocked, including popular Neocities sites that should rank highly.

Pressed to investigate further, Microsoft confirmed that some Neocities sites were delisted for violating policies designed to keep low-quality sites out of search results.

However, Microsoft would not identify which sites were problematic or directly connect with Neocities to resolve a seemingly significant number of ongoing site blocks that do not appear to be linked to violations. Instead, Microsoft recommended that Neocities find a way to work directly with Microsoft, despite Ars confirming that Microsoft is currently ignoring an open ticket.

For Drake, “the current state of things is unknown.” It’s hard to tell whether popular Neocities sites are still being blocked or whether Bing’s reindexing process is simply slow. Microsoft declined to clarify.

He’s still hoping that Microsoft will eventually resolve all the improper blocks, making it possible for Bing users to use the search engine not just to find businesses or information but also to discover creative people making websites just for fun. With so much AI slop invading social networks and search engines, Drake sees Neocities as “one of the last bastions of human content.”

“I hope we can resolve this amicably for both of us and that this doesn’t happen again in the future,” Drake said. “It’s really important for the future of the small web, and for quality content for web surfers in an increasingly generative AI world, that creative sites made by real humans are able to get a fair shot in search engine results.”

Bing deranked suspected phishing site

After Drake failed to quietly resolve the issue with Bing, he felt that he had no choice but to alert users to the potential risks from Bing’s delisting.

In a blog post in late January, Drake warned that Bing had “completely blocked” all Neocities subdomains from its search index. Even worse, “Bing was also placing what appeared to be a phishing attack against Neocities on the first page of search results,” Drake said.

“This is not only bad for search results, it’s very possible that it is actively dangerous,” Drake said.

After “several” complaints, Bing eventually deranked the suspected phishing site, Drake confirmed. But Bing “declined to reverse the block or provide a clear, actionable explanation for it,” which leaves Neocities users vulnerable, he said.

Since “it’s easy to get higher pagerank than a blocked site,” Drake warned that “it is possibly only a matter of time before another concerning site appears on Bing searches for Neocities.”

The blog emphasized that Google, the platform’s biggest traffic driver, was not blocking Neocities, nor was any search engine unlinked to Bing data. Urging a boycott that may force a resolution, Drake wrote, “we are recommending that Neocities users, and the broader Internet in general, not use Bing or search engines that source their results from Bing until this issue is resolved.

“If you use Bing or Bing-powered search engines, Neocities sites will not appear in your search results, regardless of content quality, originality, or compliance with webmaster guidelines,” Drake said. “If any Neocities-like sites appear on these results, they may be active phishing attacks against Neocities and should be treated with caution.”

Bing still blocking popular Neocities sites

Drake doesn’t want to boycott Bing, but in his blog, he said that Microsoft left him no choice but public disclosure:

“We did not want to write this post. We try very hard to have a good relationship with search engine providers. We would much rather quietly resolve this issue with Bing staff and move on. But after months of attempting to engage constructively through multiple channels, it became clear that silence only harms our users. Especially those who don’t realize their sites are invisible on some search engines.”

Drake told Ars that he thinks most people don’t realize how big Neocities has gotten since its early days reviving GeoCities’ spunk. The platform hosts 1,459,700 websites that have drawn in 13 billion visitors. Over the years, it has been profiled in Wired and The New York Times, and more recently, it has become a popular hub for gaming communities, Polygon reported.

Drake told Ars that as Neocities grew, much of his focus has been on improving content moderation. He works closely with a full-time dedicated content moderation staffer to take down any problematic sites within 24 hours, he said. That effort includes reviewing reports and proactively screening new sites, with Drake noting that “our domain name provider requires us to take them down within 48 hours.”

Microsoft prohibits things like scraping content that could be considered copyright infringement or automatically generating content using “garbage text” to game the rankings. It also monitors for malicious behavior like phishing, as well as for prompt injection attacks on Bing’s large language model.

It’s unclear what kinds of violations Microsoft found before instituting the complete block; however, Drake told Ars that he has yet to identify any content that may have triggered it. He said he would promptly remove any websites flagged by Microsoft, if only he could talk to someone who could share that information.

“Naturally, we still don’t catch 100 percent of the sites with proactive moderation, and occasionally some problematic sites do get missed,” Drake said.

Although Drake is curious to learn more about what triggered the blocks, he told Ars that it’s clear that non-violative sites are still invisible on Bing.

One of the longest-running and most popular Neocities sites, Wired Sound for Wired People, is a perfect example. The bizarre, somewhat creepy anime fanpage is “very popular” and “has a lot of links to it all over the web,” Drake said. Yet if you search for its subdomain, “fauux,” the site no longer appears in Bing search results, as of this writing, while Google reliably spits it out as the top result.

Drake said that he still believes that Bing is blocking content by mistake, but Bing’s automated support tools aren’t making it easy to defend creators who are randomly blocked by one of the world’s biggest search engines.

“We have one of the lowest ratios of crap to legitimate content, human-made content, on the Internet,” Drake said. “And it’s really frustrating to see that all these human beings making really cool sites that people want to go to are just not available on the default Windows search engine.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Neocities founder stuck in chatbot hell after Bing blocked 1.5 million sites

google-hints-at-big-airdrop-expansion-for-android-“very-soon”

Google hints at big AirDrop expansion for Android “very soon”

Android has its own AirDrop-like feature called Quick Share (formerly Google Nearby Share), but until recently, it couldn’t communicate with Apple’s AirDrop. As we reported in November, the European Union required Apple to implement the Wi-Fi Aware standard in AirDrop, which enabled Google to add support for the Pixel 10 lineup. Google confirmed it didn’t need to work with Apple at all to make that happen.

As part of the Quick Share updates, Google has added an extension to the Play Store that allows Quick Share to operate as a full, updatable APK rather than an element of Play Services. That should make it easier to roll out new features to the entire Android ecosystem. Currently, the extension only supports a smattering of Android phones, but we can expect that list to expand as AirDrop comes to more devices this year.

With AirDrop support, Android devices can send files to iOS and macOS devices without downloading third-party apps. However, the functionality requires Apple users to enable the “Everyone for 10 minutes” connectivity option. While Google can shoehorn Android into the Wi-Fi Aware system, it cannot use Apple’s contact-based sharing options. That probably won’t change with the pending update.

Of course, “very soon” in Google-speak can mean many things. The company does like to pair Android ecosystem updates with Pixel Drops, and the next one of those is expected in March, with changes to location privacy, At a Glance, and more.


google-court-filings-suggest-chromeos-has-an-expiration-date

Google court filings suggest ChromeOS has an expiration date

The documents suggest that Google will wash its hands of ChromeOS once the current support window closes. Google promises 10 years of Chromebook support, but that’s not counted from the date of purchase—Chromebooks are based on a handful of hardware platforms dictated by Google, with the most recent launching in 2023. That means Google has to support the newest devices through 2033. The “timeline to phase out ChromeOS is 2034,” says the filing.

Android goes big

From the start, the ChromeOS experience was focused on the web. Google initially didn’t even support running local apps, but little by little, its aspirations grew. Over the years, it has added Linux apps and Android apps. And it even tried to get Steam games running on Chromebooks—it gave up on that last one just recently. It also tried to shoehorn AI features into ChromeOS with the Chromebook Plus platform, to little effect.

Android was barely getting off the ground when ChromeOS began its journey, but as we approach the 2030s, Google clearly wants a more powerful desktop platform. Android has struggled on larger screens, but Aluminium is a long-running project to fix that. Whatever we see in 2028 may not even look like the Android we know from phones. It will have many of the same components under the hood, though.

Aluminium will have Google apps at the core.

Credit: US v. Google

Google could get everything it wants with the upcoming Aluminium release. When running on powerful laptop hardware, Android’s performance and capabilities should far outstrip ChromeOS. Aluminium is also expected to run Google apps like Chrome and the Play Store with special system privileges, leaving third-party apps with fewer features. That gives Google more latitude in how it manages the platform and retains users, all without running afoul of recent antitrust rulings.
