
Record scratch—Google’s Lyria 3 AI music model is coming to Gemini today

Sour notes

AI-generated music is not a new phenomenon. Several companies offer models that ingest and homogenize human-created music, and the resulting tracks can sound remarkably “real,” if a bit overproduced. Streaming services have already been inundated with phony AI artists, some of which have gathered thousands of listeners who may not even realize they’re grooving to the musical equivalent of a blender set to purée.

Still, you have to seek out tools like that, whereas Google is bringing similar capabilities directly to the Gemini app. Since Gemini is one of the most popular AI platforms, we’re probably about to see a lot more AI music on the Internet. Google says tracks generated with Lyria 3 will have an audio version of Google’s SynthID watermark embedded within. That means you’ll always be able to check whether a piece of audio was created with Google’s AI by uploading it to Gemini, similar to the way you can check images and videos for SynthID tags.

Google also says it has sought to create a music AI that respects copyright and partner agreements. If you name a specific artist in your prompt, Gemini won’t attempt to copy that artist’s sound; instead, it’s trained to treat the name as “broad creative inspiration.” Google notes that this process is not foolproof, though, and some output might still imitate an artist too closely. In those cases, Google invites users to report the offending content.

Lyria 3 is going live in the Gemini web interface today and should reach the mobile app within a few days. It works in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, and Google plans to add more languages soon. All users will have some access to music generation, but those with AI Pro and AI Ultra subscriptions will get higher usage limits; the specifics are unclear.


Google’s Pixel 10a arrives on March 5 for $499 with specs and design of yesteryear

It’s that time of year—a new budget Pixel phone is about to hit virtual shelves. The Pixel 10a will be available on March 5, and pre-orders go live today. The 9a will still be on sale for a while, but the 10a will be headlining Google’s store. However, you might not notice unless you keep up with the Pixel numbering scheme. This year’s A-series Pixel is virtually identical to last year’s, both inside and out.

Last year’s Pixel 9a was a notable departure from the older design language, but Google made few changes for 2026. We liked that the Pixel 9a emphasized battery capacity and moved to a flat camera bump, and this time, it’s really flat. Google says the camera now sits totally flush with the back panel. This is probably the only change you’ll be able to identify visually.

Specs at a glance: Google Pixel 9a vs. Pixel 10a
| Phone | Pixel 9a | Pixel 10a |
| --- | --- | --- |
| SoC | Google Tensor G4 | Google Tensor G4 |
| Memory | 8GB | 8GB |
| Storage | 128GB, 256GB | 128GB, 256GB |
| Display | 1080×2424 6.3″ pOLED, 60–120 Hz, Gorilla Glass 3, 2,700 nits (peak) | 1080×2424 6.3″ pOLED, 60–120 Hz, Gorilla Glass 7i, 3,000 nits (peak) |
| Cameras | 48 MP primary (f/1.7, OIS); 13 MP ultrawide (f/2.2); 13 MP selfie (f/2.2) | 48 MP primary (f/1.7, OIS); 13 MP ultrawide (f/2.2); 13 MP selfie (f/2.2) |
| Software | Android 15 (at launch), 7 years of OS updates | Android 16, 7 years of OS updates |
| Battery | 5,100 mAh, 23 W wired charging, 7.5 W wireless charging | 5,100 mAh, 30 W wired charging, 10 W wireless charging |
| Connectivity | Wi-Fi 6E, NFC, Bluetooth 5.3, sub-6 GHz 5G, USB-C 3.2 | Wi-Fi 6E, NFC, Bluetooth 6.0, sub-6 GHz 5G, USB-C 3.2 |
| Measurements | 154.7×73.3×8.9 mm; 185 g | 153.9×73×9 mm; 183 g |

Google also says the new Pixel will have a slightly upgraded screen. The resolution, size, and refresh rate are unchanged, but peak brightness has been bumped from 2,700 nits to 3,000 nits (the same as the base model Pixel 10). Plus, the cover glass has finally moved beyond Gorilla Glass 3 to Gorilla Glass 7i, which supposedly has improved scratch and drop protection.

Pixel 10a in Berry

Credit: Google

Google notes that more of the phone is constructed from recycled material: 100 percent of the aluminum frame and 81 percent of the plastic back. There’s also recycled gold, tungsten, cobalt, and copper inside, amounting to about 36 percent of the phone’s weight. The phone retains a physical SIM slot, which was removed from the Pixel 10 series last year. Its USB-C 3.2 port charges slightly faster than the 9a’s (30 W versus 23 W), and wireless charging has gone from 7.5 W to 10 W. There are no Qi2 magnets inside, though.

Internally, the Pixel 10a is even more like its predecessor. Unlike past A-series phones, this one doesn’t have the latest Tensor chip—it’s sticking with the same Tensor G4 from the 9a. That’s a bummer, as the G5 was a bigger leap than most of Google’s chip upgrades. The company says it stuck with the G4 to “balance affordability and performance.”


Platforms bend over backward to help DHS censor ICE critics, advocates say


Pam Bondi and Kristi Noem sued for coercing platforms into censoring ICE posts.

Credit: Aurich Lawson | Getty Images

Pressure is mounting on tech companies to shield users from unlawful government requests that advocates say are making it harder to reliably share information about Immigration and Customs Enforcement (ICE) online.

Alleging that ICE officers are being doxed or otherwise endangered, Trump officials have spent the last year targeting an unknown number of users and platforms with demands to censor content. Early lawsuits show that platforms have caved, even though experts say they could refuse these demands without a court order.

In a lawsuit filed on Wednesday, the Foundation for Individual Rights and Expression (FIRE) accused Attorney General Pam Bondi and Department of Homeland Security Secretary Kristi Noem of coercing tech companies into removing a wide range of content “to control what the public can see, hear, or say about ICE operations.”

It’s the second lawsuit alleging that Bondi and DHS officials are using regulatory power to pressure private platforms to suppress speech protected by the First Amendment. It follows a complaint from the developer of an app called ICEBlock, which Apple removed from the App Store in October. Officials aren’t rushing to resolve that case—last month, they requested more time to respond—so it may remain unclear until March what defense they plan to offer for the takedown demands.

That leaves community members who monitor ICE in a precarious situation, as critical resources could disappear at the department’s request with no warning.

FIRE says people have legitimate reasons to share information about ICE. Some communities focus on helping people avoid dangerous ICE activity, while others aim to hold the government accountable and raise public awareness of how ICE operates. Unless there’s proof of incitement to violence or a true threat, such expression is protected.

Despite the high bar for censoring online speech, lawsuits trace an escalating pattern of DHS targeting websites, app stores, and platforms—many of which have been willing to remove content the government dislikes.

Officials have ordered ICE-monitoring apps to be removed from app stores and even threatened to sanction CNN for simply reporting on the existence of one such app. Officials have also demanded that Meta delete at least one Chicago-based Facebook group with 100,000 members and made multiple unsuccessful attempts to unmask anonymous users behind other Facebook groups. Even encrypted apps like Signal don’t feel safe from officials’ seeming overreach. FBI Director Kash Patel recently said he has opened an investigation into Signal chats used by Minnesota residents to track ICE activity, NBC News reported.

As DHS censorship threats increase, platforms have done little to shield users, advocates say. Not only have they sometimes failed to reject unlawful orders that provided only “a bare mention of ‘officer safety/doxing’” as justification, but in one case, Google complied with a subpoena that left a critical section blank, the Electronic Frontier Foundation (EFF) reported.

For users, it’s increasingly difficult to trust that platforms won’t betray their own policies when faced with government intimidation, advocates say. Sometimes platforms notify users before complying with government requests, giving users a chance to challenge potentially unconstitutional demands. But in other cases, users learn about the requests only as platforms comply with them—even when those platforms have promised that would never happen.

Government emails with platforms may be exposed

Platforms could face backlash from users if lawsuits expose their communications with the government, a possibility in the coming months. Last fall, the EFF sued after the DOJ, DHS, ICE, and Customs and Border Protection failed to respond to Freedom of Information Act requests seeking emails between the government and platforms about takedown demands. Other lawsuits may surface emails in discovery. In the coming weeks, a judge will set a schedule for the EFF’s litigation.

“The nature and content of the Defendants’ communications with these technology companies” is “critical for determining whether they crossed the line from governmental cajoling to unconstitutional coercion,” EFF’s complaint said.

EFF Senior Staff Attorney Mario Trujillo told Ars that the EFF is confident it can win the fight to expose government demands, but like most FOIA lawsuits, the case is expected to move slowly. That’s unfortunate, he said, because ICE activity is escalating, and delays in addressing these concerns could irreparably harm speech at a pivotal moment.

Like users, platforms are seemingly victims, too, FIRE senior attorney Colin McDonnell told Ars.

They’ve been forced to override their own editorial judgment while navigating implicit threats from the government, he said.

“If Attorney General Bondi demands that they remove speech, the platform is going to feel like they have to comply; they don’t have a choice,” McDonnell said.

But platforms do have a choice and could be doing more to protect users, the EFF has said. Platforms could even serve as a first line of defense, requiring officials to get a court order before complying with any requests.

Platforms may now have good reason to push back against government requests—and to give users the tools to do the same. Trujillo noted that while courts have been slow to address the ICEBlock removal and FOIA lawsuits, the government has quickly withdrawn requests to unmask Facebook users soon after litigation began.

“That’s like an acknowledgement that the Trump administration, when actually challenged in court, wasn’t even willing to defend itself,” Trujillo said.

Platforms could view that as evidence that government pressure only works when platforms fail to put up a bare-minimum fight, Trujillo said.

Platforms “bend over backward” to appease DHS

An open letter from the EFF and the American Civil Liberties Union (ACLU) documented two instances of tech companies complying with government demands without first notifying users.

The letter called out Meta for unmasking at least one user without prior notice, which the groups noted “potentially” occurred due to a “technical glitch.”

More troubling than buggy notifications, however, is the possibility that platforms may be routinely delaying notice until it’s too late.

After Google “received an ICE subpoena for user data and fulfilled it on the same day that it notified the user,” the company admitted that “sometimes when Google misses its response deadline, it complies with the subpoena and provides notice to a user at the same time to minimize the delay for an overdue production,” the letter said.

“This is a worrying admission that violates [Google’s] clear promise to users, especially because there is no legal consequence to missing the government’s response deadline,” the letter said.

Platforms face no sanctions for refusing to comply with government demands that have not been court-ordered, the letter noted. That’s why the EFF and ACLU have urged companies to use their “immense resources” to shield users who may not be able to drop everything and fight unconstitutional data requests.

In their letter, the groups asked companies to insist on court intervention before complying with a DHS subpoena. They should also resist DHS “gag orders” that ask platforms to hand over data without notifying users.

Instead, they should commit to giving users “as much notice as possible when they are the target of a subpoena,” as well as a copy of the subpoena. Ideally, platforms would also link users to legal aid resources and take up legal fights on behalf of vulnerable users, advocates suggested.

That’s not what’s happening so far. Trujillo told Ars that it feels like “companies have bent over backward to appease the Trump administration.”

The tide could turn this year if courts side with app makers behind crowdsourcing apps like ICEBlock and Eyes Up, who are suing to end the alleged government coercion. FIRE’s McDonnell, who represents the creator of Eyes Up, told Ars that platforms may feel more comfortable exercising their own editorial judgment moving forward if a court declares they were coerced into removing content.

DHS can’t use doxing to dodge First Amendment

FIRE’s lawsuit accuses Bondi and Noem of coercing Meta to disable a Facebook group with 100,000 members called “ICE Sightings–Chicagoland.”

The popularity of that group surged during “Operation Midway Blitz,” when hundreds of agents arrested more than 4,500 people over weeks of raids in which agents deployed tear gas in neighborhoods and caused car crashes and other violence. Arrests included US citizens and immigrants with lawful status, which “gave Chicagoans reason to fear being injured or arrested due to their proximity to ICE raids, no matter their immigration status,” FIRE’s complaint said.

Kassandra Rosado, a lifelong Chicagoan and US citizen of Mexican descent, started the Facebook group and served as admin, moderating content with other volunteers. She prohibited “hate speech or bullying” and “instructed group members not to post anything threatening, hateful, or that promoted violence or illegal conduct.”

Facebook only ever flagged five posts that supposedly violated community guidelines, but in warnings, the company reassured Rosado that “groups aren’t penalized when members or visitors break the rules without admin approval.”

Rosado had no reason to suspect that her group was in danger of removal. When Facebook disabled her group, it told Rosado the group violated community standards “multiple times.” But her complaint noted that, confusingly, “Facebook policies don’t provide for disabling groups if a few members post ostensibly prohibited content; they call for removing groups when the group moderator repeatedly either creates prohibited content or affirmatively ‘approves’ such content.”

Facebook’s decision came after a right-wing influencer, Laura Loomer, tagged Noem and Bondi in a social media post alleging that the group was “getting people killed.” Within two days, Bondi bragged that she had gotten the group disabled while claiming that it “was being used to dox and target [ICE] agents in Chicago.”

McDonnell told Ars it seems clear that Bondi selectively uses the term “doxing” when people post images from ICE arrests. He pointed to “ICE’s own social media accounts,” which share favorable opinions of ICE alongside videos and photos of ICE arrests that Bondi doesn’t consider doxing.

“Rosado’s creation of Facebook groups to send and receive information about where and how ICE carries out its duties in public, to share photographs and videos of ICE carrying out its duties in public, and to exchange opinions about and criticism of ICE’s tactics in carrying out its duties, is speech protected by the First Amendment,” FIRE argued.

The same goes for speech managed by Mark Hodges, a US citizen who resides in Indiana. He created an app called Eyes Up to serve as an archive of ICE videos. Apple removed Eyes Up from the App Store around the same time that it removed ICEBlock.

“It is just videos of what government employees did in public carrying out their duties,” McDonnell said. “It’s nothing even close to threatening or doxing or any of these other theories that the government has used to justify suppressing speech.”

Bondi bragged that she had gotten ICEBlock banned, and FIRE’s complaint confirmed that Hodges’ company received the same notification that ICEBlock’s developer got after Bondi’s victory lap. The notice said that Apple received “information” from “law enforcement” claiming that the apps had violated Apple guidelines against “defamatory, discriminatory, or mean-spirited content.”

Apple did not reach the same conclusion when it independently reviewed Eyes Up prior to the government’s meddling, FIRE’s complaint said. Notably, the app remains available on Google Play, and Rosado now manages a new Facebook group with similar content but somewhat tighter restrictions on who can join. Neither has drawn urgent intervention from the tech giants or the government.

McDonnell told Ars that it’s harmful for DHS to water down the meaning of doxing when pushing platforms to remove content critical of ICE.

“When most of us hear the word ‘doxing,’ we think of something that’s threatening, posting private information along with home addresses or places of work,” McDonnell said. “And it seems like the government is expanding that definition to encompass just sharing, even if there’s no threats, nothing violent. Just sharing information about what our government is doing.”

Expanding the definition and then using that term to justify suppressing speech is concerning, he said, especially since the First Amendment includes no exception for “doxing,” even if DHS ever were to provide evidence of it.

To suppress speech, officials must show that groups are inciting violence or making true threats. FIRE has alleged that the government has not met “the extraordinary justifications required for a prior restraint” on speech and is instead using vague doxing claims to discriminate against speech based on viewpoint. FIRE is seeking a permanent injunction barring officials from coercing tech companies into censoring ICE posts.

If plaintiffs win, the censorship threats could subside, and tech companies may feel safe reinstating apps and Facebook groups, advocates told Ars. That could potentially revive archives documenting thousands of ICE incidents and reconnect webs of ICE watchers who lost access to valued feeds.

Until courts possibly end threats of censorship, the most cautious community members are moving local ICE-watch efforts to group chats and listservs that are harder for the government to disrupt, Trujillo told Ars.


The first Android 17 beta is now available on Pixel devices

In short, the first Android 17 beta is chock-full of things that may interest developers and modders, but there’s little in the way of user-facing changes right now.

Android 17 release schedule

Google has made some notable changes to how it releases Android updates, and Android 17 continues the trend. Like last year, there will be two Android 17 releases in 2026. The first one, coming in Q2, will be the more significant of the two. It will include a raft of new APIs, behavioral changes, and feature updates. This split release setup was implemented to better align with when major OEMs release new devices, but Android 17 availability still focuses mainly on Pixels. Google’s phones receive immediate updates, but everyone else has to wait for OEMs to roll out updates over the following weeks or months.

At the end of the year, another version (you can think of it as Android 17.1 even though Google doesn’t give it a name) will become available on supported devices. This “minor SDK release” will include some API and feature changes, but Google doesn’t have any details at this time.

Android release schedule

Credit: Google

Before we get to that, Google plans to launch a second beta release in March. The company says Beta 2 will include final APIs, allowing developers to complete testing and roll out updates. Developers will have “several months” to get that work done before the final version hits Pixels.

In 2025, Google also changed the way it updates the open source parts of Android. Rather than regular code dumps, Google now only updates the Android Open Source Project (AOSP) twice yearly, in the second and fourth quarters, when new versions are released. That makes it harder to know what to expect from upcoming versions of Android, but Google insists this is more efficient.

If you want to check out Android 17 today, you’ll need a Pixel device. The beta supports the Pixel 6, Pixel 7, Pixel 8, Pixel 9, and Pixel 10 generations, plus the Pixel Tablet and the original Pixel Fold. Other phone makers may release beta builds in the weeks ahead, but it’s a Google-only event for now. You can opt in to receive the Android 17 OTA on the beta program website.


It took two years, but Google released a YouTube app on Vision Pro

When Apple’s Vision Pro mixed reality headset launched in February 2024, users were frustrated at the lack of a proper YouTube app—a significant disappointment given the device’s focus on video content consumption, and YouTube’s strong library of immersive VR and 360 videos. That complaint continued through the release of the second-generation Vision Pro last year, including in our review.

Now, two years later, an official YouTube app from Google has launched on the Vision Pro’s app store. It’s not just a port of the iPad app, either—it has panels arranged spatially in front of the user as you’d expect, and it supports 3D videos, as well as 360- and 180-degree ones.

YouTube’s App Store listing says users can watch “every video on YouTube” (there’s a screenshot of a special interface for Shorts vertical videos, for example) and that they get “the full signed-in experience” with watch history and so on.

Shortly after the Vision Pro launched, many users complained to YouTube about the lack of an app. They were referred to the web interface—which worked OK for most 2D videos, but it obviously wasn’t an ideal experience—and were told that a Vision Pro app was on the roadmap.

Two years of silence followed. Third-party apps popped up, like the relatively popular Juno, but that app was pulled from the App Store after Google claimed it violated API policies. (Some others remained or became available later.)

Google is building out its own XR ambitions, so it’s possible the Vision Pro app benefited from some of that work, but it’s unclear how this all came to be. Either way, it’s here now. Next up: Netflix, right? Sadly, that’s unlikely; unlike Google, Netflix has given no indication that it plans a Vision Pro app.


Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

On Thursday, Google announced that “commercially motivated” actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products that frames the company as the victim and the hero, which is not unusual in these self-authored assessments. Google calls the illicit activity “model extraction” and considers it intellectual property theft, which is a somewhat loaded position, given that Google’s LLM was built from materials scraped from the Internet without permission.

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google’s Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI’s terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.

Even so, Google’s terms of service forbid people from extracting data from its AI models this way, and the report is a window into the world of somewhat shady AI model-cloning tactics. The company believes the culprits are mostly private companies and researchers looking for a competitive edge, and said the attacks have come from around the world. Google declined to name suspects.

The deal with distillation

Typically, the industry calls this practice of training a new model on a previous model’s outputs “distillation,” and it works like this: If you want to build your own large language model (LLM) but lack the billions of dollars and years of work that Google spent training Gemini, you can use a previously trained LLM as a shortcut.
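
For illustration, here is a minimal sketch of what that shortcut looks like in practice, assuming a hypothetical `teacher_complete` API client; this is not a reconstruction of the attackers’ actual tooling:

```python
import json

def collect_teacher_outputs(prompts, teacher_complete, out_path="distill.jsonl"):
    """Query a 'teacher' model and log prompt/response pairs for training."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            response = teacher_complete(prompt)  # one API call per prompt
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")

# The logged pairs then become supervised fine-tuning data for a smaller
# "student" model. At the scale Google describes (100,000+ prompts across
# many languages), such a dataset can approximate much of the teacher's
# behavior at a fraction of the original training cost.
```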


Google recovers “deleted” Nest video in high-profile abduction case

Suspect attempts to cover the camera with a plant.

According to statements from investigators, the video was “recovered from residual data located in backend systems.” It’s unclear how long such data is retained or how easy it is for Google to access it. Some reports claim that it took several days for Google to recover the data.

In large-scale enterprise storage systems, “deleted” for the user doesn’t always mean the data is gone. Records the system no longer needs are often just marked for deletion and are compressed or overwritten only when the space is required. In the meantime, it may still be possible to recover the data. That’s something a company like Google could decide to do on its own, or it could be compelled to perform the recovery by a court order. In the Guthrie case, it sounds like Google was voluntarily cooperating with the investigation, which makes sense. Publishing video of the alleged perpetrator could be a major breakthrough as investigators seek help from the public.
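
As a minimal sketch of the pattern, not Google’s actual backend, here’s how tombstone-style deletion leaves data recoverable until a compaction pass reclaims it:

```python
import time

class SoftDeleteStore:
    """Toy key-value store illustrating tombstone-style deletion."""

    def __init__(self):
        self._records = {}  # key -> (value, deleted_at timestamp or None)

    def put(self, key, value):
        self._records[key] = (value, None)

    def delete(self, key):
        # "Deleting" only writes a tombstone; the bytes stay put.
        value, _ = self._records[key]
        self._records[key] = (value, time.time())

    def get(self, key):
        value, deleted_at = self._records.get(key, (None, None))
        return None if deleted_at else value  # the user sees it as gone

    def recover(self, key):
        # An operator can still read the raw record until compaction.
        return self._records.get(key, (None, None))[0]

    def compact(self, retention_seconds):
        # Physically drop tombstoned records older than the retention window.
        now = time.time()
        self._records = {
            k: (v, d) for k, (v, d) in self._records.items()
            if d is None or now - d < retention_seconds
        }
```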

It’s not your cloud

There is a temptation to ascribe some malicious intent to Google’s video storage setup. After all, this video expired after three hours, but here it is nine days later. That feels a bit suspicious on the surface, particularly for a company that is so focused on training AI models that feed on video.

We have previously asked Google to explain how it uses Nest to train AI models, and the company claims it does not incorporate user videos into training data, but the way you interact with the service and with your videos is fair game. “We may use your inputs, including prompts and feedback, usage, and outputs from interactions with AI features to further research, tune, and train Google’s generative models, machine learning technologies, and related products and services,” Google said.


Upgraded Google safety tools can now find and remove more of your personal info

Do you feel popular? There are people on the Internet who want to know all about you! Unfortunately, they don’t have the best of intentions, but Google has some handy tools to address that, and they’ve gotten an upgrade today. The “Results About You” tool can now detect and remove more of your personal information. Plus, the tool for removing non-consensual explicit imagery (NCEI) is faster to use. All you have to do is tell Google your personal details first—that seems safe, right?

With today’s upgrade, Results About You gains the ability to find and remove pages that include ID numbers like your passport, driver’s license, and Social Security numbers. You can add these to Google’s ongoing scans from the settings in Results About You. Just click the ID numbers section to enable detection.

Naturally, Google has to know what it’s looking for to remove it. So you need to provide at least part of those numbers. Google asks for the full driver’s license number, which is fine, as it’s not as sensitive. For your passport and SSN, you only need the last four digits, which is enough for Google to find the full numbers on webpages.
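
As a purely hypothetical illustration of why the last four digits are enough, a scanner only has to compare them against full numbers it finds in page text:

```python
import re

# Hypothetical sketch: flag pages whose text contains an SSN-like
# pattern ending in the four digits the user supplied.
SSN_PATTERN = re.compile(r"\b(\d{3})-?(\d{2})-?(\d{4})\b")

def page_exposes_ssn(page_text: str, last_four: str) -> bool:
    return any(match.group(3) == last_four
               for match in SSN_PATTERN.finditer(page_text))

print(page_exposes_ssn("SSN on file: 123-45-6789", "6789"))    # True
print(page_exposes_ssn("Order #987-65-4321 shipped", "6789"))  # False
```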

ID number results detected.

The NCEI tool is geared toward hiding real, explicit images as well as deepfakes and other types of artificial sexualized content. This kind of content is rampant on the Internet right now due to the rapid rise of AI. What used to require Photoshop skills is now just a prompt away, and some AI platforms hardly do anything to prevent it.


Alphabet selling very rare 100-year bonds to help fund AI investment

Tony Trzcinka, a US-based senior portfolio manager at Impax Asset Management, which purchased Alphabet’s bonds last year, said he skipped Monday’s offering because of insufficient yields and concerns about overexposure to companies with complex financial obligations tied to AI investments.

“It wasn’t worth it to swap into new ones,” Trzcinka said. “We’ve been very conscious of our exposure to these hyperscalers and their capex budgets.”

Big Tech companies and their suppliers are expected to invest almost $700 billion in AI infrastructure this year and are increasingly turning to the debt markets to finance the giant data center build-out.

Alphabet in November sold $17.5 billion of bonds in the US, including a 50-year bond—the longest-dated dollar bond sold by a tech group last year—and raised €6.5 billion on European markets.

Oracle last week raised $25 billion from a bond sale that attracted more than $125 billion of orders.

Alphabet, Amazon, and Meta all increased their capital expenditure plans during their most recent earnings reports, prompting questions about whether they will be able to fund the unprecedented spending spree from their cash flows alone.

Last week, Google’s parent company reported annual sales that topped $400 billion for the first time, beating investors’ expectations for revenues and profits in the most recent quarter. It said it planned to spend as much as $185 billion on capex this year, roughly double last year’s total, to capitalize on booming demand for its Gemini AI assistant.

Alphabet’s long-term debt jumped to $46.5 billion in 2025, more than four times the previous year’s level, though it held cash and equivalents of $126.8 billion at year-end.

Investor demand was the strongest on the shortest portion of Monday’s deal, with a three-year offering pricing at only 0.27 percentage points above US Treasuries, versus 0.6 percentage points during initial price discussions, said people familiar with the deal.

The longest portion of the offering, a 40-year bond, is expected to yield 0.95 percentage points over US Treasuries, down from 1.2 percentage points during initial talks, the people said.

Bank of America, Goldman Sachs, and JPMorgan are the bookrunners on the bond sales across three currencies. All three declined to comment or did not immediately respond to requests for comment.

Alphabet did not immediately respond to a request for comment.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Google experiments with locking YouTube Music lyrics behind paywall

The app’s lyrics feature allows listeners to follow along as a song plays. In the test, however, free users see only the first few lines; after that, the lyrics are blurred. Users who want to keep seeing lyrics are advised to upgrade to a premium account, which costs $14 per month for both YouTube video and music or $11 per month for music only. The subscription also removes ads and adds features like downloads and higher-quality video streams.

The new paywall in YouTube Music. Credit: /u/MrYeet22836 and /u/Vegetable_Common188

This change is not without precedent. Spotify began restricting access to lyrics for free users in 2024. However, the response was so ferociously negative that the company backtracked and restored lyric access to those on ad-supported accounts. YouTube Music doesn’t have the same reach as Spotify, which may help soften the social media shame. Many subscribers are also getting the premium service just because they’re paying for ad-free YouTube and may never know there’s been a change to lyric availability.

As Google has ratcheted up restrictions on free YouTube accounts, the service has only made more money. In its most recent earnings report, Google reported $60 billion in annual YouTube revenue across both ads and subscriptions (both YouTube Premium and YouTube TV). That’s almost $10 billion more than the year before.

Lyrics in YouTube Music are provided by third parties that Google has to pay, so it’s not surprising that Google is looking for ways to cover the cost. It is, however, a little surprising that the company hasn’t just used AI to generate lyrics for free. Google has recently tested the patience of YouTube users with a spate of AI features, like unannounced AI upscaling, fake DJs, and comment summaries.

This story was updated with Google’s response. 


Why Darren Aronofsky thought an AI-generated historical docudrama was a good idea


We hold these truths to be self-evident

Production source says it takes “weeks” to produce just minutes of usable video.

Artist’s conception of critics reacting to the first episodes of “On This Day… 1776” Credit: Primordial Soup

Last week, filmmaker Darren Aronofsky’s AI studio Primordial Soup and Time magazine released the first two episodes of On This Day… 1776. The year-long series of short-form videos offers daily vignettes describing what happened on that day of the American Revolution 250 years ago, but it does so using “a variety of AI tools” to produce photorealistic scenes containing avatars of historical figures like George Washington, Thomas Paine, and Benjamin Franklin.

In announcing the series, Time Studios President Ben Bitonti said the project provides “a glimpse at what thoughtful, creative, artist-led use of AI can look like—not replacing craft but expanding what’s possible and allowing storytellers to go places they simply couldn’t before.”

The trailer for “On This Day… 1776.”

Outside critics were decidedly less excited about the effort. The AV Club took the introductory episodes to task for “repetitive camera movements [and] waxen characters” that make for “an ugly look at American history.” CNET said that this “AI slop is ruining American history,” calling the videos a “hellish broth of machine-driven AI slop and bad human choices.” The Guardian lamented that the “once-lauded director of Black Swan and The Wrestler has drowned himself in AI slop,” calling the series “embarrassing,” “terrible,” and “ugly as sin.” I could go on.

But this kind of initial reaction apparently hasn’t deterred Primordial Soup from its still-evolving efforts. A source close to the production, who requested anonymity to speak frankly about details of the series’ creation, told Ars that the quality of new episodes would improve as the team’s AI tools are refined throughout the year and as the team learns to better use them.

“We’re going into this fully assuming that we have a lot to learn, that this process is gonna evolve, the tools we’re using are gonna evolve,” the source said. “We’re gonna make mistakes. We’re gonna learn a lot… we’re going to get better at it, [and] the technology will change. We’ll see how audiences are reacting to certain things, what works, what doesn’t work. It’s a huge experiment, really.”

Not all AI

It’s important to note that On This Day… 1776 is not fully crafted by AI. The script, for instance, was written by a team of writers overseen by Aronofsky’s longtime writing partners Ari Handel and Lucas Sussman, as noted by The Hollywood Reporter. That makes criticisms like the Guardian’s of “ChatGPT-sounding sloganeering” in the first episodes both somewhat misplaced and hilariously harsh.

Our production source says the project was always conceived as a human-written effort and that the team behind it had long been planning and researching how to tell this kind of story. “I don’t think [they] even needed that kind of help or wanted that kind of [AI-powered writing] help,” they said. “We’ve all experimented with [AI-powered] writing and the chatbots out there, and you know what kind of quality you get out of that.”

What you see here is not a real human actor, but his lines were written and voiced by humans. Credit: Primordial Soup

The producers also go out of their way to note that all the dialogue in the series is recorded directly by Screen Actors Guild voice actors, not by AI facsimiles. While recently negotiated union rules might have something to do with that, our production source also said the AI-generated voices the team used for temp tracks were noticeably artificial and not ready for a professional production.

Humans are also directly responsible for the music, editing, sound mixing, visual effects, and color correction for the project, according to our source. The only place the “AI-powered tools” come into play is in the video itself, which is crafted with what the announcement calls a “combination of traditional filmmaking tools and emerging AI capabilities.”

In practice, our source says, that means humans create storyboards, find visual references for locations and characters, and set up how they want shots to look. That information, along with the script, gets fed into an AI video generator that creates individual shots one at a time, to be stitched together and cleaned up by humans in traditional post-production.

That process takes the AI-generated cinema conversation one step beyond Ancestra, a short film Primordial Soup released last summer in association with Google DeepMind (which is not involved with the new project). There, AI tools were used to augment “live-action scenes with sequences generated by Veo.”

“Weeks” of prompting and re-prompting

In theory, having an AI model generate a scene in minutes might save a lot of time compared to traditional filmmaking—scouting locations, hiring actors, setting up cameras and sets, and the like. But our production source said the highly iterative process of generating and perfecting shots for On This Day… 1776 still takes “weeks” for each minutes-long video and that “more often than not, we’re pushing deadlines.”

The first episode of On This Day… 1776 features a dramatic flag raising.

Even though the AI model is essentially animating photorealistic avatars, the source said the process is “more like live action filmmaking” because of the lack of fine-grained control over what the video model will generate. “You don’t know if you’re gonna get what you want on the first take or the 12th take or the 40th take,” the source said.

While some shots take less time to get right than others, our source said the AI model rarely produces a perfect, screen-ready shot on the first try. And while some small issues in an AI-generated shot can be papered over in post-production with visual effects or careful editing, most of the time, the team has to go back and tell the model to generate a completely new video with small changes.
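
To make the workflow concrete, here’s a purely hypothetical sketch of that generate-and-review loop; `generate_shot` stands in for whatever video model the production actually uses:

```python
def produce_shot(prompt, generate_shot, human_review, max_takes=40):
    """Re-prompt the model until a human approves a take (or we give up)."""
    notes = ""
    for take in range(1, max_takes + 1):
        clip = generate_shot(prompt + notes)
        approved, feedback = human_review(clip)  # humans judge every take
        if approved:
            return clip, take
        # Review notes (e.g., "light should land on the face differently")
        # steer the next full regeneration; shots can't be patched in place.
        notes = f" Revision: {feedback}"
    raise RuntimeError("no usable take within the take budget")
```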

“It still takes a lot of work, and it’s not necessarily because it’s wrong, per se, so much as trying to get the right control because you [might] want the light to land on the face in the right way to try to tell the story,” the source said. “We’re still, we’re still striving for the same amount of control that we always have [with live-action production] to really maximize the story and the emotion.”

Quick shots and smaller budgets

Though video models have advanced since the days of the nightmarish clip of Will Smith eating spaghetti, hallucinations and nonsensical images are “still a problem” in producing On This Day… 1776, according to our source. That’s one of the reasons the company decided to use a series of short-form videos rather than a full-length movie telling the same essential story.

“It’s one thing to stay consistent within three minutes. It’s a lot harder and it takes a lot more work to stay consistent within two hours,” the source said. “I don’t know what the upper limit is now [but] the longer you get, the more things start to fall off.”

Stills from an AI-generated video of Will Smith eating spaghetti.

We’ve come a long way from the circa-2023 videos of Will Smith eating spaghetti. Credit: chaindrop / Reddit

Keeping individual shots short also allows for more control and fewer “reshoots” for an AI-animated production like this. “When you think about it, if you’re trying to create a 20-second clip, you have all these things that are happening, and if one of those things goes wrong in 20 seconds, you have to start over,” our source said. “And the chance of something going wrong in 20 seconds is pretty high. The chance of something going wrong in eight seconds is a lot lower.”
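
The source’s intuition maps onto simple probability: if each generated second carries an independent chance of a visible glitch, the odds of a clean clip fall off exponentially with length. A quick illustration, using an arbitrary 5-percent-per-second glitch rate:

```python
# P(clean clip) = (1 - p) ** t for per-second glitch probability p.
p = 0.05  # assumed glitch rate; purely illustrative

for seconds in (8, 20):
    clean = (1 - p) ** seconds
    print(f"{seconds:>2}-second clip: {clean:.0%} chance of no glitch")

#  8-second clip: 66% chance of no glitch
# 20-second clip: 36% chance of no glitch
```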

While our production source couldn’t give specifics on how much the team was spending to generate so much AI-modeled video, they did suggest that the process was still a good deal cheaper than filming a historical docudrama like this on location.

“I mean, we could never achieve what we’re doing here for this amount of money, which I think is pretty clear when you watch this,” they said. In future episodes, the source promised, “you’ll see where there’s things that cameras just can’t even do” as a way to “make the most of that medium.”

“Let’s see what we can do”

If you’ve been paying attention to how fast things have been moving with AI-generated video, you might think that AI models will soon be able to produce Hollywood-quality cinema with nothing but a simple prompt. But our source said that working on On This Day… 1776 highlights just how important it is for humans to still be in the loop on something like this.

“Personally, I don’t think we’re ever gonna get there [replacing human editors],” they said. “We actually desperately need an editor. We need another set of eyes who can look at the cut and say, ‘If we get out of this shot a little early, then we can create a little bit of urgency. If we linger on this thing a little longer…’ You still really need that.”

AI Ben Franklin and AI Thomas Paine toast to the war propaganda effort. Credit: Primordial Soup

That could be good news for human editors. But On This Day… 1776 also suggests a world where on-screen (or even motion-captured) human actors are fully replaced by AI-generated avatars. When I asked our source why the producers felt that AI was ready to take over that specifically human part of the film equation, though, the response surprised me.

“I don’t know that we do know that, honestly,” they said. “I think we know that the technology is there to try. And I think as storytellers we’re really interested in using… all the different tools that we can to try to get our story across and to try to make audiences feel something.”

“It’s not often that we have huge new tools like this,” the source continued. “I mean, it’s never happened in my lifetime. But when you do [get these new tools], you want to start playing with them… We have to try things in order to know if it works, if it doesn’t work.”

“So, you know, we have the tools now. Let’s see what we can do.”


Waymo leverages Genie 3 to create a world model for self-driving cars

On the road with AI

The Waymo World Model is not just a straight port of Genie 3 with dashcam videos stuffed inside. Waymo and DeepMind used a specialized post-training process to make the new model generate both 2D video and 3D lidar outputs of the same scene. While cameras are great for visualizing fine details, Waymo says lidar is necessary to add critical depth information to what a self-driving car “sees” on the road—maybe someone should tell Tesla about that.

Using a world model allows Waymo to take video from its vehicles and use prompts to change the route the vehicle takes, which it calls driving action control. These simulations, which come with lidar maps, reportedly offer greater realism and consistency than older reconstructive simulation methods.

With the world model, Waymo can see what would happen if the car took a different turn.

This model can also help improve the self-driving AI even without adding or removing anything. There are plenty of dashcam videos available for training self-driving vehicles, but they lack the multimodal sensor data of Waymo’s vehicles. Dropping such a video into the Waymo World Model generates matching sensor data, showing how the driving AI would have seen that situation.

While the Waymo World Model can create entirely synthetic scenes, the company seems mostly interested in “mutating” the conditions in real videos. The blog post contains examples of changing the time of day or weather, adding new signage, or placing vehicles in unusual places. Or, hey, why not an elephant in the road?
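
The field names below are illustrative, not Waymo’s actual API, but a mutation request of the kind the post describes might look something like this:

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioMutation:
    """Hypothetical request to 'mutate' a logged real-world drive."""
    source_log: str                      # real drive to start from
    time_of_day: str | None = None       # e.g., "dusk"
    weather: str | None = None           # e.g., "heavy rain"
    inserted_objects: list[str] = field(default_factory=list)

request = ScenarioMutation(
    source_log="drive_phoenix_0412",
    weather="heavy rain",
    inserted_objects=["elephant in the right lane"],
)
# The world model would re-render both camera video and lidar for the
# altered scene while keeping the rest of the original drive consistent.
```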

Waymo is ready in case an elephant shows up.

Waymo’s early test cities, like Phoenix, were consistently sunny, with little inclement weather. These kinds of simulations could help the cars adapt to more varied conditions in newer markets with more difficult weather, such as Boston and Washington, D.C.

Of course, the benefit of the new AI model will depend on how accurately Genie 3 can simulate the real world. The test videos we’ve seen of Genie 3 run the gamut from pretty believable to uncanny valley territory, but Waymo believes the technology has improved to the point that it can teach self-driving cars a thing or two.
