
It took two years, but Google released a YouTube app on Vision Pro

When Apple’s Vision Pro mixed reality headset launched in February 2024, users were frustrated at the lack of a proper YouTube app—a significant disappointment given the device’s focus on video content consumption, and YouTube’s strong library of immersive VR and 360 videos. That complaint continued through the release of the second-generation Vision Pro last year, including in our review.

Now, two years later, an official YouTube app from Google has launched on the Vision Pro’s app store. It’s not just a port of the iPad app, either—it has panels arranged spatially in front of the user as you’d expect, and it supports 3D videos, as well as 360- and 180-degree ones.

YouTube’s App Store listing says users can watch “every video on YouTube” (there’s a screenshot of a special interface for Shorts vertical videos, for example) and that they get “the full signed-in experience” with watch history and so on.

Shortly after the Vision Pro launched, many users complained to YouTube about the lack of an app. They were referred to the web interface—which worked OK for most 2D videos, but it obviously wasn’t an ideal experience—and were told that a Vision Pro app was on the roadmap.

Two years of silence followed. Third-party apps popped up, like the relatively popular Juno app, but it was pulled from the App Store after Google claimed it violated API policies. (Some others remained or became available later.)

Google is building out its own XR ambitions, so it’s possible the Vision Pro app benefited from some of that work, though it’s unclear how this all came to be. But it’s here now. Next up: Netflix, right? Sadly, that’s unlikely; unlike Google, Netflix has not announced any intention here.



“IG is a drug”: Internal messages may doom Meta at social media addiction trial


Social media addiction test case

A loss could cost social media companies billions and force changes on platforms.

Mark Zuckerberg testifies during the US Senate Judiciary Committee hearing, “Big Tech and the Online Child Sexual Exploitation Crisis,” in 2024.

Anxiety, depression, eating disorders, and death. These can be the consequences for vulnerable kids who get addicted to social media, according to more than 1,000 personal injury lawsuits that seek to punish Meta and other platforms for allegedly prioritizing profits while downplaying child safety risks for years.

Social media companies have faced scrutiny before, with congressional hearings forcing CEOs to apologize, but until now, they’ve never had to convince a jury that they aren’t liable for harming kids.

This week, the first high-profile lawsuit—considered a “bellwether” case that could set meaningful precedent in the hundreds of other complaints—goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M., who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleges triggered depression, anxiety, self-harm, and suicidality.

TikTok and Snapchat were also targeted by the lawsuit, but both have settled. The Snapchat settlement came last week, while TikTok settled on Tuesday just hours before the trial started, Bloomberg reported.

For now, YouTube and Meta remain in the fight. K.G.M. allegedly started watching YouTube when she was 6 years old and joined Instagram by age 11. She’s fighting to claim untold damages—including potentially punitive damages—to help her family recoup losses from her pain and suffering and to punish social media companies and deter them from promoting harmful features to kids. She also wants the court to require prominent safety warnings on platforms to help parents be aware of the risks.

Platforms failed to blame mom for not reading TOS

A loss could cost social media companies billions, CNN reported.

To avoid that, platforms have alleged that other factors caused K.G.M.’s psychological harm—like school bullies and family troubles—while insisting that Section 230 and the First Amendment protect platforms from being blamed for any harmful content targeted to K.G.M.

They also argued that K.G.M.’s mom never read the terms of service and, therefore, supposedly would not have benefited from posted warnings. And ByteDance, before settling, seemingly tried to pass the buck by claiming that K.G.M. “already suffered mental health harms before she began using TikTok.”

But the judge, Carolyn B. Kuhl, wrote in a ruling denying all platforms’ motions for summary judgment that K.G.M. showed enough evidence that her claims don’t stem from content to go to trial.

Further, platforms can’t liken warnings buried in terms of service to prominently displayed warnings, Kuhl said, since K.G.M.’s mom testified she would have restricted the minor’s app usage if she were aware of the alleged risks.

Two platforms settling before the trial seems like a good sign for K.G.M. However, Snapchat has not settled other social media addiction lawsuits that it’s involved in, including one raised by school districts, and perhaps is waiting to see how K.G.M.’s case shakes out before taking further action.

To win, K.G.M.’s lawyers will need to “parcel out” how much harm is attributed to each platform, due to design features, not the content that was targeted to K.G.M., Clay Calvert, a technology policy expert and senior fellow at a think tank called the American Enterprise Institute, wrote. Internet law expert Eric Goldman told The Washington Post that detailing those harms will likely be K.G.M.’s biggest struggle, since social media addiction has yet to be legally recognized, and tracing who caused what harms may not be straightforward.

However, Matthew Bergman, founder of the Social Media Victims Law Center and one of K.G.M.’s lawyers, told the Post that K.G.M. is prepared to put up this fight.

“She is going to be able to explain in a very real sense what social media did to her over the course of her life and how in so many ways it robbed her of her childhood and her adolescence,” Bergman said.

Internal messages may be “smoking-gun evidence”

The research is unclear on whether social media is harmful for kids or whether social media addiction exists, Tamar Mendelson, a professor at Johns Hopkins Bloomberg School of Public Health, told the Post. And so far, research only shows a correlation between Internet use and mental health, Mendelson noted, which could doom K.G.M.’s case and others’.

However, social media companies’ internal research might concern a jury, Bergman told the Post. On Monday, the Tech Oversight Project, a nonprofit working to rein in Big Tech, published a report analyzing recently unsealed documents in K.G.M.’s case that supposedly provide “smoking-gun evidence” that platforms “purposefully designed their social media products to addict children and teens with no regard for known harms to their wellbeing”—while putting increased engagement from young users at the center of their business models.

In the report, Sacha Haworth, executive director of The Tech Oversight Project, accused social media companies of “gaslighting and lying to the public for years.”

Most of the recently unsealed documents highlighted in the report came from Meta, which also faces a trial from dozens of state attorneys general on social media addiction this year.

Those documents included an email stating that Mark Zuckerberg—who is expected to testify at K.G.M.’s trial—decided that Meta’s top priority in 2017 was teens who must be locked in to using the company’s family of apps.

The next year, a Facebook internal document showed that the company pondered letting “tweens” access a private mode inspired by the popularity of fake Instagram accounts teens know as “finstas.” That document included an “internal discussion on how to counter the narrative that Facebook is bad for youth and admission that internal data shows that Facebook use is correlated with lower well-being (although it says the effect reverses longitudinally).”

Other allegedly damning documents showed Meta seemingly bragging that “teens can’t switch off from Instagram even if they want to” and an employee declaring, “oh my gosh yall IG is a drug,” likening all social media platforms to “pushers.”

Similarly, a 2020 Google document detailed the company’s plan to keep kids engaged “for life,” despite internal research showing young YouTube users were more likely to “disproportionately” suffer from “habitual heavy use, late night use, and unintentional use” deteriorating their “digital well-being.”

Shorts, YouTube’s feature that rivals TikTok, also is a concern for parents suing, and three years later, documents showed Google choosing to target teens with Shorts, despite research flagging that the “two biggest challenges for teen wellbeing on YouTube” were prominently linked to watching Shorts. Those challenges included Shorts bombarding teens with “low quality content recommendations that can convey & normalize unhealthy beliefs or behaviors” and teens reporting that “prolonged unintentional use” was “displacing valuable activities like time with friends or sleep.”

Bergman told the Post that these documents will help the jury decide if companies owed young users better protections sooner but prioritized profits while pushing off interventions that platforms have more recently introduced amid mounting backlash.

“Internal documents that have been held establishing the willful misconduct of these companies are going to—for the first time—be given a public airing,” Bergman said. “The public is going to know for the first time what social media companies have done to prioritize their profits over the safety of our kids.”

Platforms failed to get experts’ testimony tossed

One seeming advantage K.G.M. has heading into the trial is that tech companies failed to get expert testimony dismissed that backs up her claims.

Platforms tried to exclude testimony from several experts, including Kara Bagot, a board-certified adult, child, and adolescent psychiatrist, as well as Arturo Bejar, a former Meta safety researcher and whistleblower. They claimed that experts’ opinions were irrelevant because they were based on K.G.M.’s interactions with content. They also suggested that child safety experts’ opinions “violate the standards of reliability” since the causal links they draw don’t account for “alternative explanations” and allegedly “contradict the experts’ own statements in non-litigation contexts.”

However, Kuhl ruled that platforms will have the opportunity to counter experts’ opinions at trial, while reminding social media companies that “ultimately, the critical question of causation is one that must be determined by the jury.” Only one expert’s testimony was excluded, Social Media Victims Law Center noted, a licensed clinical psychologist deemed unqualified.

“Testimony by Bagot as to design features that were employed on TikTok as well as on other social media platforms is directly relevant to the question of whether those design features cause the type of harms allegedly suffered by K.G.M. here,” Kuhl wrote.

That means that a jury will get a chance to weigh Bagot’s opinion that “social media overuse and addiction causes or plays a substantial role in causing or exacerbating psychopathological harms in children and youth, including depression, anxiety and eating disorders, as well as internalizing and externalizing psychopathological symptoms.”

The jury will also consider the insights and information Bejar (a fact witness and former consultant for the company) will share about Meta’s internal safety studies. That includes hearing about “his personal knowledge and experience related to how design defects on Meta’s platforms can cause harm to minors (e.g., age verification, reporting processes, beauty filters, public like counts, infinite scroll, default settings, private messages, reels, ephemeral content, and connecting children with adult strangers),” as well as “harms associated with Meta’s platforms including addiction/problematic use, anxiety, depression, eating disorders, body dysmorphia, suicidality, self-harm, and sexualization.” 

If K.G.M. can convince the jury that she was not harmed by platforms’ failure to remove content but by companies “designing their platforms to addict kids” and “developing algorithms that show kids not what they want to see but what they cannot look away from,” Bergman thinks her case could become a “data point” for “settling similar cases en masse,” he told Barrons.

“She is very typical of so many children in the United States—the harms that they’ve sustained and the way their lives have been altered by the deliberate design decisions of the social media companies,” Bergman told the Post.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Australian plumber is a YouTube sensation

My personal favorites are when Bruce takes on clogged restaurant grease traps, including the one at the top of this article in which he pulls out a massive greaseberg “the size of a chihuahua.” When it’s Bruce versus a nasty grease trap, the man remains undefeated (well, almost—sometimes he needs to get a grease trap pumped out before he can fix the problem). And I have learned more than I probably ever needed to know about how grease traps work.

schematic illustration showing how a grease trap works

Credit: YouTube/Drain Cleaning Australia


Each video is its own little adventure. Bruce arrives on a job, checks out the problem (“she is chock-a-block, mate!”), and starts methodically working that problem until he solves it, which inevitably involves firing up “the bloody jet” to blast through blockages with 5,000 psi of water pressure (“Go, you good thing!”). This being Australia, he’ll occasionally encounter not just cockroaches but poisonous spiders and snakes. And he’s caught so many facefuls of wastewater and sewage while jetting that he really ought to invest in a hazmat suit. Even the cheesy canned techno music playing during lulls in the action is low-budget perfection.

Bruce isn’t the only plumber with a YouTube channel—it’s a surprisingly good-size subgenre—but he’s the most colorful and entertaining. His unbridled enthusiasm for what many would consider the dirtiest of jobs is positively infectious. He regularly effuses about having the best job in the world, insisting that unclogging gross drains is “living the dream,” and regularly asks his audience, “How good is this? I mean, where else would you rather be?” Sure, he says it with an ironic (unseen) wink at the camera, but deep down, you know he truly loves the work.

And you know what? Bruce is right. It might not be your definition of “what dreams are made of,” but there really is something profoundly satisfying about a free-flowing drain—and a job well done.



Google temporarily disabled YouTube’s advanced captions without warning

YouTubers have been increasingly frustrated with Google’s management of the platform, with disinformation welcomed back and an aggressive push for more AI (except where Google doesn’t like it). So it’s no surprise that creators have been up in arms over the unannounced disappearance of YouTube’s advanced SRV3 caption format. You don’t have to worry too much just yet: Google says this is only temporary, and it’s working on a fix for the underlying bug.

Google added support for this custom subtitle format around 2018, giving creators more customization options than with traditional captions. SRV3 (also known as YTT or YouTube Timed Text) allows for custom colors, transparency, animations, fonts, and precise positioning in videos. Uploaders using this format can color-code and position captions to help separate multiple speakers, create sing-along animations, or style them to match the video.
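For readers curious what that customization looks like in practice, here is a rough, illustrative sketch of an SRV3/YTT caption file. The element and attribute names below come from community reverse-engineering efforts (notably the YTSubConverter project), not from any official Google specification, so treat the details as assumptions about the format’s general shape:

```xml
<!-- Illustrative SRV3 (YouTube Timed Text) sketch; names follow community
     documentation and may not exactly match Google's internal format. -->
<timedtext format="3">
  <head>
    <!-- A "pen" defines text styling: fc = foreground color, fo = opacity,
         b = bold. Multiple pens let uploaders color-code speakers. -->
    <pen id="1" fc="#FF4444" fo="254" b="1"/>
    <!-- A "window position" anchors captions on screen; ah/av are rough
         horizontal/vertical percentages, ap an anchor-point index. -->
    <wp id="1" ap="7" ah="20" av="90"/>
  </head>
  <body>
    <!-- t = start time (ms), d = duration (ms); wp/p reference ids above -->
    <p t="1000" d="3000" wp="1" p="1">Speaker one, styled and positioned</p>
  </body>
</timedtext>
```

Traditional caption formats like SRT carry only timed plain text; the styling and positioning metadata sketched above is what creators stand to lose while serving of these files is limited.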

Over the last several days, creators who’ve become accustomed to this level of control have been dismayed to see that YouTube is no longer accepting videos with this Google-created format. Many worried Google had ditched the format entirely, which could be problematic for all those previously uploaded videos.

Google has now posted a brief statement and confirmed to Ars that it has not ended support for SRV3. However, all is not well. The company says it has temporarily limited the serving of SRV3 caption files because they may break playback for some users. That’s pretty vague, but it sounds like developers made a change to the platform without taking into account how it might interfere with SRV3 captions. Rather than allow those videos to be non-functional, it’s disabling most of the captions.



YouTube bans two popular channels that created fake AI movie trailers

Deadline reports that the behavior of these creators ran afoul of YouTube’s spam and misleading-metadata policies. At the same time, Google loves generative AI—YouTube has added more ways for creators to use generative AI, and the company says more gen AI tools are coming in the future. It’s quite a tightrope for Google to walk.

AI movie trailers

A selection of videos from the now-defunct Screen Culture channel.

Credit: Ryan Whitwam


While passing off AI videos as authentic movie trailers is definitely spammy conduct, the recent changes to the legal landscape could be a factor, too. Disney recently entered into a partnership with OpenAI, bringing its massive library of characters to the company’s Sora AI video app. At the same time, Disney sent a cease-and-desist letter to Google demanding the removal of Disney content from Google AI. The letter specifically cited AI content on YouTube as a concern.

Both the banned trailer channels made heavy use of Disney properties, sometimes even incorporating snippets of real trailers. For example, Screen Culture created 23 AI trailers for The Fantastic Four: First Steps, some of which outranked the official trailer in searches. It’s unclear if either account used Google’s Veo models to create the trailers, but Google’s AI will recreate Disney characters without issue.

While Screen Culture and KH Studio were the largest purveyors of AI movie trailers, they are far from alone. There are others with five- and six-digit subscriber counts, some of which include disclosures about fan-made content. Is that enough to save them from the ban hammer? Many YouTube viewers probably hope not.



Meta wins monopoly trial, convinces judge that social networking is dead


People are “bored” by their friends’ content, judge ruled, siding with Meta.

Mark Zuckerberg arrives at court after The Federal Trade Commission alleged the acquisitions of Instagram in 2012 and WhatsApp in 2014 gave Meta a social media monopoly. Credit: Bloomberg / Contributor | Bloomberg

After years of pushback from the Federal Trade Commission over Meta’s acquisitions of Instagram and WhatsApp, Meta has defeated the FTC’s monopoly claims.

In a Tuesday ruling, US District Judge James Boasberg said the FTC failed to show that Meta has a monopoly in a market dubbed “personal social networking.” In that narrowly defined market, the FTC unsuccessfully argued, Meta supposedly faces only two rivals, Snapchat and MeWe, which struggle to compete due to its alleged monopoly.

But the days of grouping apps into “separate markets of social networking and social media” are over, Boasberg wrote. He cited the Greek philosopher Heraclitus, who “posited that no man can ever step into the same river twice,” while telling the FTC it had missed its chance to block Meta’s purchases.

Essentially, Boasberg agreed with Meta that social media—as it was known in Facebook’s early days—is dead. And that means that Meta now competes with a broader set of rival apps, which includes two hugely popular platforms: TikTok and YouTube.

“When the evidence implies that consumers are reallocating massive amounts of time from Meta’s apps to these rivals and that the amount of substitution has forced Meta to invest gobs of cash to keep up, the answer is clear: Meta is not a monopolist insulated from competition,” Boasberg wrote.

In fact, adding just TikTok alone to the market defeated the FTC’s claims, Boasberg wrote, leaving him to conclude that “Meta holds no monopoly in the relevant market.”

The FTC is not happy about the loss, which comes after Boasberg determined that one of the agency’s key expert witnesses, Scott Hemphill, could not have approached his testimony “with an open mind.” According to Boasberg, Hemphill was aligned with figures publicly calling for the breakup of Facebook, and that made “neutral evaluation of his opinions more difficult” in a case with little direct evidence of monopoly harms.

“We are deeply disappointed in this decision,” Joe Simonson, the FTC’s director of public affairs, told CNBC. “The deck was always stacked against us with Judge Boasberg, who is currently facing articles of impeachment. We are reviewing all our options.”

For Meta, the win ends years of FTC fights intended to break up the company’s family of apps: Facebook, Instagram, and WhatsApp.

“The Court’s decision today recognizes that Meta faces fierce competition,” Jennifer Newstead, Meta’s chief legal officer, said. “Our products are beneficial for people and businesses and exemplify American innovation and economic growth. We look forward to continuing to partner with the Administration and to invest in America.”

Reels’ popularity helped save Meta

Meta app users clicking on Reels helped Meta win.

Boasberg noted that “a majority of Americans’ time” on both Facebook and Instagram “is now spent watching videos,” with Reels becoming “the single most-used part of Facebook.” That puts Meta apps more on par with entertainment apps like TikTok and YouTube, the judge said.

While “connecting with friends remains an important part of both apps,” the judge cited Meta’s evidence showing that Meta had to pump more recommended content from strangers into users’ feeds to account for a trend where its users grew increasingly less inclined to post publicly.

“Both scrolling and sharing have transformed” since Facebook was founded, Boasberg wrote, citing six factors that he concluded invalidated the FTC’s market definition as markets exist today.

Initial factors that shifted markets were due to leaps in innovation. “First, smartphone usage exploded,” Boasberg explained, then “cell phone data got better,” which made it easier to watch videos without frustrating “freezing and buffering.” Soon after, content recommendation systems got better, with “advanced AI algorithms” helping users “find engaging videos about the things” they “care most about in the world.”

Other factors stemmed from social changes, the judge suggested, describing the fourth factor as a trend where Meta app users started feeling “increasingly bored by their friends’ posts.”

“Longtime users’ friend lists” start fresh, but over time, they “become an often-outdated archive of people they once knew: a casual friend from college, a long-ago friend from summer camp, some guy they met at a party once,” Boasberg wrote. “Posts from friends have therefore grown less interesting.”

Then came TikTok, the fifth factor, Boasberg said, which forced Meta to “evolve” Facebook and Instagram by adding Reels.

And finally, “those five changes both caused and were reinforced by a change in social norms, which evolved to discourage public posting,” Boasberg wrote. “People have increasingly become less interested in blasting out public posts that hundreds of others can see.”

As a result of these tech advancements and social trends, Boasberg said, “Facebook, Instagram, TikTok, and YouTube have thus evolved to have nearly identical main features.” That reality undermined the FTC’s claims that users preferred Facebook and Instagram before Meta shifted its focus away from friends-and-family content.

“The Court simply does not find it credible that users would prefer the Facebook and Instagram apps that existed ten years ago to the versions that exist today,” Boasberg wrote.

Meta apps have not deteriorated, judge ruled

Boasberg repeatedly emphasized that the FTC failed to prove that Meta has a monopoly “now,” either actively or imminently causing harms.

The FTC tried to win by claiming that “Meta has degraded its apps’ quality by increasing their ad load, that falling user sentiment shows that the apps have deteriorated and that Meta has sabotaged its apps by underinvesting in friend sharing,” Boasberg noted.

But, Boasberg said, the FTC failed to show that Meta’s app quality has diminished—a trend that Cory Doctorow dubbed “enshittification,” which Meta apparently successfully argued is not real.

The judge was also swayed by Meta’s arguments that users like seeing ads. Meta showed evidence that it can only profitably increase its ad load when ad quality improves; otherwise, it risks losing engagement. Because “the rate at which users buy something or subscribe to a service based on Meta’s ads has steadily risen,” this suggested “that the ads have gotten more and more likely to connect users to products in which they have an interest,” Boasberg said.

Additionally, surveys of Meta app users that show declining user sentiment are not evidence that its apps are deteriorating in quality, Boasberg said, but are more about “brand reputation.”

“That is unsurprising: ask people how they feel about, say, Exxon Mobil, and their answers will tell you very little about how good its oil is,” Boasberg wrote. “The FTC’s claim that worsening sentiment shows a worsening product is unpersuasive.”

Finally, the FTC’s claim that Meta underinvested in friends-and-family content, to the detriment of its core app users, “makes no sense,” Boasberg wrote, given Meta’s data showing that user posting declined.

“While it is true that users see less content from their friends these days, that is largely due to the friends themselves: people simply post less,” Boasberg wrote. “Users are not seeing less friend content because Meta is hiding it from them, but instead because there is less friend content for Meta to show.”

It’s not even “clear that users want more friend posts,” the judge noted, agreeing with Meta that “instead, what users really seem to want is Reels.”

Further, if Meta were a monopolist, Boasberg seemed to suggest that the platform might be more invested in forcing friends-and-family content than Reels, since “Reels earns Meta less money” due to its smaller ad load.

“Courts presume that sophisticated corporations act rationally,” Boasberg wrote. “Here, the FTC has not offered even an ordinarily persuasive case that Meta is making the economically irrational choice to underinvest in its most lucrative offerings. It certainly has not made a particularly persuasive one.”

Among the critics unhappy with the ruling is Nidhi Hegde, executive director of the American Economic Liberties Project, who suggested that Boasberg’s ruling was “a colossally wrong decision” that “turns a willful blind eye to Meta’s enormous power over social media and the harms that flow from it.”

“Judge Boasberg has purposefully ignored the overwhelming evidence of how Meta became a monopoly—not by building a better product, but by buying its rivals to shut down any real competitors before they could grow,” Hegde said. “These deals let Meta fuse Facebook, Instagram, and WhatsApp into one machine that poisons our children and discourse, bullies publishers and advertisers, and destroys the possibility of healthy online connections with friends and family. By pretending that TikTok’s rise wipes away over a decade of illegal conduct, this court has effectively told every aspiring monopolist that our current justice system is on their side.”

On the other side, industry groups cheered the ruling. Matt Schruers, president of the Computer & Communications Industry Association, suggested that Boasberg concluded “what every Internet user knows—that Meta competes with a number of platforms and the company’s relevant market shares are therefore nowhere close to those required to establish monopoly power.”




YouTube TV’s Disney blackout reminds users that they don’t own what they stream

“I don’t know (or care) which side is responsible for this, but the DVR is not VOD, it is your recording, and shows recorded before the dispute should be available. This is a hard lesson for us all,” an apparently affected customer wrote on Reddit this week.

For current or former cable subscribers, this experience isn’t new. Carrier disputes have temporarily and permanently killed cable subscribers’ access to many channels over the years. And since the early 2000s, many cable companies have phased out DVRs with local storage in favor of cloud-based DVRs, which let providers revoke customers’ access to recordings if, for example, the customer stopped paying for the channel the content was recorded from. What we’re seeing with YouTube TV’s DVR feature is one of several ways that streaming services mirror cable companies.

Google exits Movies Anywhere

In a move that appears to be best described as tit for tat, Google has removed content purchased via Google Play and YouTube from Movies Anywhere, a Disney-owned unified platform that lets people access digital video purchases from various distributors, including Amazon Prime Video and Fandango.

In removing users’ content, Google may gain some leverage in its discussions with Disney, which is reportedly seeking a larger carriage fee from YouTube TV. The content removals, however, are just one more pain point of the fragmented streaming landscape customers are already dealing with.

Customers inconvenienced

As of this writing, Google and Disney have yet to reach an agreement. On Monday, Google publicly rejected Disney’s request to restore ABC to YouTube TV for yesterday’s election day, although the company showed a willingness to find a way to quickly bring back ABC and ESPN (“the channels that people want,” per Google). Disney has escalated things by making its content unavailable to rent or purchase from all Google platforms.

Google is trying to appease customers by saying it will give YouTube TV subscribers a $20 credit if Disney “content is unavailable for an extended period of time.” Some people online have reported receiving a $10 credit already.

Regardless of how this saga ends, the immediate effects have inconvenienced customers of both companies. People subscribe to streaming services and rely on digital video purchases and recordings for easy, instant access, which Google and Disney’s disagreement has disrupted. The squabble has also served as another reminder that in the streaming age, you don’t really own anything.

YouTube TV’s Disney blackout reminds users that they don’t own what they stream Read More »

youtube-denies-ai-was-involved-with-odd-removals-of-tech-tutorials

YouTube denies AI was involved with odd removals of tech tutorials


YouTubers suspect AI is bizarrely removing popular video explainers.

This week, tech content creators began to suspect that AI was making it harder to share some of the most highly sought-after tech tutorials on YouTube, but YouTube now denies that the odd removals were due to automation.

Creators grew alarmed when educational videos that YouTube had allowed for years were suddenly flagged as “dangerous” or “harmful,” with no apparent way to trigger human review to overturn the removals. AI seemed to be running the show, with creators’ appeals denied faster than a human could possibly review them.

Late Friday, a YouTube spokesperson confirmed that videos flagged by Ars have been reinstated, promising that YouTube will take steps to ensure that similar content isn’t removed in the future. But, to creators, it remains unclear why the videos got taken down, as YouTube claimed that both initial enforcement decisions and decisions on appeals were not the result of an automation issue.

Shocked creators were stuck speculating

Rich White, a computer technician who runs an account called CyberCPU Tech, had two videos removed that demonstrated workarounds to install Windows 11 on unsupported hardware.

These videos are popular, White told Ars, with people looking to bypass Microsoft account requirements each time a new build is released. For tech content creators like White, “these are bread and butter videos,” dependably yielding “extremely high views,” he said.

Because there’s such high demand, many tech content creators’ channels are filled with these kinds of videos. White’s account has “countless” examples, he said, and in the past, YouTube even featured his most popular video in the genre on a trending list.

To White and others, it’s unclear exactly what has changed on YouTube that triggered removals of this type of content.

YouTube only seemed to be removing recently posted content, White told Ars. However, if the takedowns ever impacted older content, entire channels documenting years of tech tutorials risked disappearing in “the blink of an eye,” another YouTuber behind a tech tips account called Britec09 warned after one of his videos was removed.

The stakes appeared high for everyone, White warned, in a video titled “YouTube Tech Channels in Danger!”

White had already censored content that he planned to post on his channel, fearing it wouldn’t be worth the risk of potentially losing his account, which began in 2020 as a side hustle but has since become his primary source of income. If he continues to change the content he posts to avoid YouTube penalties, it could hurt his account’s reach and monetization. Britec told Ars that he paused a sponsorship due to the uncertainty that he said has already hurt his channel and caused a “great loss of income.”

YouTube’s policies are strict, with the platform known to swiftly remove accounts that receive three strikes for violating community guidelines within 90 days. But, curiously, White had not received any strikes following his content removals. Although Britec reported that his account had received a strike following his video’s removal, White told Ars that YouTube so far had only given him two warnings, so his account is not yet at risk of a ban.

Creators weren’t sure why YouTube might deem this content as harmful, so they tossed around some theories. It seemed possible, White suggested in his video, that AI was detecting this content as “piracy,” but that shouldn’t be the case, he claimed, since his guides require users to have a valid license to install Windows 11. He also thinks it’s unlikely that Microsoft prompted the takedowns, suggesting tech content creators have a “love-hate relationship” with the tech company.

“They don’t like what we’re doing, but I don’t think they’re going to get rid of it,” White told Ars, suggesting that Microsoft “could stop us in our tracks” if it were motivated to end workarounds. But Microsoft doesn’t do that, White said, perhaps because it benefits from popular tutorials that attract swarms of Windows 11 users who otherwise may not use “their flagship operating system” if they can’t bypass Microsoft account requirements.

Those users could become loyal to Microsoft, White said. And eventually, some users may even “get tired of bypassing the Microsoft account requirements, or Microsoft will add a new feature that they’ll happily get the account for, and they’ll relent and start using a Microsoft account,” White suggested in his video. “At least some people will, not me.”

Microsoft declined Ars’ request to comment.

To White, it seemed possible that YouTube was leaning on AI to catch more violations but perhaps recognized the risk of over-moderation and, therefore, wasn’t allowing AI to issue strikes on his account.

But that was just a “theory” that he and other creators came up with but couldn’t confirm, since YouTube’s chatbot for supporting creators seemed “suspiciously AI-driven,” apparently auto-responding even when a “supervisor” was connected, White said in his video.

Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear, White and Britec said, was that changes to automated content moderation could unexpectedly knock them off YouTube for posting videos that seem ordinary and commonplace in tech circles.

“We are not even sure what we can make videos on,” White said. “Everything’s a theory right now because we don’t have anything solid from YouTube.”

YouTube recommends making the content it’s removing

White’s channel gained popularity after YouTube highlighted an early trending video that he made, showing a workaround to install Windows 11 on unsupported hardware. Following that video, his channel’s views spiked, and then he gradually built up his subscriber base to around 330,000.

In the past, White’s videos in that category had been flagged as violative, but human review got them quickly reinstated.

“They were striked for the same reason, but at that time, I guess the AI revolution hadn’t taken over,” White said. “So it was relatively easy to talk to a real person. And by talking to a real person, they were like, ‘Yeah, this is stupid.’ And they brought the videos back.”

Now, YouTube suggests that human review is causing the removals, which likely doesn’t completely ease creators’ fears about arbitrary takedowns.

Britec’s video was also flagged as dangerous or harmful. He has managed his account, which currently has nearly 900,000 subscribers, since 2009, and he’s worried he risks losing “years of hard work,” he said in his video.

Britec told Ars that “it’s very confusing” for panicked tech content creators trying to understand what content is permissible. It’s particularly frustrating, he noted in his video, that YouTube’s creator tool for suggesting post “ideas” seemed to contradict the moderators’ content warnings, continuing to recommend that creators make videos on specific topics like workarounds to install Windows 11 on unsupported hardware.

Screenshot from Britec09’s YouTube video, showing YouTube prompting creators to make content that could get their channels removed. Credit: via Britec09

“This tool was to give you ideas for your next video,” Britec said. “And you can see right here, it’s telling you to create content on these topics. And if you did this, I can guarantee you your channel will get a strike.”

From there, creators hit what White described as a “brick wall,” with one of his appeals denied within one minute, which felt like it must be an automated decision. As Britec explained, “You will appeal, and your appeal will be rejected instantly. You will not be speaking to a human being. You’ll be speaking to a bot or AI. The bot will be giving you automated responses.”

YouTube insisted that the decisions weren’t automated, even when an appeal was denied within one minute.

White told Ars that it’s easy for creators to be discouraged and censor their channels rather than fight with the AI. After wasting “an hour and a half trying to reason with an AI about why I didn’t violate the community guidelines” once his first appeal was quickly denied, he “didn’t even bother using the chat function” after the second appeal was denied even faster, White confirmed in his video.

“I simply wasn’t going to do that again,” White said.

All week, the panic spread, reaching fans who follow tech content creators. On Reddit, people recommended saving tutorials lest they risk YouTube taking them down.

“I’ve had people come out and say, ‘This can’t be true. I rely on this every time,’” White told Ars.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

YouTube denies AI was involved with odd removals of tech tutorials Read More »

tv-focused-youtube-update-brings-ai-upscaling,-shopping-qr-codes

TV-focused YouTube update brings AI upscaling, shopping QR codes

YouTube has been streaming for 20 years, but it was only in the last couple that it came to dominate TV streaming. Google’s video platform attracts more TV viewers than Netflix, Disney+, and all the other apps, and Google is looking to further beef up its big-screen appeal with a new raft of features, including shopping, immersive channel surfing, and an official version of the AI upscaling that had creators miffed a few months back.

According to Google, YouTube’s growth has translated into higher payouts. The number of channels earning more than $100,000 annually is up 45 percent in 2025 versus 2024. YouTube is now giving creators some tools to boost their appeal (and hopefully their income) on TV screens. Those elaborate video thumbnails featuring surprised, angry, smiley hosts are about to get even prettier with the new 50MB file size limit. That’s up from a measly 2MB.

Video upscaling is also coming to YouTube, and creators will be opted in automatically. To start, YouTube will be upscaling lower-quality videos to 1080p. In the near future, Google plans to support “super resolution” up to 4K.

The site stresses that it’s not modifying original files—creators will have access to both the original and upscaled files, and they can opt out of upscaling. In addition, super resolution videos will be clearly labeled on the user side, allowing viewers to select the original upload if they prefer. The lack of transparency was a sticking point for creators, some of whom complained about the sudden artificial look of their videos during YouTube’s testing earlier this year.

TV-focused YouTube update brings AI upscaling, shopping QR codes Read More »

youtube’s-likeness-detection-has-arrived-to-help-stop-ai-doppelgangers

YouTube’s likeness detection has arrived to help stop AI doppelgängers

AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators.

Google’s powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened—even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn’t happening.

Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site’s copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Sneak Peek: Likeness Detection on YouTube.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing “Content detection” menu. In YouTube’s demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It’s unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so-stealable faces, but rules are rules.

YouTube’s likeness detection has arrived to help stop AI doppelgängers Read More »

trump-obtains-another-settlement-as-youtube-agrees-to-pay-$24.5-million

Trump obtains another settlement as YouTube agrees to pay $24.5 million

Google owner Alphabet today agreed to pay $24.5 million to settle a lawsuit that President Trump filed against YouTube in 2021. Trump sued YouTube over his account being suspended after Trump supporters’ January 6 attack on the US Capitol.

Alphabet agreed to pay $22 million “to settle and resolve with Plaintiff Donald J. Trump… which he has directed to be contributed, on his behalf, to the Trust for the National Mall, a 501(c)(3) tax-exempt entity dedicated to restoring, preserving, and elevating the National Mall, to support the construction of the White House State Ballroom,” a court filing said. Trump recently announced plans for the 90,000-square-foot ballroom.

The settlement notice, filed today in US District Court for the Northern District of California, said Alphabet will also pay $2.5 million to settle claims with plaintiffs the American Conservative Union, Andrew Baggiani, Austen Fletcher, Maryse Veronica Jean-Louis, Frank Valentine, Kelly Victory, and Naomi Wolf. Under the settlement, Alphabet admits no wrongdoing and the parties agreed to dismiss the case.

When contacted by Ars today, Google said it would not provide any comment beyond what is in the court filing. Trump was suspended from major social media platforms after the January 6, 2021, attack and was subsequently impeached by the House of Representatives for incitement of insurrection.

Meta settled a similar lawsuit in January this year, agreeing to pay $25 million overall, including $22 million toward Trump’s presidential library. In February, Elon Musk’s X agreed to a $10 million settlement.

“Google executives were eager to keep their settlement smaller than the one paid by rival Meta, according to people familiar with the matter,” The Wall Street Journal wrote today.

Trump obtains another settlement as YouTube agrees to pay $24.5 million Read More »

youtube-music-is-testing-ai-hosts-that-will-interrupt-your-tunes

YouTube Music is testing AI hosts that will interrupt your tunes

YouTube has a new Labs program, allowing listeners to “discover the next generation of YouTube.” In case you were wondering, that generation is apparently all about AI. The streaming site says Labs will offer a glimpse of the AI features it’s developing for YouTube Music, and it starts with AI “hosts” that will chime in while you’re listening to music. Yes, really.

The new AI music hosts are supposed to provide a richer listening experience, according to YouTube. As you’re listening to tunes, the AI will generate audio snippets similar to, but shorter than, the fake podcasts you can create in NotebookLM. The “Beyond the Beat” host will break in every so often with relevant stories, trivia, and commentary about your musical tastes. YouTube says this feature will appear when you are listening to mixes and radio stations.

The experimental feature is intended to be a bit like having a radio host drop some playful banter while cueing up the next song. It sounds a bit like Spotify’s AI DJ, but the YouTube AI doesn’t create playlists like Spotify’s robot. This is still generative AI, which comes with the risk of hallucinations and low-quality slop, neither of which belongs in your music. That said, Google’s Audio Overviews are often surprisingly good in small doses.

YouTube Music is testing AI hosts that will interrupt your tunes Read More »