Policy

Trump on why he pardoned Binance CEO: “Are you ready? I don’t know who he is.”

“My sons are involved in crypto much more than I—me,” Trump said on 60 Minutes. “I—I know very little about it, other than one thing. It’s a huge industry. And if we’re not gonna be the head of it, China, Japan, or someplace else is. So I am behind it 100 percent.”

Did Trump ever meet Zhao? Did he form his own opinion about Zhao’s conviction, or was he merely “told about it”? Trump doesn’t seem to know:

This man was treated really badly by the Biden administration. And he was given a jail term. He’s highly respected. He’s a very successful guy. They sent him to jail and they really set him up. That’s my opinion. I was told about it.

I said, “Eh, it may look bad if I do it. I have to do the right thing.” I don’t know the man at all. I don’t think I ever met him. Maybe I did. Or, you know, somebody shook my hand or something. But I don’t think I ever met him. I have no idea who he is. I was told that he was a victim, just like I was and just like many other people, of a vicious, horrible group of people in the Biden administration.

Trump: “A lot people say that he wasn’t guilty”

Pointing out that Trump’s pardon of Zhao came after Binance helped facilitate a $2 billion purchase of World Liberty’s stablecoin, O’Donnell asked Trump to address the appearance of a pay-to-play deal.

“Well, here’s the thing, I know nothing about it because I’m too busy doing the other… I can only tell you this. My sons are into it. I’m glad they are, because it’s probably a great industry, crypto. I think it’s good… I know nothing about the guy, other than I hear he was a victim of weaponization by government. When you say the government, you’re talking about the Biden government. It’s a corrupt government. Biden was the most corrupt president and he was the worst president we’ve ever had.”

Internet Archive’s legal fights are over, but its founder mourns what was lost


“We survived, but it wiped out the library,” Internet Archive’s founder says.

Internet Archive founder Brewster Kahle celebrates 1 trillion web pages on stage with staff. Credit: via the Internet Archive

This month, the Internet Archive’s Wayback Machine archived its trillionth webpage, and the nonprofit invited its more than 1,200 library partners and 800,000 daily users to join a celebration of the moment. To honor “three decades of safeguarding the world’s online heritage,” the city of San Francisco declared October 22 to be “Internet Archive Day.” The Archive was also recently designated a federal depository library by Sen. Alex Padilla (D-Calif.), who proclaimed the organization a “perfect fit” to expand “access to federal government publications amid an increasingly digital landscape.”

The Internet Archive might sound like a thriving organization, but it only recently emerged from years of bruising copyright battles that threatened to bankrupt the beloved library project. In the end, the fight led to more than 500,000 books being removed from the Archive’s “Open Library.”

“We survived,” Internet Archive founder Brewster Kahle told Ars. “But it wiped out the Library.”

An Internet Archive spokesperson confirmed to Ars that the archive currently faces no major lawsuits and no active threats to its collections. Kahle thinks “the world became stupider” when the Open Library was gutted—but he’s moving forward with new ideas.

History of the Internet Archive

Kahle has been striving since 1996 to transform the Internet Archive into a digital Library of Alexandria—but “with a better fire protection plan,” joked Kyle Courtney, a copyright lawyer and librarian who leads the nonprofit eBook Study Group, which helps states update laws to protect libraries.

When the Wayback Machine was born in 2001 as a way to take snapshots of the web, Kahle told The New York Times that building free archives was “worth it.” He was also excited that the Wayback Machine had drawn renewed media attention to libraries.

At the time, law professor Lawrence Lessig predicted that the Internet Archive would face copyright battles, but he also believed that the Wayback Machine would change the way the public understood copyright fights.

“We finally have a clear and tangible example of what’s at stake,” Lessig told the Times. He insisted that Kahle was “defining the public domain” online, which would allow Internet users to see “how easy and important” the Wayback Machine “would be in keeping us sane and honest about where we’ve been and where we’re going.”

Kahle suggested that IA’s legal battles weren’t with creators or publishers so much as with large media companies that he thinks aren’t “satisfied with the restriction you get from copyright.”

“They want that and more,” Kahle said, pointing to e-book licenses that expire as proof that libraries increasingly aren’t allowed to own their collections. He also suspects that such companies wanted the Wayback Machine dead—but the Wayback Machine has survived and proved itself to be a unique and useful resource.

The Internet Archive also began archiving—and then lending—e-books. For a decade, the Archive had loaned out individual e-books to one user at a time without triggering any lawsuits. That changed when IA decided to temporarily lift the cap on loans from its Open Library project to create a “National Emergency Library” as libraries across the world shut down during the early days of the COVID-19 pandemic. The project eventually grew to 1.4 million titles.

But lifting the lending restrictions also brought more scrutiny from copyright holders, who eventually sued the Archive. Litigation went on for years. In 2024, IA lost its final appeal in a lawsuit brought by book publishers over the Archive’s Open Library project, which used a novel e-book lending model to bypass publishers’ licensing fees and checkout limitations. Damages could have topped $400 million, but publishers ultimately announced a “confidential agreement on a monetary payment” that did not bankrupt the Archive.

Litigation has continued, though. More recently, the Archive settled another suit over its Great 78 Project after music publishers sought damages of up to $700 million. A settlement in that case, reached last month, was similarly confidential. In both cases, IA’s experts challenged publishers’ estimates of their losses as massively inflated.

For Internet Archive fans, a group that includes longtime Internet users, researchers, students, historians, lawyers, and the US government, the end of the lawsuits brought a sigh of relief. The Archive can continue—but it can’t run one of its major programs in the same way.

What the Internet Archive lost

To Kahle, the suits have been an immense setback to IA’s mission.

Publishers had argued that the Open Library’s lending harmed the e-book market, but IA says its vision for the project was not to frustrate e-book sales (something it denies its library does) but to make it easier for researchers to reference e-books by allowing Wikipedia to link to book scans. Wikipedia has long been one of the most visited websites in the world, and the Archive wanted to deepen its authority as a research tool.

“One of the real purposes of libraries is not just access to information by borrowing a book that you might buy in a bookstore,” Kahle said. “In fact, that’s actually the minority. Usually, you’re comparing and contrasting things. You’re quoting. You’re checking. You’re standing on the shoulders of giants.”

Meredith Rose, senior policy counsel for Public Knowledge, told Ars that the Internet Archive’s Wikipedia enhancements could have served to surface information that’s often buried in books, giving researchers a streamlined path to source accurate information online.

But Kahle said the lawsuits against IA showed that “massive multibillion-dollar media conglomerates” have their own interests in controlling the flow of information. “That’s what they really succeeded at—to make sure that Wikipedia readers don’t get access to books,” Kahle said.

At the heart of the Open Library lawsuit was publishers’ market for e-book licenses, which libraries complain provide only temporary access for a limited number of patrons and cost substantially more than the acquisition of physical books. Some states are crafting laws to restrict e-book licensing, with the aim of preserving library functions.

“We don’t want libraries to become Hulu or Netflix,” said Courtney of the eBook Study Group, referring to services that post warnings to patrons like “last day to check out this book, August 31st, then it goes away forever.”

He, like Kahle, is concerned that libraries will become unable to fulfill their longtime role—preserving culture and providing equal access to knowledge. Remote access, Courtney noted, benefits people who can’t easily get to libraries, like the elderly, people with disabilities, rural communities, and foreign-deployed troops.

Before the Internet Archive cases, libraries had won some important legal fights, according to Brandon Butler, a copyright lawyer and executive director of Re:Create, a coalition of “libraries, civil libertarians, online rights advocates, start-ups, consumers, and technology companies” that is “dedicated to balanced copyright and a free and open Internet.”

But the Internet Archive’s e-book fight didn’t set back libraries, Butler said, because the loss didn’t reverse any prior court wins. Instead, IA had been “exploring another frontier” beyond the Google Books ruling, which deemed Google’s searchable book excerpts a transformative fair use, hoping that linking to books from Wikipedia would also be deemed fair use. But IA “hit the edge” of what courts would allow, Butler said.

IA basically asked, “Could fair use go this much farther?” Butler said. “And the courts said, ‘No, this is as far as you go.’”

To Kahle, the cards feel stacked against the Internet Archive, with courts, lawmakers, and lobbyists backing corporations seeking “hyper levels of control.” He said IA has always served as a research library—an online destination where people can cross-reference texts and verify facts, just like perusing books at a local library.

“We’re just trying to be a library,” Kahle said. “A library in a traditional sense. And it’s getting hard.”

Fears of big fines may delay digitization projects

President Donald Trump’s cuts to the federal Institute of Museum and Library Services have put America’s public libraries at risk, and reduced funding will continue to challenge libraries in the coming years, the American Library Association has warned. Butler has also suggested that under-resourced libraries may delay digitization efforts for preservation purposes if they worry that publishers may threaten costly litigation.

He told Ars he thinks courts are getting it right on recent fair use rulings. But he noted that libraries have fewer resources for legal fights because copyright law “has this provision that says, well, if you’re a copyright holder, you really don’t have to prove that you suffered any harm at all.”

“You can just elect [to receive] a massive payout based purely on the fact that you hold a copyright and somebody infringed,” Butler said. “And that’s really unique. Almost no other country in the world has that sort of a system.”

So while companies like AI firms may be able to afford legal fights with rights holders, libraries must be careful, even when they launch projects that seem “completely harmless and innocuous,” Butler said. Consider the Internet Archive’s Great 78 Project, which digitized 400,000 old shellac records, known as 78s, that were originally pressed from 1898 to the 1950s.

“The idea that somebody’s going to stream a 78 of an Elvis song instead of firing it up on their $10-a-month Spotify subscription is silly, right?” Butler said. “It doesn’t pass the laugh test, but given the scale of the project—and multiply that by the statutory damages—and that makes this an extremely dangerous project all of a sudden.”
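The arithmetic behind that danger is easy to sketch. Under US copyright law, statutory damages generally run from $750 to $30,000 per infringed work, climbing to $150,000 per work for willful infringement, so exposure grows linearly with the number of works at issue. A back-of-the-envelope calculation with a hypothetical count of recordings (not the actual figures from the Great 78 suit) shows how quickly the numbers balloon:

```python
# Back-of-the-envelope sketch with a hypothetical number of works;
# the per-work figures are the general US statutory-damages range.
works_at_issue = 4_000            # hypothetical count of registered recordings
statutory_minimum = 750           # per work
willful_maximum = 150_000         # per work, for willful infringement

print(f"floor:   ${works_at_issue * statutory_minimum:,}")    # $3,000,000
print(f"ceiling: ${works_at_issue * willful_maximum:,}")      # $600,000,000
```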

Butler suggested that statutory damages could disrupt the balance that ensures the public has access to knowledge, creators get paid, and human creativity thrives, as AI advances and libraries’ growth potentially stalls.

“It sets the risk so high that it may force deals in situations where it would be better if people relied on fair use. Or it may scare people from trying new things because of the stakes of a copyright lawsuit,” Butler said.

Courtney, who co-wrote a whitepaper detailing the legal basis for different forms of “controlled digital lending,” like the model the Open Library project uses, suggested that Kahle may be the person who’s best prepared to push the envelope on copyright.

When asked how the Internet Archive managed to avoid financial ruin, Courtney said it survived “only because their leader” is “very smart and capable.” Of all the “flavors” of controlled digital lending (CDL) that his paper outlined, Kahle’s methodology for the Open Library Project was the most “revolutionary,” Courtney said.

Importantly, IA’s loss did not doom other kinds of CDL that other archives use, he noted, nor did it prevent libraries from trying new things.

“Fair use is a case-by-case determination” that will be made as urgent preservation needs arise, Courtney told Ars, and “libraries have a ton of stuff that aren’t going to make the jump to digital unless we digitize them. No one will have access to them.”

What’s next for the Internet Archive?

The lawsuits haven’t dampened Kahle’s resolve to expand IA’s digitization efforts, though. Moving forward, the group will be growing a project called Democracy’s Library, which is “a free, open, online compendium of government research and publications from around the world” that will be conveniently linked in Wikipedia articles to help researchers discover them.

The Archive is also collecting as many physical materials as possible to help preserve knowledge, even as “the library system is largely contracting,” Kahle said. He noted that libraries historically tend to grow in societies that prioritize education and decline in societies where power is being concentrated, and he’s worried about where the US is headed. That makes it hard to predict if IA—or any library project—will be supported in the long term.

With governments globally partnering with the biggest tech companies to try to win the artificial intelligence race, critics have warned of threats to US democracy, while the White House has escalated its attack on libraries, universities, and science over the past year.

Meanwhile, AI firms face dozens of lawsuits from creators and publishers, which Kahle thinks only the biggest tech companies can likely afford to outlast. The momentum behind AI risks giving corporations even more control over information, Kahle said, and it’s uncertain if archives dedicated to preserving the public memory will survive attacks from multiple fronts.

“Societies that are [growing] are the ones that need to educate people” and therefore promote libraries, Kahle said. But when societies are “going down,” such as in times of war, conflict, and social upheaval, libraries “tend to get destroyed by the powerful. It used to be king and church, and it’s now corporations and governments.” (He recommended The Library: A Fragile History as a must-read to understand the challenges libraries have always faced.)

Kahle told Ars he’s not “black and white” on AI, and he even sees some potential for AI to enhance library services.

He’s more concerned that libraries in the US are losing support and may soon cease to perform classic functions that have always benefited civilizations—like buying books from small publishers and local authors, supporting intellectual endeavors, and partnering with other libraries to expand access to diverse collections.

To prevent these cultural and intellectual losses, he plans to position IA as a refuge for displaced collections, with hopes to digitize as much as possible while defending the early dream that the Internet could equalize access to information and supercharge progress.

“We want everyone [to be] a reader,” Kahle said, and that means “we want lots of publishers, we want lots of vendors, booksellers, lots of libraries.”

But, he asked, “Are we going that way? No.”

To turn things around, Kahle suggested that copyright laws be “re-architected” to ensure “we have a game with many winners”—where authors, publishers, and booksellers get paid, library missions are respected, and progress thrives. Then society can figure out “what do we do with this new set of AI tools” to keep the engine of human creativity humming.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

AT&T sues ad industry watchdog instead of pulling ads that slam T-Mobile


Self-regulation breakdown

National Advertising Division said AT&T ad and press release broke program rule.

AT&T yesterday sued the advertising industry’s official watchdog over the group’s demand that AT&T stop using its rulings for advertising and promotional purposes.

As previously reported, BBB National Programs’ National Advertising Division (NAD) found that AT&T violated a rule “by issuing a video advertisement and press release that use the NAD process and its findings for promotional purposes,” and sent a cease-and-desist letter to the carrier. The NAD operates the US advertising industry’s system of self-regulation, which is designed to handle complaints that advertisers file against each other and minimize government regulation of false and misleading claims.

While it’s clear that both AT&T and T-Mobile have a history of misleading ad campaigns, AT&T portrays itself as a paragon of honesty in new ads calling T-Mobile “the master of breaking promises.” An AT&T press release about the ad campaign said the NAD “asked T-Mobile to correct their marketing claims 16 times over the last four years,” and an AT&T commercial said T-Mobile has faced more challenges for deceptive ads from competitors than all other telecom providers in that time.

While the NAD describes AT&T’s actions as a clear-cut violation of rules that advertisers agree to in the self-regulatory process, AT&T disputed the accusation in a lawsuit filed in US District Court for the Northern District of Texas. “We stand by our campaign to shine a light on deceptive advertising from our competitors and oppose demands to silence the truth,” AT&T said in a press release.

AT&T’s lawsuit asked the court for a declaration, stating “that it has not violated NAD’s procedures” and that “NAD has no legal basis to enforce its demand for censorship.” The lawsuit complained that AT&T hasn’t been able to run its advertisements widely because “NAD’s inflammatory and baseless accusations have now intimidated multiple TV networks into pulling AT&T’s advertisement.”

AT&T claims rule no longer applies

AT&T’s claim that it didn’t violate an NAD rule hinges partly on when its press release was issued. The carrier claims the rule against referencing NAD decisions only applies for a short period of time after each NAD ruling.

“NAD now takes the remarkable position that any former participant in an NAD proceeding is forever barred from truthfully referencing NAD’s own public findings about a competitor’s deceptive advertising,” AT&T said. The lawsuit argued that “if NAD’s procedures were ever binding on AT&T, their binding effect ceased at the conclusion of the proceeding or a reasonable time thereafter.”

AT&T also slammed the NAD for failing to rein in T-Mobile’s deceptive ads. The group’s slow process let T-Mobile air deceptive advertisements without meaningful consequences, and the “NAD has repeatedly failed to refer continued violations to the FTC,” AT&T said.

“Over the past several years, NAD has repeatedly deemed T-Mobile’s ads to be misleading, false, or unsubstantiated,” AT&T said. “But over and over, T-Mobile has gamed the system to avoid timely redressing its behavior. NAD’s process is often slow, and T-Mobile knows it can make that process even slower by asking for extensions and delaying fixes.”

We’ve reported extensively on both carriers’ history of misleading advertisements over the years. That includes T-Mobile promising never to raise prices on certain plans and then raising them anyway. AT&T used to advertise 4G LTE service as “5GE,” and was rebuked for an ad that falsely claimed the carrier was already offering cellular coverage from space. AT&T and T-Mobile have both gotten in trouble for misleading promises of unlimited data.

AT&T says vague ad didn’t violate rule

AT&T’s lawsuit alleged that the NAD press release “intentionally impl[ied] that AT&T mischaracterized NAD’s prior decisions about T-Mobile’s deceptive advertising.” However, the NAD’s public stance is that AT&T violated the rule by using NAD decisions for promotional purposes, not by mischaracterizing the decisions.

NAD procedures state that companies participating in the system agree “not to mischaracterize any decision, abstract, or press release issued or use and/or disseminate such decision, abstract or press release for advertising and/or promotional purposes.” The NAD announcement didn’t make any specific allegations of AT&T mischaracterizing its decisions but said that AT&T violated the rules “by issuing a video advertisement and press release that use the NAD process and its findings for promotional purposes.”

The NAD said AT&T committed a “direct violation” of the rules by running an ad and issuing a press release “making representations regarding the alleged results of a competitor’s participation in BBB National Program’s advertising industry self-regulatory process.” The “alleged results” phrase may be why AT&T is claiming the NAD accused it of mischaracterizing decisions. There could also be more specific allegations in the cease-and-desist letter, which wasn’t made public.

AT&T claims its TV ads about T-Mobile don’t violate the rule because they only refer to “challenges” to T-Mobile advertising and “do not reference any decision, abstract, or press release.”

AT&T quibbles over rule meaning

AT&T further argues that a press release can’t violate the prohibition against using NAD decisions “for advertising and/or promotional purposes.” While press releases are clearly promotional in nature, AT&T says that part of the NAD rules doesn’t apply to press releases issued by advertisers like itself. Specifically, AT&T said that “the permissibility of press releases is not governed by Section 2.1(I)(2)(b), which applies to uses ‘for advertising and/or promotional purposes.’”

But the NAD procedures also bar participants in the process from issuing certain kinds of press releases. AT&T describes the rule about press releases as being in a different section than the rule about advertising and promotional purposes, but it’s actually all part of the same sentence. The rule says, “By participating in an NAD or NARB proceeding, the parties agree: (a) not to issue a press release regarding any decisions issued; and/or (b) not to mischaracterize any decision, abstract or press release issued or use and/or disseminate such decision, abstract or press release for advertising and/or promotional purposes.”

AT&T argues that the rule only bars press releases at the time of each NAD decision. The rule’s “meaning is clear in context: When NAD or NARB [National Advertising Review Board] issues a decision, no party is allowed to issue a press release to announce that decision,” AT&T said. “Instead, NAD issues its own press release to announce the decision. AT&T did not issue a press release to announce any decision, and indeed its advertisements (and press release announcing its advertising campaign) do not mention any particular NAD decision. In fact, AT&T’s press release does not use the word ‘decision’ at all.”

AT&T said that because it only made a short reference to NAD decisions, “AT&T’s press release about its new advertising campaign is therefore not a press release about an NAD decision as contemplated by Section 2.1(I)(2)(a).” AT&T also said it’s not a violation because the press release simply stated the number of rulings against T-Mobile and did not specifically cite any of those 16 decisions.

“AT&T’s press release does not include, attach, copy, or even cite any specific decision, abstract, or press release either in part or in whole,” AT&T’s lawsuit said. AT&T further said the NAD rule doesn’t apply to any proceeding AT&T wasn’t involved in, and that “AT&T did not initiate several of the proceedings against T-Mobile included in the one-sentence reference.”

We contacted the NAD about AT&T’s lawsuit but the group declined to comment.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

YouTube denies AI was involved with odd removals of tech tutorials


YouTubers suspect AI is bizarrely removing popular video explainers.

This week, tech content creators began to suspect that AI was making it harder to share some of the most highly sought-after tech tutorials on YouTube, but YouTube now denies that the odd removals were due to automation.

Creators grew alarmed when educational videos that YouTube had allowed for years were suddenly being flagged as “dangerous” or “harmful,” with no apparent way to trigger human review to overturn the removals. AI appeared to be running the show, with creators’ appeals seemingly denied faster than a human could possibly review them.

Late Friday, a YouTube spokesperson confirmed that videos flagged by Ars have been reinstated, promising that YouTube will take steps to ensure that similar content isn’t removed in the future. But, to creators, it remains unclear why the videos got taken down, as YouTube claimed that both initial enforcement decisions and decisions on appeals were not the result of an automation issue.

Shocked creators were stuck speculating

Rich White, a computer technician who runs an account called CyberCPU Tech, had two videos removed that demonstrated workarounds to install Windows 11 on unsupported hardware.

These videos are popular, White told Ars, with people looking to bypass Microsoft account requirements each time a new build is released. For tech content creators like White, “these are bread and butter videos,” dependably yielding “extremely high views,” he said.

Because there’s such high demand, many tech content creators’ channels are filled with these kinds of videos. White’s account has “countless” examples, he said, and in the past, YouTube even featured his most popular video in the genre on a trending list.

To White and others, it’s unclear exactly what has changed on YouTube that triggered removals of this type of content.

YouTube only seemed to be removing recently posted content, White told Ars. However, if the takedowns ever impacted older content, entire channels documenting years of tech tutorials risked disappearing in “the blink of an eye,” another YouTuber behind a tech tips account called Britec09 warned after one of his videos was removed.

The stakes appeared high for everyone, White warned, in a video titled “YouTube Tech Channels in Danger!”

White had already censored content that he planned to post on his channel, fearing it wouldn’t be worth the risk of potentially losing his account, which began in 2020 as a side hustle but has since become his primary source of income. If he continues to change the content he posts to avoid YouTube penalties, it could hurt his account’s reach and monetization. Britec told Ars that he paused a sponsorship due to the uncertainty that he said has already hurt his channel and caused a “great loss of income.”

YouTube’s policies are strict, with the platform known to swiftly remove accounts that receive three strikes for violating community guidelines within 90 days. But, curiously, White had not received any strikes following his content removals. Although Britec reported that his account had received a strike following his video’s removal, White told Ars that YouTube so far had only given him two warnings, so his account is not yet at risk of a ban.

Creators weren’t sure why YouTube might deem this content as harmful, so they tossed around some theories. It seemed possible, White suggested in his video, that AI was detecting this content as “piracy,” but that shouldn’t be the case, he claimed, since his guides require users to have a valid license to install Windows 11. He also thinks it’s unlikely that Microsoft prompted the takedowns, suggesting tech content creators have a “love-hate relationship” with the tech company.

“They don’t like what we’re doing, but I don’t think they’re going to get rid of it,” White told Ars, suggesting that Microsoft “could stop us in our tracks” if it were motivated to end workarounds. But Microsoft doesn’t do that, White said, perhaps because it benefits from popular tutorials that attract swarms of Windows 11 users who otherwise may not use “their flagship operating system” if they can’t bypass Microsoft account requirements.

Those users could become loyal to Microsoft, White said. And eventually, some users may even “get tired of bypassing the Microsoft account requirements, or Microsoft will add a new feature that they’ll happily get the account for, and they’ll relent and start using a Microsoft account,” White suggested in his video. “At least some people will, not me.”

Microsoft declined Ars’ request to comment.

To White, it seemed possible that YouTube was leaning on AI to catch more violations but, perhaps recognizing the risk of over-moderation, wasn’t allowing the AI to issue strikes on his account.

But that was just a “theory” that he and other creators came up with and couldn’t confirm, since YouTube’s creator-support chatbot also seemed “suspiciously AI-driven,” apparently auto-responding even when a “supervisor” was connected, White said in his video.

Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear was that unexpected changes to automated content moderation could knock them off YouTube for posting videos that seem ordinary and commonplace in tech circles, White and Britec said.

“We are not even sure what we can make videos on,” White said. “Everything’s a theory right now because we don’t have anything solid from YouTube.”

YouTube recommends making the content it’s removing

White’s channel gained popularity after YouTube highlighted an early trending video that he made, showing a workaround to install Windows 11 on unsupported hardware. Following that video, his channel’s views spiked, and then he gradually built up his subscriber base to around 330,000.

In the past, White’s videos in that category had been flagged as violative, but human review got them quickly reinstated.

“They were striked for the same reason, but at that time, I guess the AI revolution hadn’t taken over,” White said. “So it was relatively easy to talk to a real person. And by talking to a real person, they were like, ‘Yeah, this is stupid.’ And they brought the videos back.”

Now, YouTube suggests that human review is causing the removals, which likely doesn’t completely ease creators’ fears about arbitrary takedowns.

Britec’s video was also flagged as dangerous or harmful. He has managed his account, which currently has nearly 900,000 subscribers, since 2009, and he worried that he risked losing “years of hard work,” he said in his video.

Britec told Ars that “it’s very confusing” for panicked tech content creators trying to understand what content is permissible. It’s particularly frustrating, he noted in his video, that YouTube’s creator tool for suggesting video “ideas” seemed to contradict the moderators’ content warnings and continued to recommend that creators make content on specific topics, like workarounds to install Windows 11 on unsupported hardware.

Screenshot from Britec09’s YouTube video, showing YouTube prompting creators to make content that could get their channels removed. Credit: via Britec09

“This tool was to give you ideas for your next video,” Britec said. “And you can see right here, it’s telling you to create content on these topics. And if you did this, I can guarantee you your channel will get a strike.”

From there, creators hit what White described as a “brick wall,” with one of his appeals denied within one minute, which felt like it must be an automated decision. As Britec explained, “You will appeal, and your appeal will be rejected instantly. You will not be speaking to a human being. You’ll be speaking to a bot or AI. The bot will be giving you automated responses.”

YouTube insisted that the decisions weren’t automated, even when an appeal was denied within one minute.

White told Ars that it’s easy for creators to be discouraged and censor their channels rather than fight with the AI. After wasting “an hour and a half trying to reason with an AI about why I didn’t violate the community guidelines” once his first appeal was quickly denied, he “didn’t even bother using the chat function” after the second appeal was denied even faster, White confirmed in his video.

“I simply wasn’t going to do that again,” White said.

All week, the panic spread, reaching fans who follow tech content creators. On Reddit, people recommended saving tutorials lest they risk YouTube taking them down.

“I’ve had people come out and say, ‘This can’t be true. I rely on this every time,’” White told Ars.

FCC to rescind ruling that said ISPs are required to secure their networks

The Federal Communications Commission will vote in November to repeal a ruling that requires telecom providers to secure their networks, acting on a request from the biggest lobby groups representing Internet providers.

FCC Chairman Brendan Carr said the ruling, adopted in January just before Republicans gained majority control of the commission, “exceeded the agency’s authority and did not present an effective or agile response to the relevant cybersecurity threats.” Carr said the vote scheduled for November 20 comes after “extensive FCC engagement with carriers” who have taken “substantial steps… to strengthen their cybersecurity defenses.”

The FCC’s January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, “affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications.”

“The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will ‘illegally activate interceptions or other forms of surveillance within the carrier’s switching premises without its knowledge,’” the January order said. “With this Declaratory Ruling, we clarify that telecommunications carriers’ duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks.”

ISPs get what they want

The declaratory ruling was paired with a Notice of Proposed Rulemaking that would have led to stricter rules requiring specific steps to secure networks against unauthorized interception. Carr voted against the decision at the time.

Although the declaratory ruling didn’t yet have specific rules to go along with it, the FCC at the time said it had some teeth. “Even absent rules adopted by the Commission, such as those proposed below, we believe that telecommunications carriers would be unlikely to satisfy their statutory obligations under section 105 without adopting certain basic cybersecurity practices for their communications systems and services,” the January order said. “For example, basic cybersecurity hygiene practices such as implementing role-based access controls, changing default passwords, requiring minimum password strength, and adopting multifactor authentication are necessary for any sensitive computer system. Furthermore, a failure to patch known vulnerabilities or to employ best practices that are known to be necessary in response to identified exploits would appear to fall short of fulfilling this statutory obligation.”

TikTok may become more right-wing as China signals approval for US sale

TikTok US app may look radically different

If the sale goes through without major changes to the terms, TikTok could radically change for US users.

After US owners take over, they will have to retrain TikTok’s algorithm, perhaps shifting what content Americans see on the platform.

Some speculate that TikTokers may only connect with American users through the app, but that’s likely inaccurate, as global content will remain available.

While global content will still be displayed on TikTok’s US app, it’s unclear how it may be filtered, Kelley Cotter, an assistant professor who studies social media algorithms in the Department of Human-Centered Computing and Social Informatics at Pennsylvania State University, told Scientific American.

Cotter suggested that TikTok’s US owners may also tweak the algorithm or change community guidelines to potentially alter what content is accessed on the app. For example, during conversations leading up to the law that requires either the sale of TikTok to US allies or a nationwide ban, Republican lawmakers voiced concerns “that there were greater visibility of Palestinian hashtags on TikTok over Israeli hashtags.”

If Trump’s deal goes through, the president has already suggested that he’d like to see the app go “100 percent MAGA.” And Cotter suggested that the conservative slant of Trump’s hand-picked TikTok US investors—including Oracle, Silver Lake, and Andreessen Horowitz—could help Trump achieve that goal.

“An owner that has a strong ideological point of view and has the will to make that a part of the app, it is possible, through tweaking the algorithm, to sort of reshape the overall composition of content on the platform,” Cotter said.

If left-leaning users abandon TikTok as the app shifts to US ownership, TikTok’s content could change meaningfully, Cotter said.

“It could result in a situation,” Cotter suggested, where TikTok would be “an app that is composed by only people based in the US but only a subset of American users and particularly ones that perhaps might be right-leaning.” That could “have very big impact on the kinds of content that you see there.”

For TikTok’s US users bracing for a feared right-wing overhaul of their feeds, there’s also the potential for the app to become glitchy as all US users are hastily transferred over to the new app. Any technical issues could also drive users off the app, perhaps further altering content.

Ars updated this story on Oct. 30 to note that speculation that American users will be siloed off is inaccurate.

Meta denies torrenting porn to train AI, says downloads were for “personal use”

Instead, Meta argued, available evidence “is plainly indicative” that the flagged adult content was torrented for “private personal use”—since the small amount linked to Meta IP addresses and employees represented only “a few dozen titles per year intermittently obtained one file at a time.”

“The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,” Meta’s filing said.

For example, unlike lawsuits raised by book authors whose works are part of an enormous dataset used to train AI, the activity on Meta’s corporate IP addresses only amounted to about 22 downloads per year. That is nowhere near the “concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” Meta argued.

Further, that alleged activity can’t even reliably be linked to any Meta employee, Meta argued.

Strike 3 “does not identify any of the individuals who supposedly used these Meta IP addresses, allege that any were employed by Meta or had any role in AI training at Meta, or specify whether (and which) content allegedly downloaded was used to train any particular Meta model,” Meta wrote.

Meanwhile, “tens of thousands of employees,” as well as “innumerable contractors, visitors, and third parties access the Internet at Meta every day,” Meta argued. So while it’s “possible one or more Meta employees” downloaded Strike 3’s content over the last seven years, “it is just as possible” that a “guest, or freeloader,” or “contractor, or vendor, or repair person—or any combination of such persons—was responsible for that activity,” Meta suggested.

Other alleged activity included a claim that a Meta contractor was directed to download adult content at his father’s house, but those downloads, too, “are plainly indicative of personal consumption,” Meta argued. That contractor worked as an “automation engineer,” Meta noted, with no apparent basis provided for why he would be expected to source AI training data in that role. “No facts plausibly” tie “Meta to those downloads,” Meta claimed.

Republican plan would make deanonymization of census data trivial


“Differential privacy” algorithm prevents statistical data from being tied to individuals.

President Donald Trump and the Republican Party have spent the better part of the president’s second term radically reshaping the federal government. But in recent weeks, the GOP has set its sights on taking another run at an old target: the US census.

Since the first Trump administration, the right has sought to add a question to the census that captures a respondent’s immigration status and to exclude noncitizens from the tallies that determine how seats in Congress are distributed. In 2019, the Supreme Court struck down an attempt by the first Trump administration to add a citizenship question to the census.

But now, a little-known algorithmic process called “differential privacy,” created to keep census data from being used to identify individual respondents, has become the right’s latest focus. WIRED spoke to six experts about the GOP’s ongoing effort to falsely allege that a system created to protect people’s privacy has made the data from the 2020 census inaccurate.

If successful, the campaign to get rid of differential privacy could not only radically change the kind of data made available, but could put the data of every person living in the US at risk. The campaign could also discourage immigrants from participating in the census entirely.

The Census Bureau regularly publishes anonymized data so that policymakers and researchers can use it. That data is also sensitive: Conducted every 10 years, the census counts every person living in the United States, citizen and noncitizen alike. The data includes detailed information like each respondent’s race, sex, and age, as well as the languages they speak, their home address, economic status, and the number of people living in their household. This data is used for allocating the federal funds that support public services like schools and hospitals, as well as for determining how a state’s population is divided up and represented in Congress. The more people in a state, the more congressional representation—and more votes in the Electoral College.

As computers got increasingly sophisticated and data more abundant and accessible, census employees and researchers realized the data published by the Census Bureau could be reverse engineered to identify individual people. According to Title 13 of the US Code, it is illegal for census workers to publish any data that would identify individual people, their homes, or businesses. A government employee revealing this kind of information could be punished with thousands of dollars in fines or even a possible prison sentence.

For individuals, this could mean, for instance, someone could use census data without differential privacy to identify transgender youth, according to research from the University of Washington.

For immigrants, the prospect of being reidentified through census data could “create panic among noncitizens as well as their families and friends,” says Danah Boyd, a census expert and the founder of Data & Society, a nonprofit research group focused on the downstream effects of technology. LGBTQ+ people might not “feel safe sharing that they are in a same-sex marriage. There are plenty of people in certain geographies who do not want data like this to be public,” she says. This could also mean that information that might be available only through something like a search warrant would suddenly be obtainable. “Unmasking published records is not illegal. Then you can match it to large law enforcement databases without actually breaching the law.”
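The mechanics of that kind of matching are simple in principle. As a toy sketch with entirely invented data (not the bureau’s actual tables or any real records), a small-area cross-tabulation published without noise can contain cells with a count of 1, and each such cell describes exactly one person whom an outside database can then name:

```python
# Toy illustration with invented data: a published cell count of 1
# pins down a unique combination of quasi-identifiers, which an
# outside (e.g., commercial) dataset can then match to a name.

published_block_counts = {
    # (age_range, sex, race) -> residents in one hypothetical census block
    ("18-24", "F", "Asian"): 1,
    ("18-24", "M", "White"): 4,
    ("65+",   "F", "Black"): 2,
}

data_broker_records = [
    # Hypothetical commercial records covering the same block
    {"name": "J. Doe",   "age_range": "18-24", "sex": "F", "race": "Asian"},
    {"name": "A. Smith", "age_range": "18-24", "sex": "M", "race": "White"},
]

# Any published cell equal to 1 describes exactly one resident.
unique_cells = {cell for cell, count in published_block_counts.items() if count == 1}

reidentified = [
    rec for rec in data_broker_records
    if (rec["age_range"], rec["sex"], rec["race"]) in unique_cells
]
print(reidentified)  # [{'name': 'J. Doe', ...}] -- one person singled out
```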

A need for noise

Differential privacy keeps that data private. It’s a mathematical framework whereby a statistical output can’t be used to determine any individual’s data in a dataset, and the bureau’s algorithm for differential privacy is called TopDown. It injects “noise” into the data starting at the highest level (national), moving progressively downward. There are certain constraints placed around the kind of noise that can be introduced—for instance, the total number of people in a state or census block has to remain the same. But other demographic characteristics, like race or gender, are randomly reassigned to individual records within a set tranche of data. This way, the overall number of people with a certain characteristic remains constant, while the characteristics associated with any one record don’t describe an individual person. In other words, you’ll know how many women or Hispanic people are in a census block, just not exactly where.
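The bureau’s TopDown algorithm is far more elaborate, but the core mechanic can be sketched in a few lines of Python. Everything below is a simplified assumption for illustration (the noise distribution, the scale, the crude rebalancing step), not the bureau’s actual parameters: perturb the published counts with calibrated random noise, then post-process so a chosen invariant, here the total, still matches.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_counts(true_counts, epsilon, keep_total=True):
    """Simplified differential-privacy sketch: add Laplace noise
    (scale 1/epsilon for counting queries with sensitivity 1), then
    post-process to non-negative integers and, optionally, push the
    leftover difference into the largest cell so the published total
    still equals the true total (a stand-in for an 'invariant')."""
    counts = np.asarray(true_counts, dtype=float)
    noisy = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(np.round(noisy), 0, None)
    if keep_total:
        noisy[np.argmax(noisy)] += counts.sum() - noisy.sum()
    return noisy.astype(int)

# Hypothetical block-level counts for four demographic groups
true = [12, 7, 3, 1]
print(noisy_counts(true, epsilon=0.5))  # e.g., [14  6  2  1] -- total preserved
```

In the production system, the noise is drawn from a discrete distribution and allocated across geographic levels from the nation down to the block, with certain totals, such as state populations, held exactly invariant.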

“Differential privacy solves a particular problem, which is if you release a lot of information, a lot of statistics, based on the same set of confidential data, eventually somebody can piece together what that confidential data had to be,” says Simson Garfinkel, former senior computer scientist for confidentiality and data access at the Census Bureau.

Differential privacy was first used on data from the 2020 census. Even though one couldn’t identify a specific individual from the data, “you can still get an accurate count on things that are important for funding and voting rights,” says Moon Duchin, a mathematics professor at Tufts University who worked with census data to inform electoral maps in Alabama. That first use happened under the Trump presidency, though the reports themselves were published after he left office. Civil servants, not political appointees, are responsible for determining how census data is collected and analyzed. Emails obtained by the Brennan Center later showed that officials at the Commerce Department, which oversees the Census Bureau and was then led by Secretary Wilbur Ross, expressed an “unusually high degree” of interest in the “technical matters” of the process, which the bureau’s deputy director and COO, Ron Jarmin, called “unprecedented.”

It’s this data from the 2020 census that Republicans have taken issue with. On August 21, the Center for Renewing America, a right-wing think tank founded by Russ Vought, currently the director of the US Office of Management and Budget, published a blog post alleging that differential privacy “may have played a significant role in tilting the political scales favorably toward Democrats for apportionment and redistricting purposes.” The post goes on to acknowledge that, even if a citizenship question were added to the census—which Trump attempted during his first administration—the differential privacy “algorithm will be able to mask characteristic data, including citizenship status.”

Duchin and other experts who spoke to WIRED say that differential privacy does not change apportionment, or how seats in Congress are distributed—several red states, including Texas and Florida, gained representation after the 2020 census, while blue states like California lost representatives.

COUNTing the cost

On August 28, Republican Representative August Pfluger introduced the COUNT Act. If passed, it would add a citizenship question to the census and force the Census Bureau to “cease utilization of the differential privacy process.” Pfluger’s office did not immediately respond to a request for comment.

“Differential privacy is a punching bag that’s meant here as an excuse to redo the census,” says Duchin. “That is what’s going on, if you ask me.”

On October 6, Senator Jim Banks, a Republican from Indiana, sent a letter to Secretary of Commerce Howard Lutnick, urging him to “investigate and correct errors from the 2020 Census that handed disproportionate political power to Democrats and illegal aliens.” The letter goes on to allege that the use of differential privacy “alters the total population of individual voting districts.” Similar to the COUNT Act and the Renewing America post, the letter also states that the 2030 Census “must request citizenship status.”

Peter Bernegger, a Wisconsin-based “election integrity” activist who is facing a criminal charge of simulating the legal process for allegedly falsifying a subpoena, amplified Banks’ letter on X, alleging that the use of differential privacy was part of “election rigging by the Obama/Biden administrations.” Bernegger’s post was viewed more than 236,000 times.

Banks’ office and Bernegger did not immediately respond to a request for comment.

“No differential privacy was ever applied to the data used to apportion the House of Representatives, so the claim that seats in the House were affected is simply false,” says John Abowd, former associate director for research and methodology and chief scientist at the United States Census Bureau. Abowd oversaw the implementation of differential privacy while at the Census Bureau. He says that the data from the 2020 census has been successfully used by red and blue states, as well as redistricting commissions, and that the only difference from previous census data was that no one would be able to “reconstruct accurate, identifiable individual data to enhance the other databases that they use (voter rolls, drivers licenses, etc.).”

With a possible addition of the citizenship question, proposed by both Banks and the COUNT Act, Boyd says that census data would be even more sensitive, because that kind of information is not readily available in commercial data. “Plenty of data brokers would love to get their hands on that data.”

Shortly after Senator Banks published his letter, Abowd found himself in the spotlight. On October 9, the X account @amuse posted a blog-length post alleging that Abowd was the bureaucrat who “stole the House.” The post also alleged, without evidence, that the census results meant that “Republican states are projected to lose almost $90 billion in federal funds across the decade as a result of the miscounts. Democratic states are projected to gain $57 billion.” The account has more than 666,000 followers, including billionaire Elon Musk, venture capitalist Marc Andreessen, and US pardon attorney Ed Martin. (Abowd told WIRED he was “keeping an eye” on the post, which was viewed more than 360,000 times.) That same week, America First Legal, the conservative nonprofit founded by now deputy chief of staff for policy Stephen Miller, posted about a complaint the group had recently filed in Florida, challenging the 2020 census results, alleging they were based upon flawed statistical methods, one of which was differential privacy.

The results of all this, experts tell WIRED, are that fewer people will feel safe participating in the census and that the government will likely need to spend even more resources to try to get an accurate count. Undercounting could lead to skewed numbers that could impact everything from congressional representation to the amount of funding a municipality might receive from the government.

Neither the proposed COUNT Act nor Senator Banks’ letter outlines an alternative to differential privacy. This means that the Census Bureau would likely be left with two options: Publish data that could put people at risk (which could lead to legal consequences for its staff), or publish less data. “At present, I do not know of any alternative to differential privacy that can safeguard the personal data that the US Census Bureau uses in their work on the decennial census,” says Abraham Flaxman, an associate professor of health metrics sciences at the University of Washington, whose team conducted the study on transgender youth.

Getting rid of differential privacy is not a “light thing,” says a Census Bureau employee familiar with the bureau’s privacy methods who requested anonymity because they were not authorized to speak to the press. “It may be for the layperson. But the entire apparatus of disclosure avoidance at the bureau has been geared for the last almost 10 years on differential privacy.” According to the employee, there is no immediately clear method to replace differential privacy.

Boyd says that the safest bet would simply be “what is known as suppression, otherwise known as ‘do not publish.’” (This, according to Garfinkel, was the backup plan if differential privacy had not been implemented for the 2020 census.)

Another would be for the Census Bureau to only publish population counts, meaning that demographic information like the race or age of respondents would be left out. “This is a problem, because we use census data to combat discrimination,” says Boyd. “The consequences of losing this data is not being able to pursue equity.”

This story originally appeared on wired.com.

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

If things in America weren’t stupid enough, Texas is suing Tylenol maker

While the underlying cause or causes of autism spectrum disorder remain elusive and appear likely to be a complex interplay of genetic and environmental factors, President Trump and his anti-vaccine health secretary Robert F. Kennedy Jr.—neither of whom have any scientific or medical background whatsoever—have decided to pin the blame on Tylenol, a common pain reliever and fever reducer that has no proven link to autism.

And now, Texas Attorney General Ken Paxton is suing Tylenol’s maker, Kenvue, and Johnson & Johnson, which previously sold Tylenol, claiming that they have been “deceptively marketing Tylenol” while knowing that it “leads to a significantly increased risk of autism and other disorders.”

To back that claim, Paxton relies on the “considerable body of evidence… recently highlighted by the Trump Administration.”

Of course, there is no “considerable” evidence for this claim, only tenuous associations and conflicting studies. Trump and Kennedy’s justification for blaming Tylenol was revealed in a rambling, incoherent press conference last month, in which Trump spoke of a “rumor” about Tylenol and his “opinion” on the matter. Still, he firmly warned against its use, saying well over a dozen times: “don’t take Tylenol.”

“Don’t take Tylenol. There’s no downside. Don’t take it. You’ll be uncomfortable. It won’t be as easy maybe, but don’t take it if you’re pregnant. Don’t take Tylenol and don’t give it to the baby after the baby is born,” he said.

“Scientifically unfounded”

As Ars has reported previously, some studies have found an association between use of Tylenol (aka acetaminophen or paracetamol) and a higher risk of autism. But many of the studies finding such an association have significant flaws, and other studies have found no link. That includes a highly regarded Swedish study that compared autism risk among siblings with different acetaminophen exposures during pregnancy but otherwise similar genetic and environmental risks. Acetaminophen didn’t make a difference, suggesting other genetic and/or environmental factors might explain any associations. Further, even if there is a real association (that is, a correlation) between acetaminophen use and autism risk, that does not mean the pain reliever causes autism.

If things in America weren’t stupid enough, Texas is suing Tylenol maker Read More »

python-plan-to-boost-software-security-foiled-by-trump-admin’s-anti-dei-rules

Python plan to boost software security foiled by Trump admin’s anti-DEI rules

“Given the value of the grant to the community and the PSF, we did our utmost to get clarity on the terms and to find a way to move forward in concert with our values. We consulted our NSF contacts and reviewed decisions made by other organizations in similar circumstances, particularly The Carpentries,” the Python Software Foundation said.

Board voted unanimously to withdraw application

The Carpentries, which teaches computational and data science skills to researchers, said in June that it withdrew its grant proposal after “we were notified that our proposal was flagged for DEI content, namely, for ‘the retention of underrepresented students, which has a limitation or preference in outreach, recruitment, participation that is not aligned to NSF priorities.’” The Carpentries was also concerned about the National Science Foundation rule against grant recipients advancing or promoting DEI in “any” program, a change that took effect in May.

“These new requirements mean that, in order to accept NSF funds, we would need to agree to discontinue all DEI focused programming, even if those activities are not carried out with NSF funds,” The Carpentries’ announcement in June said, explaining the decision to rescind the proposal.

The Python Software Foundation similarly decided that it “can’t agree to a statement that we won’t operate any programs that ‘advance or promote’ diversity, equity, and inclusion, as it would be a betrayal of our mission and our community,” it said yesterday. The foundation board “voted unanimously to withdraw” the application.

The Python foundation said it is disappointed because the project would have offered “invaluable advances to the Python and greater open source community, protecting millions of PyPI users from attempted supply-chain attacks.” The plan was to “create new tools for automated proactive review of all packages uploaded to PyPI, rather than the current process of reactive-only review. These novel tools would rely on capability analysis, designed based on a dataset of known malware. Beyond just protecting PyPI users, the outputs of this work could be transferable for all open source software package registries, such as NPM and Crates.io, improving security across multiple open source ecosystems.”
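
As a rough illustration of what capability analysis means in this context, the sketch below statically inspects a package's Python source for imports that grant sensitive capabilities (network access, process execution, native code). It is a toy example of the general technique, not the PSF's proposed design, and the capability table is invented for illustration.

```python
import ast

# Hypothetical mapping from top-level imports to the capability they grant.
CAPABILITIES = {
    "socket": "network",
    "urllib": "network",
    "requests": "network",
    "subprocess": "process-execution",
    "ctypes": "native-code",
}

def scan_capabilities(source: str) -> set[str]:
    """Return the set of capabilities implied by a module's imports."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module.split(".")[0]]
        else:
            continue
        found.update(CAPABILITIES[n] for n in names if n in CAPABILITIES)
    return found

# A package that claims to be a date-parsing helper but opens sockets and
# spawns processes is a candidate for human review.
sample = "import socket\nimport subprocess\nfrom datetime import date\n"
print(scan_capabilities(sample))  # {'network', 'process-execution'}
```

A production system would presumably go well beyond this, handling dynamic imports and obfuscation and comparing each new release against a dataset of known malware, as the foundation's description suggests.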

The foundation is still hoping to do that work and ended its blog post with a call for donations from individuals and companies that use Python.

Python plan to boost software security foiled by Trump admin’s anti-DEI rules Read More »

australia’s-social-media-ban-is-“problematic,”-but-platforms-will-comply-anyway

Australia’s social media ban is “problematic,” but platforms will comply anyway

Social media platforms have agreed to comply with Australia’s social media ban for users under 16 years old, begrudgingly embracing the world’s most restrictive online child safety law.

On Tuesday, Meta, Snap, and TikTok confirmed to Australia’s parliament that they’ll start removing and deactivating more than a million underage accounts when the law’s enforcement begins on December 10, Reuters reported.

Firms risk fines of up to $32.5 million for failing to block underage users.

Age checks are expected to be spotty, however, and Australia is still “scrambling” to figure out “key issues around enforcement,” including detailing firms’ precise obligations, AFP reported.

An FAQ managed by Australia’s eSafety regulator noted that platforms will be expected to find the accounts of all users under 16.

Those users must be allowed to download their data easily before their account is removed.

Some platforms can otherwise allow users to simply deactivate and retain their data until they reach age 17. Meta and TikTok expect to go that route, but Australia’s regulator warned that “users should not rely on platforms to provide this option.”

Additionally, platforms must prepare to catch kids who skirt age gates, the regulator said, and must block anyone under 16 from opening a new account. Beyond that, they’re expected to prevent “workarounds” to “bypass restrictions,” such as kids using AI to generate fake IDs, deepfakes to trick face scans, or virtual private networks (VPNs) to spoof their location to a country with less restrictive child safety policies.

Kids discovered inappropriately accessing social media should be easy to report, too, Australia’s regulator said.

Australia’s social media ban is “problematic,” but platforms will comply anyway Read More »

trump’s-ucla-deal:-pay-us-$1b+,-and-we-can-still-cut-your-grants-again

Trump’s UCLA deal: Pay us $1B+, and we can still cut your grants again

On Friday, the California Supreme Court ordered the University of California system to release the details of a proposed deal from the federal government that would restore research grants suspended by the Trump administration. The proposed deal, first issued in August, had remained confidential as a suit filed by UCLA faculty made its way through appeals. With California’s top court now weighing in, university administrators have released the document, still marked “draft” and “confidential attorney work product.”

Most of the demands will seem unsurprising to those familiar with the Trump administration’s priorities: an end to all diversity programs and those supporting transgender individuals, plus a sharp crackdown on campus protests. The eye-opening part is the price tag: nearly $1.2 billion paid out, with UCLA also covering all the costs of compliance. And, as written, the deal wouldn’t stop the Trump administration from cutting the grants for other reasons or imposing more intrusive regulations, such as those mentioned in its university compact.

Familiar concerns

In many ways, the proposed deal is much more focused than the odd list of demands the administration sent Harvard University earlier this year, in that it targets issues the administration has returned to repeatedly. These include an end to all diversity programs at both the faculty and student levels. It demands that UCLA agree to “remove explicit or implicit goals for compositional diversity based on race, sex, or ethnicity, including eliminating any secretive or proxy-based ‘diversity’ hiring processes.”

Foreign students are also targeted, with UCLA being told to set up a program to ensure that no “foreign students likely to engage in anti-Western, anti-American, or antisemitic disruptions or harassment” are admitted. The document adds: “UCLA will also develop training materials to socialize international students to the norms of a campus dedicated to free inquiry and open debate.” The hospital associated with the university would also be forbidden from providing any gender-affirming care, and UCLA would be required not only to bar transgender athletes but also to strip past transgender athletes of any achievements.

Trump’s UCLA deal: Pay us $1B+, and we can still cut your grants again Read More »