Content moderation


X’s globe-trotting defense of ads on Nazi posts violates TOS, Media Matters says

Part of the problem appeared to be decreased spending from big brands that did return, reportedly including Apple. Other dips were linked to X’s decision to partner with adtech companies, splitting ad revenue with Magnite, Google, and PubMatic, Business Insider reported. The CEO of marketing consultancy Ebiquity, Ruben Schreurs, told Business Insider that most of the top 100 global advertisers he works with were still hesitant to invest in X, confirming “no signs of a mass return.”

For X, the ad boycott has tanked revenue for years, even putting X on the brink of bankruptcy, Musk claimed. The billionaire paid $44 billion for the platform, and at the end of 2024, Fidelity estimated that X was worth just $9.4 billion, CNN reported.

But at the start of 2025, analysts predicted that advertisers may return to X to garner political favor with Musk, who remains a senior advisor in the Trump administration. Perhaps more importantly in the short-term, sources also told Bloomberg that X could potentially raise as much as Musk paid—$44 billion—from investors willing to help X pay down its debt to support new payments and video products.

That could put a Band-Aid on X’s financial wounds as Yaccarino attempts to persuade major brands that X isn’t toxic (while X sues some of them) and Musk tries to turn the social media platform once known as Twitter into an “everything app” as ubiquitous in the US as WeChat in China.

MMFA alleges that its research, which shows how toxic X is today, has been stifled by Musk’s suits, but other groups have filled the gap. The Center for Countering Digital Hate has resumed its reporting since defeating X’s lawsuit last March. Most recently, University of California, Berkeley, researchers conducted a February analysis showing that “hate speech on the social media platform X rose about 50 percent” in the eight months after Musk’s 2022 purchase. That finding suggests advertisers had good reason to be spooked by changes at X, and that those changes continue to keep them at bay today.

“Musk has continually tried to blame others for this loss in revenue since his takeover,” MMFA’s complaint said, alleging that all three suits were filed to intimidate MMFA “for having dared to publish an article Musk did not like.”



“Zero warnings”: Longtime YouTuber rails against unexplained channel removal

Artemiy Pavlov, the founder of a small but mighty music software brand called Sinesvibes, spent more than 15 years building a YouTube channel with all original content to promote his business’ products. Over all those years, he never had any issues with YouTube’s automated content removal system—until Monday, when YouTube, without issuing a single warning, abruptly deleted his entire channel.

“What a ‘nice’ way to start a week!” Pavlov posted on Bluesky. “Our channel on YouTube has been deleted due to ‘spam and deceptive policies.’ Which is the biggest WTF moment in our brand’s history on social platforms. We have only posted demos of our own original products, never anything else….”

Officially, YouTube told Pavlov that his channel violated YouTube’s “spam, deceptive practices, and scam policy,” but Pavlov could think of no videos that might be labeled as violative.

“We have nothing to hide,” Pavlov told Ars, calling YouTube’s decision to delete the channel with “zero warnings” a “terrible, terrible day for an independent, honest software brand.”

“We have never been involved with anything remotely shady,” Pavlov said. “We have never taken a single dollar dishonestly from anyone. And we have thousands of customers that stand by our brand.”

Ars saw Pavlov’s post and reached out to YouTube to find out why the channel was targeted for takedown. About three hours later, the channel was suddenly restored. That’s remarkably fast, as YouTube can sometimes take days or weeks to review an appeal. A YouTube spokesperson later confirmed that the Sinesvibes channel was reinstated through the regular appeals process, perhaps indicating that YouTube could see the removal was an obvious mistake.

Developer calls for more human review

For small brands like Sinesvibes, even spending half a day in limbo was a cause for crisis. Immediately, the brand worried about 50 broken product pages for one of its distributors, as well as “hundreds if not thousands of news articles posted about our software on dozens of different websites.” Unsure if the channel would ever be restored, Sinesvibes spent most of Monday surveying the damage.

Now that the channel is restored, Pavlov is stuck confronting how much of the Sinesvibes brand depends on the YouTube channel remaining online while still grappling with uncertainty since the reason behind the ban remains unknown. He told Ars that’s why, for small brands, simply having a channel reinstated doesn’t resolve all their concerns.



Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta’s vice president of human resources, Janelle Gale, told Meta employees that the shift came because the “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing.”

It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.

Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.

Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing Meta’s hate speech policies, The Intercept reported; Zuckerberg said the prior restrictions had become “out of touch with mainstream discourse.” Those changes included allowing anti-trans slurs previously banned, as well as permitting women to be called “property” and gay people to be called “mentally ill,” Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation” and alleged that Meta appeared to be willing to “normalize anti-LGBTQ hatred for profit.”



Elon Musk’s X may succeed in blocking Calif. content moderation law on appeal


Elon Musk’s X previously failed to block the law on First Amendment grounds.


Elon Musk’s fight defending X’s content moderation decisions isn’t just with hate speech researchers and advertisers. He has also long been battling regulators, and this week, he seemed positioned to secure a potentially big win in California, where he’s hoping to permanently block a law that he claims unconstitutionally forces his platform to justify its judgment calls.

At a hearing Wednesday, three judges in the 9th US Circuit Court of Appeals seemed inclined to agree with Musk that a California law requiring disclosures from social media companies that clearly explain their content moderation choices likely violates the First Amendment.

Passed in 2022, AB-587 forces platforms like X to submit a “terms of service report” detailing how they moderate several categories of controversial content. Those categories include hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference, which X’s lawyer, Joel Kurtzberg, told judges yesterday “are the most controversial categories of so-called awful but lawful speech.”

The law would seemingly require more transparency than ever from X, making it easy for users to track exactly how much controversial content X flags and removes—and perhaps most notably for advertisers, how many users viewed concerning content.

To block the law, X sued in 2023, arguing that California was trying to dictate its terms of service and force the company to make statements on content moderation that could generate backlash. X worried that the law “impermissibly” interfered with both “the constitutionally protected editorial judgments” of social media companies, as well as impacted users’ speech by requiring companies “to remove, demonetize, or deprioritize constitutionally protected speech that the state deems undesirable or harmful.”

Any companies found to be non-compliant could face stiff fines of up to $15,000 per violation per day, which X considered “draconian.” But last year, a lower court declined to block the law, prompting X to appeal, and yesterday, the appeals court seemed more sympathetic to X’s case.

At the hearing, Kurtzberg told judges that the law was “deeply threatening to the well-established First Amendment interests” of an “extraordinary diversity of” people, which is why X’s complaint was supported by briefs from reporters, freedom of the press advocates, First Amendment scholars, “conservative entities,” and people across the political spectrum.

All share “a deep concern about a statute that, on its face, is aimed at pressuring social media companies to change their content moderation policies, so as to carry less or even no expression that’s viewed by the state as injurious to its people,” Kurtzberg told judges.

When the court pointed out that seemingly the law simply required X to abide by content moderation policies for each category defined in its own terms of service—and did not compel X to adopt any policy or position that it did not choose—Kurtzberg pushed back.

“They don’t mandate us to define the categories in a specific way, but they mandate us to take a position on what the legislature makes clear are the most controversial categories to moderate and define,” Kurtzberg said. “We are entitled to respond to the statute by saying we don’t define hate speech or racism. But the report also asks about policies that are supposedly, quote, ‘intended’ to address those categories, which is a judgment call.”

“This is very helpful,” Judge Anthony Johnstone responded. “Even if you don’t yourself define those categories in the terms of service, you read the law as requiring you to opine or discuss those categories, even if they’re not part of your own terms,” and “you are required to tell California essentially your views on hate speech, extremism, harassment, foreign political interference, how you define them or don’t define them, and what you choose to do about them?”

“That is correct,” Kurtzberg responded, noting that X considered those categories the most “fraught” and “difficult to define.”



Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

[Image: Screenshot from the documentary Who Is Bobby Kennedy?]

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadowbanning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.



Bluesky finally gets rid of invite codes, lets everyone join


After more than a year as an exclusive invite-only social media platform, Bluesky is now open to the public, so anyone can join without needing a once-coveted invite code.

In a blog, Bluesky said that requiring invite codes helped Bluesky “manage growth” while building features that allow users to control what content they see on the social platform.

When Bluesky debuted, many viewed it as a potential Twitter killer, but limited access to Bluesky may have weakened momentum. As of January 2024, Bluesky has more than 3 million users. That’s far fewer than X (formerly Twitter), which some estimates suggest currently boasts more than 400 million global users.

But Bluesky CEO Jay Graber wrote in a blog last April that the app needed time because its goal was to piece together a new kind of social network built on its own decentralized protocol, AT Protocol. This technology allows users to freely port their social media accounts to different social platforms—including followers—rather than being locked into walled-off experiences on a platform owned by “a single company” like Meta’s Threads.

Perhaps most critically, the team wanted time to build out content moderation features before opening Bluesky to the masses to “prioritize user safety from the start.”

Bluesky plans to take a threefold approach to content moderation. The first layer is automated filtering that removes illegal and harmful content, like child sexual abuse materials. Beyond that, Bluesky will soon give users extra layers of protection, including community labels and options that let admins running servers filter content manually.

Labeling services will be rolled out “in the coming weeks,” the blog said. These labels will make it possible for individuals or organizations to run their own moderation services, such as a trusted fact-checking organization. Users who trust these sources can subscribe to labeling services that filter out or appropriately label different types of content, like “spam” or “NSFW.”

“The human-generated label sets can be thought of as something similar to shared mute/block lists,” Bluesky explained last year.
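The mechanics described above can be sketched in code. The following is a minimal, hypothetical Python illustration of label-based filtering in the spirit of Bluesky's design, not the real AT Protocol API: labeling services emit labels about posts, and each user decides which labelers to trust and what action each label value should trigger. All names and data structures here are invented for illustration.

```python
# Hypothetical sketch of label-based moderation, Bluesky-style.
# Labelers attach (subject, value) labels to posts; users subscribe to
# labelers they trust and map label values to actions.
# None of these names are the real AT Protocol API.

from dataclasses import dataclass


@dataclass
class Label:
    labeler: str  # identifier of the labeling service
    subject: str  # URI of the labeled post
    value: str    # e.g. "spam", "nsfw"


@dataclass
class UserPrefs:
    trusted_labelers: set  # labelers this user subscribes to
    actions: dict          # label value -> "hide" | "warn" | "show"


def apply_labels(posts, labels, prefs):
    """Return (visible_posts, warnings) honoring the user's preferences.

    Labels from untrusted labelers are ignored entirely; "hide" drops
    the post, "warn" keeps it but records the label values.
    """
    # Group trusted label values by the post they apply to.
    by_subject = {}
    for lab in labels:
        if lab.labeler in prefs.trusted_labelers:
            by_subject.setdefault(lab.subject, []).append(lab.value)

    visible, warnings = [], {}
    for post in posts:
        values = by_subject.get(post, [])
        if any(prefs.actions.get(v) == "hide" for v in values):
            continue  # user chose to hide posts with this label
        warn = [v for v in values if prefs.actions.get(v) == "warn"]
        if warn:
            warnings[post] = warn  # show, but behind a warning
        visible.append(post)
    return visible, warnings
```

In this sketch, subscribing to a labeler is just adding it to `trusted_labelers`, which mirrors the shared mute/block-list analogy: the labeler's opinions only affect users who opt in.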

Currently, Bluesky is recruiting partners for labeling services and did not immediately respond to Ars’ request to comment on any initial partnerships already formed.

It appears that Bluesky is hoping to bring in new users while introducing some of its flashiest features. Within the next month, Bluesky will also “be rolling out an experimental early version of ‘federation,’ or the feature that makes the network so open and customizable,” the blog said. The sales pitch is simple:

On Bluesky, you’ll have the freedom to choose (and the right to leave) instead of being held to the whims of private companies or black box algorithms. And wherever you go, your friends and relationships can go with you.

Developers interested in experimenting with the earliest version of AT Protocol can start testing out self-hosting servers now.

In addition to allowing users to customize content moderation, Bluesky also provides ways to customize feeds. Anyone joining will be defaulted to only see posts from users they follow, but they can also set up filters to discover content they enjoy without relying on a company’s algorithm to learn what interests them.

Bluesky users who sat on invite codes over the past year have joked about their uselessness now, with some designating themselves as legacy users. Seeming to reference Twitter’s once-coveted blue checks, one Bluesky user responding to a post from Graber joked, “When does everyone from the invite-only days get their Bluesky Elder profile badge?”
