content moderation


YouTube denies AI was involved with odd removals of tech tutorials


YouTubers suspect AI is bizarrely removing popular video explainers.

This week, tech content creators began to suspect that AI was making it harder to share some of the most highly sought-after tech tutorials on YouTube, but now YouTube is denying that the odd removals were due to automation.

Creators grew alarmed when educational videos that YouTube had allowed for years were suddenly flagged as “dangerous” or “harmful,” with no apparent way to trigger human review to overturn the removals. AI seemed to be running the show, with creators’ appeals getting denied faster than a human could possibly review them.

Late Friday, a YouTube spokesperson confirmed that videos flagged by Ars have been reinstated, promising that YouTube will take steps to ensure that similar content isn’t removed in the future. But, to creators, it remains unclear why the videos got taken down, as YouTube claimed that both initial enforcement decisions and decisions on appeals were not the result of an automation issue.

Shocked creators were stuck speculating

Rich White, a computer technician who runs an account called CyberCPU Tech, had two videos removed that demonstrated workarounds to install Windows 11 on unsupported hardware.

These videos are popular, White told Ars, with people looking to bypass Microsoft account requirements each time a new build is released. For tech content creators like White, “these are bread and butter videos,” dependably yielding “extremely high views,” he said.
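For context, the workarounds these tutorials typically cover boil down to a handful of widely circulated Windows Setup tweaks. A hedged sketch follows; the registry values are community-documented rather than official, they vary by Windows 11 build, and Microsoft has removed some of these bypasses in newer releases:

```bat
:: Run from the Shift+F10 command prompt inside Windows 11 Setup.
:: Community-documented values that tell Setup to skip hardware checks
:: (availability varies by build).
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1 /f

:: The Microsoft account bypass often shown in these videos: during the
:: out-of-box experience, open a command prompt and run the bundled script.
:: Microsoft has removed this script in newer Windows 11 builds.
oobe\bypassnro
```

None of this is piracy in itself; as White notes, the installer still requires a valid Windows license.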

Because there’s such high demand, many tech content creators’ channels are filled with these kinds of videos. White’s account has “countless” examples, he said, and in the past, YouTube even featured his most popular video in the genre on a trending list.

To White and others, it’s unclear exactly what has changed on YouTube that triggered removals of this type of content.

YouTube only seemed to be removing recently posted content, White told Ars. However, if the takedowns ever impacted older content, entire channels documenting years of tech tutorials risked disappearing in “the blink of an eye,” another YouTuber behind a tech tips account called Britec09 warned after one of his videos was removed.

The stakes appeared high for everyone, White warned, in a video titled “YouTube Tech Channels in Danger!”

White had already censored content that he planned to post on his channel, fearing it wouldn’t be worth the risk of potentially losing his account, which began in 2020 as a side hustle but has since become his primary source of income. If he continues to change the content he posts to avoid YouTube penalties, it could hurt his account’s reach and monetization. Britec told Ars that he paused a sponsorship due to the uncertainty that he said has already hurt his channel and caused a “great loss of income.”

YouTube’s policies are strict, with the platform known to swiftly remove accounts that receive three strikes for violating community guidelines within 90 days. But, curiously, White had not received any strikes following his content removals. Although Britec reported that his account had received a strike following his video’s removal, White told Ars that YouTube so far had only given him two warnings, so his account is not yet at risk of a ban.

Creators weren’t sure why YouTube might deem this content as harmful, so they tossed around some theories. It seemed possible, White suggested in his video, that AI was detecting this content as “piracy,” but that shouldn’t be the case, he claimed, since his guides require users to have a valid license to install Windows 11. He also thinks it’s unlikely that Microsoft prompted the takedowns, suggesting tech content creators have a “love-hate relationship” with the tech company.

“They don’t like what we’re doing, but I don’t think they’re going to get rid of it,” White told Ars, suggesting that Microsoft “could stop us in our tracks” if it were motivated to end workarounds. But Microsoft doesn’t do that, White said, perhaps because it benefits from popular tutorials that attract swarms of Windows 11 users who otherwise may not use “their flagship operating system” if they can’t bypass Microsoft account requirements.

Those users could become loyal to Microsoft, White said. And eventually, some users may even “get tired of bypassing the Microsoft account requirements, or Microsoft will add a new feature that they’ll happily get the account for, and they’ll relent and start using a Microsoft account,” White suggested in his video. “At least some people will, not me.”

Microsoft declined Ars’ request to comment.

To White, it seemed possible that YouTube was leaning on AI to catch more violations but perhaps recognized the risk of over-moderation and, therefore, wasn’t allowing AI to issue strikes on his account.

But that was just a “theory” that he and other creators came up with and couldn’t confirm, since YouTube’s creator-support chatbot also seemed “suspiciously AI-driven,” apparently auto-responding even when a “supervisor” is connected, White said in his video.

Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear was that sudden changes to automated content moderation could unexpectedly knock them off YouTube for posting videos that seem ordinary and commonplace in tech circles, White and Britec said.

“We are not even sure what we can make videos on,” White said. “Everything’s a theory right now because we don’t have anything solid from YouTube.”

YouTube recommends making the content it’s removing

White’s channel gained popularity after YouTube highlighted an early trending video that he made, showing a workaround to install Windows 11 on unsupported hardware. Following that video, his channel’s views spiked, and then he gradually built up his subscriber base to around 330,000.

In the past, White’s videos in that category had been flagged as violative, but human review got them quickly reinstated.

“They were striked for the same reason, but at that time, I guess the AI revolution hadn’t taken over,” White said. “So it was relatively easy to talk to a real person. And by talking to a real person, they were like, ‘Yeah, this is stupid.’ And they brought the videos back.”

Now, YouTube suggests that human review is causing the removals, which likely doesn’t completely ease creators’ fears about arbitrary takedowns.

Britec’s video was also flagged as dangerous or harmful. He has managed his account, which currently has nearly 900,000 subscribers, since 2009, and he’s worried he risked losing “years of hard work,” he said in his video.

Britec told Ars that “it’s very confusing” for panicked tech content creators trying to understand what content is permissible. It’s particularly frustrating, he noted in his video, that YouTube’s creator tool for generating post “ideas” seemed to contradict moderators’ content warnings, continuing to recommend that creators make content on specific topics like workarounds to install Windows 11 on unsupported hardware.

Screenshot from Britec09’s YouTube video, showing YouTube prompting creators to make content that could get their channels removed. Credit: via Britec09

“This tool was to give you ideas for your next video,” Britec said. “And you can see right here, it’s telling you to create content on these topics. And if you did this, I can guarantee you your channel will get a strike.”

From there, creators hit what White described as a “brick wall,” with one of his appeals denied within one minute, which felt like it must be an automated decision. As Britec explained, “You will appeal, and your appeal will be rejected instantly. You will not be speaking to a human being. You’ll be speaking to a bot or AI. The bot will be giving you automated responses.”

YouTube insisted that the decisions weren’t automated, even when an appeal was denied within one minute.

White told Ars that it’s easy for creators to be discouraged and censor their channels rather than fight with the AI. After wasting “an hour and a half trying to reason with an AI about why I didn’t violate the community guidelines” once his first appeal was quickly denied, he “didn’t even bother using the chat function” after the second appeal was denied even faster, White confirmed in his video.

“I simply wasn’t going to do that again,” White said.

All week, the panic spread, reaching fans who follow tech content creators. On Reddit, people recommended saving tutorials lest they risk YouTube taking them down.

“I’ve had people come out and say, ‘This can’t be true. I rely on this every time,’” White told Ars.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Researcher threatens X with lawsuit after falsely linking him to French probe

X claimed that David Chavalarias, “who spearheads the ‘Escape X’ campaign”—which is “dedicated to encouraging X users to leave the platform”—was chosen to assess the data with one of his prior research collaborators, Maziyar Panahi.

“The involvement of these individuals raises serious concerns about the impartiality, fairness, and political motivations of the investigation, to put it charitably,” X alleged. “A predetermined outcome is not a fair one.”

However, Panahi told Reuters that he believes X blamed him “by mistake,” based only on his prior association with Chavalarias. He further clarified that “none” of his projects with Chavalarias “ever had any hostile intent toward X” and threatened legal action to protect himself against defamation if he receives “any form of hate speech” due to X’s seeming error and mischaracterization of his research. An Ars review suggests his research on social media platforms predates Musk’s ownership of X and has probed whether certain recommendation systems potentially make platforms toxic or influence presidential campaigns.

“The fact my name has been mentioned in such an erroneous manner demonstrates how little regard they have for the lives of others,” Panahi told Reuters.

X denies being an “organized gang”

X suggests that it “remains in the dark as to the specific allegations made against the platform,” accusing French police of “distorting French law in order to serve a political agenda and, ultimately, restrict free speech.”

The press release is indeed vague on what exactly French police are seeking to uncover. All French authorities say is that they are probing X for alleged “tampering with the operation of an automated data processing system by an organized gang” and “fraudulent extraction of data from an automated data processing system by an organized gang.” But later, a French magistrate, Laure Beccuau, clarified in a statement that the probe was based on complaints that X is spreading “an enormous amount of hateful, racist, anti-LGBT+ and homophobic political content, which aims to skew the democratic debate in France,” Politico reported.


EU presses pause on probe of X as US trade talks heat up

While Trump and Musk have fallen out this year after forming a political alliance during the 2024 election, the US president has directly attacked EU penalties on US companies, calling them a “form of taxation” and comparing fines on tech companies to “overseas extortion.”

Despite the US pressure, commission president Ursula von der Leyen has explicitly stated Brussels will not change its digital rulebook. In April, the bloc imposed a total of €700 million in fines on Apple and Facebook owner Meta for breaching antitrust rules.

But unlike the Apple and Meta investigations, which fall under the Digital Markets Act, there are no clear legal deadlines under the DSA. That gives the bloc more political leeway on when it announces its formal findings. The EU also has probes into Meta and TikTok under its content moderation rulebook.

The commission said the “proceedings against X under the DSA are ongoing,” adding that the enforcement of “our legislation is independent of the current ongoing negotiations.”

It added that it “remains fully committed to the effective enforcement of digital legislation, including the Digital Services Act and the Digital Markets Act.”

Anna Cavazzini, a European lawmaker for the Greens, said she expected the commission “to move on decisively with its investigation against X as soon as possible.”

“The commission must continue making changes to EU regulations an absolute red line in tariff negotiations with the US,” she added.

Alongside its probe into X’s transparency breaches, Brussels is also looking into content moderation at the company after Musk hosted Alice Weidel of the far-right Alternative for Germany for a conversation on the social media platform ahead of the country’s elections.

Some European lawmakers, as well as the Polish government, are also pressing the commission to open an investigation into Musk’s Grok chatbot after it spewed out antisemitic tropes last week.

X said it disagreed “with the commission’s assessment of the comprehensive work we have done to comply with the Digital Services Act and the commission’s interpretation of the Act’s scope.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


X’s globe-trotting defense of ads on Nazi posts violates TOS, Media Matters says

Part of the problem appeared to be decreased spending from big brands that did return, like Apple, reportedly. Other dips were linked to X’s decision to partner with adtech companies, splitting ad revenue with Magnite, Google, and PubMatic, Business Insider reported. The CEO of marketing consultancy Ebiquity, Ruben Schreurs, told Business Insider that most of the top 100 global advertisers he works with were still hesitant to invest in X, confirming “no signs of a mass return.”

For X, the ad boycott has tanked revenue for years, even putting X on the brink of bankruptcy, Musk claimed. The billionaire paid $44 billion for the platform, and at the end of 2024, Fidelity estimated that X was worth just $9.4 billion, CNN reported.

But at the start of 2025, analysts predicted that advertisers may return to X to garner political favor with Musk, who remains a senior advisor in the Trump administration. Perhaps more importantly in the short-term, sources also told Bloomberg that X could potentially raise as much as Musk paid—$44 billion—from investors willing to help X pay down its debt to support new payments and video products.

That could put a Band-Aid on X’s financial wounds as Yaccarino attempts to persuade major brands that X isn’t toxic (while X sues some of them) and Musk tries to turn the social media platform once known as Twitter into an “everything app” as ubiquitous in the US as WeChat in China.

MMFA alleges that its research, which shows how toxic X is today, has been stifled by Musk’s suits, but other groups have filled the gap. The Center for Countering Digital Hate has resumed its reporting since defeating X’s lawsuit last March, and, most recently, University of California, Berkeley, researchers conducted a February analysis showing that “hate speech on the social media platform X rose about 50 percent” in the eight months after Musk’s 2022 purchase. That suggests advertisers had potentially good reason to be spooked by changes at X, and that those changes continue to keep them at bay today.

“Musk has continually tried to blame others for this loss in revenue since his takeover,” MMFA’s complaint said, alleging that all three suits were filed to intimidate MMFA “for having dared to publish an article Musk did not like.”


“Zero warnings”: Longtime YouTuber rails against unexplained channel removal

Artemiy Pavlov, the founder of a small but mighty music software brand called Sinesvibes, spent more than 15 years building a YouTube channel with all original content to promote his business’ products. Over all those years, he never had any issues with YouTube’s automated content removal system—until Monday, when YouTube, without issuing a single warning, abruptly deleted his entire channel.

“What a ‘nice’ way to start a week!” Pavlov posted on Bluesky. “Our channel on YouTube has been deleted due to ‘spam and deceptive policies.’ Which is the biggest WTF moment in our brand’s history on social platforms. We have only posted demos of our own original products, never anything else….”

Officially, YouTube told Pavlov that his channel violated YouTube’s “spam, deceptive practices, and scam policy,” but Pavlov could think of no videos that might be labeled as violative.

“We have nothing to hide,” Pavlov told Ars, calling YouTube’s decision to delete the channel with “zero warnings” a “terrible, terrible day for an independent, honest software brand.”

“We have never been involved with anything remotely shady,” Pavlov said. “We have never taken a single dollar dishonestly from anyone. And we have thousands of customers that stand by our brand.”

Ars saw Pavlov’s post and reached out to YouTube to find out why the channel was targeted for takedown. About three hours later, the channel was suddenly restored. That’s remarkably fast, as YouTube can sometimes take days or weeks to review an appeal. A YouTube spokesperson later confirmed that the Sinesvibes channel was reinstated through the regular appeals process, perhaps indicating that YouTube could see that Sinesvibes’ removal was an obvious mistake.

Developer calls for more human review

For small brands like Sinesvibes, even spending half a day in limbo felt like a crisis. Immediately, the brand worried about 50 broken product pages for one of its distributors, as well as “hundreds if not thousands of news articles posted about our software on dozens of different websites.” Unsure if the channel would ever be restored, Sinesvibes spent most of Monday surveying the damage.

Now that the channel is restored, Pavlov is stuck confronting how much of the Sinesvibes brand depends on the YouTube channel remaining online while still grappling with uncertainty since the reason behind the ban remains unknown. He told Ars that’s why, for small brands, simply having a channel reinstated doesn’t resolve all their concerns.


Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta’s vice president of human resources, Janelle Gale, told Meta employees that the shift came because the “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing.”

It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.

Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.

Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing Meta’s hate speech policies, The Intercept reported, which Zuckerberg said were “out of touch with mainstream discourse.” Those changes included allowing anti-trans slurs previously banned, as well as permitting women to be called “property” and gay people to be called “mentally ill,” Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation” and alleged that Meta appeared to be willing to “normalize anti-LGBTQ hatred for profit.”


Elon Musk’s X may succeed in blocking Calif. content moderation law on appeal

Judgment call —

Elon Musk’s X previously failed to block the law on First Amendment grounds.


Elon Musk’s fight defending X’s content moderation decisions isn’t just with hate speech researchers and advertisers. He has also long been battling regulators, and this week, he seemed positioned to secure a potentially big win in California, where he’s hoping to permanently block a law that he claims unconstitutionally forces his platform to justify its judgment calls.

At a hearing Wednesday, three judges in the 9th US Circuit Court of Appeals seemed inclined to agree with Musk that a California law requiring disclosures from social media companies that clearly explain their content moderation choices likely violates the First Amendment.

Passed in 2022, AB-587 forces platforms like X to submit a “terms of service report” detailing how they moderate several categories of controversial content. Those categories include hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference, which X’s lawyer, Joel Kurtzberg, told judges yesterday “are the most controversial categories of so-called awful but lawful speech.”

The law would seemingly require more transparency than ever from X, making it easy for users to track exactly how much controversial content X flags and removes—and perhaps most notably for advertisers, how many users viewed concerning content.

To block the law, X sued in 2023, arguing that California was trying to dictate its terms of service and force the company to make statements on content moderation that could generate backlash. X worried that the law “impermissibly” interfered with “the constitutionally protected editorial judgments” of social media companies and impacted users’ speech by requiring companies “to remove, demonetize, or deprioritize constitutionally protected speech that the state deems undesirable or harmful.”

Any companies found to be non-compliant could face stiff fines of up to $15,000 per violation per day, which X considered “draconian.” But last year, a lower court declined to block the law, prompting X to appeal, and yesterday, the appeals court seemed more sympathetic to X’s case.

At the hearing, Kurtzberg told judges that the law was “deeply threatening to the well-established First Amendment interests” of an “extraordinary diversity of” people, which is why X’s complaint was supported by briefs from reporters, freedom of the press advocates, First Amendment scholars, “conservative entities,” and people across the political spectrum.

All share “a deep concern about a statute that, on its face, is aimed at pressuring social media companies to change their content moderation policies, so as to carry less or even no expression that’s viewed by the state as injurious to its people,” Kurtzberg told judges.

When the court pointed out that seemingly the law simply required X to abide by content moderation policies for each category defined in its own terms of service—and did not compel X to adopt any policy or position that it did not choose—Kurtzberg pushed back.

“They don’t mandate us to define the categories in a specific way, but they mandate us to take a position on what the legislature makes clear are the most controversial categories to moderate and define,” Kurtzberg said. “We are entitled to respond to the statute by saying we don’t define hate speech or racism. But the report also asks about policies that are supposedly, quote, ‘intended’ to address those categories, which is a judgment call.”

“This is very helpful,” Judge Anthony Johnstone responded. “Even if you don’t yourself define those categories in the terms of service, you read the law as requiring you to opine or discuss those categories, even if they’re not part of your own terms,” and “you are required to tell California essentially your views on hate speech, extremism, harassment, foreign political interference, how you define them or don’t define them, and what you choose to do about them?”

“That is correct,” Kurtzberg responded, noting that X considered those categories the most “fraught” and “difficult to define.”


Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Screenshot from the documentary Who Is Bobby Kennedy? Credit: via YouTube

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadow-banning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.


Bluesky finally gets rid of invite codes, lets everyone join

Bluesky finally gets rid of invite codes, lets everyone join

After more than a year as an exclusive invite-only social media platform, Bluesky is now open to the public, so anyone can join without needing a once-coveted invite code.

In a blog, Bluesky said that requiring invite codes helped Bluesky “manage growth” while building features that allow users to control what content they see on the social platform.

When Bluesky debuted, many viewed it as a potential Twitter killer, but limited access may have weakened its momentum. As of January 2024, Bluesky has more than 3 million users, significantly fewer than X (formerly Twitter), which estimates suggest currently boasts more than 400 million global users.

But Bluesky CEO Jay Graber wrote in a blog last April that the app needed time because its goal was to piece together a new kind of social network built on its own decentralized protocol, AT Protocol. This technology allows users to freely port their social media accounts to different social platforms—including followers—rather than being locked into walled-off experiences on a platform owned by “a single company” like Meta’s Threads.

Perhaps most critically, the team wanted time to build out content moderation features before opening Bluesky to the masses to “prioritize user safety from the start.”

Bluesky plans to take a threefold approach to content moderation. The first layer is automated filtering that removes illegal, harmful content like child sexual abuse materials. Beyond that, Bluesky will soon give users extra layers of protection, including community labels and options to enable admins running servers to filter content manually.

Labeling services will be rolled out “in the coming weeks,” the blog said. These labels will make it possible for individuals or organizations to run their own moderation services, such as a trusted fact-checking organization. Users who trust these sources can subscribe to labeling services that filter out or appropriately label different types of content, like “spam” or “NSFW.”

“The human-generated label sets can be thought of as something similar to shared mute/block lists,” Bluesky explained last year.

Currently, Bluesky is recruiting partners for labeling services and did not immediately respond to Ars’ request to comment on any initial partnerships already formed.

It appears that Bluesky is hoping to bring in new users while introducing some of its flashiest features. Within the next month, Bluesky will also “be rolling out an experimental early version of ‘federation,’ or the feature that makes the network so open and customizable,” the blog said. The sales pitch is simple:

On Bluesky, you’ll have the freedom to choose (and the right to leave) instead of being held to the whims of private companies or black box algorithms. And wherever you go, your friends and relationships can go with you.

Developers interested in experimenting with the earliest version of AT Protocol can start testing out self-hosting servers now.

In addition to letting users customize content moderation, Bluesky also provides ways to customize feeds. By default, anyone joining will see only posts from users they follow, but they can also set up filters to discover content they enjoy without relying on a company’s algorithm to learn what interests them.

Bluesky users who sat on invite codes over the past year have joked about their uselessness now, with some designating themselves as legacy users. Seeming to reference Twitter’s once-coveted blue checks, one Bluesky user responding to a post from Graber joked, “When does everyone from the invite-only days get their Bluesky Elder profile badge?”
