Discord faces backlash over age checks after data breach exposed 70,000 IDs


Discord to block adult content unless users verify ages with selfies or IDs.

Discord is facing backlash after announcing that all users will soon be required to verify ages to access adult content by sharing video selfies or uploading government IDs.

According to Discord, it’s relying on AI technology that verifies age on the user’s device, either by evaluating a user’s facial structure or by comparing a selfie to a government ID. Although government IDs will be checked off-device, the selfie data will never leave the user’s device, Discord emphasized. Both forms of data will be promptly deleted after the user’s age is estimated.

In a blog, Discord confirmed that “a phased global rollout” would begin in “early March,” at which point all users globally would be defaulted to “teen-appropriate” experiences.

To unblur sensitive media or access age-restricted channels, most users will likely have to undergo Discord’s age estimation process. Discord said most users will only need to verify their ages once, but some “may be asked to use multiple methods, if more information is needed to assign an age group,” the blog said.

On social media, alarmed Discord users protested the move, doubting whether Discord could be trusted with their most sensitive information after Discord age verification data was recently breached. In October, hackers stole government IDs of 70,000 Discord users from a third-party service that Discord previously trusted to verify ages in the United Kingdom and Australia.

At that time, Discord told users that the hackers were hoping to use the stolen data to “extort a financial ransom from Discord.” In October, Ars Senior Security Editor Dan Goodin joined others warning that “the best advice for people who have submitted IDs to Discord or any other service is to assume they have been or soon will be stolen by hackers and put up for sale or used in extortion scams.”

Users now fear that Discord will only become a bigger target for bad actors as it collects more sensitive information worldwide.

It’s no surprise then that hundreds of Discord users on Reddit slammed the decision to expand age verification globally shortly after The Verge broke the news. On a PC gaming subreddit discussing alternative apps for gamers, one user wrote, “Hell, Discord has already had one ID breach, why the fuck would anyone verify on it after that?”

“This is how Discord dies,” another user declared. “Seriously, uploading any kind of government ID to a 3rd party company is just asking for identity theft on a global scale.”

Many users seem just as sketched out about sharing face scans. On the Discord app subreddit, some users vowed to never submit selfies or IDs, fearing that breaches may be inevitable and suspecting Discord of downplaying privacy risks while allowing data harvesting.

Who can access Discord age-check data?

Discord’s system is supposed to make sure that only users have access to their age-check data, which Discord said would never leave their phones.

The company is hoping to convince users that it has tightened security since the breach by partnering with k-ID, an increasingly popular age-check service provider also used by platforms including Meta and Snap.

However, self-described Discord users on Reddit aren’t so sure, with some going so far as to pick apart k-ID’s privacy policy to understand exactly how age is verified without data ever leaving the device.

“The wording is pretty unclear and inconsistent even if you dig down to the k-ID privacy policy,” one Redditor speculated. “Seems that ID scans are uploaded to k-ID servers, they delete them, but they also mention using ‘trusted 3rd parties’ for verification, who may or may not delete it.” That user seemingly gave up on finding reassurances in either company’s privacy policies, noting that “everywhere along the chain it reads like ‘we don’t collect your data, we forward it to someone else… .’”

Discord did not immediately respond to Ars’ requests to comment directly on how age checks work without data leaving the device.

To better understand user concerns, Ars reviewed the privacy policies, noting that k-ID said its “facial age estimation” tool is provided by a Swiss company called Privately.

“We don’t actually see any faces that are processed via this solution,” k-ID’s policy said.

That part does seem vague, since Privately isn’t explicitly included in the “we” in that statement. However, further down, the policy more clearly states that “neither k-ID nor its service providers collect any biometric information from users when they interact with the solution. k-ID only receives and stores the outcome of the age check process.” In that section, “service providers” seems to refer to partners like Discord, which integrate k-ID’s age checks, rather than third parties like Privately that actually conduct the age check.

Asked for comment, a k-ID spokesperson told Ars that “the Facial Age Estimation technology runs entirely on the user’s device in real time when they are performing the verification. That means there is no video or image transmitted, and the estimation happens locally. The only data to leave the device is a pass/fail of the age threshold which is what Discord receives (and some performance metrics that contain no personal data).”
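The flow the spokesperson describes, where estimation runs locally and only a pass/fail verdict is transmitted, can be made concrete with a short illustrative sketch. This is hypothetical code, not Discord’s or k-ID’s actual implementation; the stand-in estimator and function names are invented to show the data flow:

```python
# Hypothetical sketch of the on-device pattern k-ID describes: the selfie is
# processed locally, and only a boolean verdict ever leaves the device.
# The "model" below is a stand-in; real systems run a trained neural network.

def estimate_age_on_device(selfie_pixels):
    """Stand-in for an on-device age-estimation model (invented logic)."""
    # A real model would infer age from facial features; this placeholder
    # returns a fixed estimate just to make the data flow visible.
    return 25

def age_check_verdict(selfie_pixels, threshold=18):
    """Return only the boolean verdict; the selfie never leaves this function."""
    estimate = estimate_age_on_device(selfie_pixels)  # runs locally
    return estimate >= threshold  # the only value sent to the platform

# The platform receives True/False, never the image or the raw estimate.
print(age_check_verdict([0.1, 0.2, 0.3]))  # -> True
```

Under this design, even a breach of the platform’s servers would expose only pass/fail outcomes, which is the privacy property both companies are claiming.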

K-ID’s spokesperson told Ars that no third parties store personal data shared during age checks.

“k-ID does not receive personal data from Discord when performing age-assurance,” k-ID’s spokesperson said. “This is an intentional design choice grounded in data protection and data minimisation principles. There is no storage of personal data by k-ID or any third parties, regardless of the age assurance method used.”

Privately’s website offers a little more information on how on-device age estimation works, along with more reassurance that data won’t leave devices.

Privately’s services were designed to minimize data collection and prioritize anonymity to comply with the European Union’s General Data Protection Regulation, Privately noted. “No user biometric or personal data is captured or transmitted,” Privately’s website said, while bragging that “our secret sauce is our ability to run very performant models on the user device or user browser to implement a privacy-centric solution.”

The company’s privacy policy offers slightly more detail, noting that the company avoids relying on the cloud while running AI models on local devices.

“Our technology is built using on-device edge-AI that facilitates data minimization so as to maximise user privacy and data protection,” the privacy policy said. “The machine learning based technology that we use (for age estimation and safeguarding) processes user’s data on their own devices, thereby avoiding the need for us or for our partners to export user’s personal data onto any form of cloud services.”

Additionally, the policy said, “our technology solutions are built to operate mostly on user devices and to avoid sending any of the user’s personal data to any form of cloud service. For this we use specially adapted machine learning models that can be either deployed or downloaded on the user’s device. This avoids the need to transmit and retain user data outside the user device in order to provide the service.”

Finally, Privately explained that it also employs a “double blind” implementation to avoid knowing the origin of age estimation requests. That supposedly ensures that Privately only knows the result of age checks and cannot connect the result to a user on a specific platform.

Discord expects to lose users

Some Discord users may never be asked to verify their ages, even if they try to access age-restricted content. Savannah Badalich, Discord’s global head of product policy, told The Verge that Discord “is also rolling out an age inference model that analyzes metadata, like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.”

“If we have a high confidence that they are an adult, they will not have to go through the other age verification flows,” Badalich said.

Badalich confirmed that Discord is bracing for some users to leave Discord over the update but suggested that “we’ll find other ways to bring users back.”

On Reddit, Discord users complained that age verification is easy to bypass, forcing adults to share sensitive information without keeping kids away from harmful content. In Australia, where Discord’s policy first rolled out, some kids claimed that Discord never even tried to estimate their ages, while others found it easy to trick k-ID by using AI videos or altering their appearances to look older. A teen girl relied on fake eyelashes to do the trick, while one 13-year-old boy was estimated to be over 30 years old after scrunching his face to seem more wrinkled.

Badalich told The Verge that Discord doesn’t expect the tools to work perfectly but acts quickly to block workarounds, like teens using Death Stranding’s photo mode to skirt age gates. However, questions remain about the accuracy of Discord’s age estimation model in assessing minors’ ages, in particular.

It may be noteworthy that Privately only claims that its technology is “proven to be accurate to within 1.3 years, for 18-20-year-old faces, regardless of a customer’s gender or ethnicity.” But experts told Ars last year that flawed age-verification technology still frequently struggles to distinguish minors from adults, especially when differentiating between a 17- and 18-year-old, for example.

Perhaps notably, Discord’s prior scandal occurred after hackers stole government IDs that users shared as part of the appeal process in order to fix an incorrect age estimation. Appeals could remain the most vulnerable part of this process, The Verge’s report indicated. Badalich confirmed that a third-party vendor would be reviewing appeals, with the only reassurance for users seemingly that IDs shared during appeals “are deleted quickly, in most cases, immediately after age confirmation.”

On Reddit, Discord users bracing for the changes remain upset. A disgruntled user suggested that “corporations like Facebook and Discord, will implement easily passable, cheapest possible, bare minimum under the law verification, to cover their ass from a lawsuit,” while forcing users to trust that their age-check data is secure.

Another user joked that she’d be more willing to trust that selfies never leave a user’s device if Discord were “willing to pay millions to every user” whose “scan does leave a device.”

This story was updated on February 9 to clarify that government IDs are checked off-device.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Meta denies torrenting porn to train AI, says downloads were for “personal use”

Instead, Meta argued, available evidence “is plainly indicative” that the flagged adult content was torrented for “private personal use”—since the small amount linked to Meta IP addresses and employees represented only “a few dozen titles per year intermittently obtained one file at a time.”

“The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,” Meta’s filing said.

For example, unlike lawsuits raised by book authors whose works are part of an enormous dataset used to train AI, the activity on Meta’s corporate IP addresses only amounted to about 22 downloads per year. That is nowhere near the “concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” Meta argued.

Further, that alleged activity can’t even reliably be linked to any Meta employee, Meta argued.

Strike 3 “does not identify any of the individuals who supposedly used these Meta IP addresses, allege that any were employed by Meta or had any role in AI training at Meta, or specify whether (and which) content allegedly downloaded was used to train any particular Meta model,” Meta wrote.

Meanwhile, “tens of thousands of employees,” as well as “innumerable contractors, visitors, and third parties access the Internet at Meta every day,” Meta argued. So while it’s “possible one or more Meta employees” downloaded Strike 3’s content over the last seven years, “it is just as possible” that a “guest, or freeloader,” or “contractor, or vendor, or repair person—or any combination of such persons—was responsible for that activity,” Meta suggested.

Other alleged activity included a claim that a Meta contractor was directed to download adult content at his father’s house, but those downloads, too, “are plainly indicative of personal consumption,” Meta argued. That contractor worked as an “automation engineer,” Meta noted, with no apparent basis provided for why he would be expected to source AI training data in that role. “No facts plausibly” tie “Meta to those downloads,” Meta claimed.

Meta pirated and seeded porn for years to train AI, lawsuit says

Evidence may prove Meta seeded more content

Seeking evidence to back its own copyright infringement claims, Strike 3 Holdings searched “its archive of recorded infringement captured by its VXN Scan and Cross Reference tools” and found 47 “IP addresses identified as owned by Facebook infringing its copyright protected Works.”

The data allegedly demonstrates a “continued unauthorized distribution” over “several years.” And Meta allegedly did not stop its seeding after Strike 3 Holdings confronted the tech giant with this evidence—despite the IP data supposedly being verified through an industry-leading provider called MaxMind.

Strike 3 Holdings shared a screenshot of MaxMind’s findings. Credit: via Strike 3 Holdings’ complaint

Meta also allegedly attempted to “conceal its BitTorrent activities” through “six Virtual Private Clouds” that formed a “stealth network” of “hidden IP addresses,” the lawsuit alleged, which seemingly implicated a “major third-party data center provider” as a partner in Meta’s piracy.

An analysis of these IP addresses allegedly found “data patterns that matched infringement patterns seen on Meta’s corporate IP Addresses” and included “evidence of other activity on the BitTorrent network including ebooks, movies, television shows, music, and software.” The seemingly non-human patterns documented on both sets of IP addresses suggest the data was for AI training and not for personal use, Strike 3 Holdings alleged.

Perhaps most shockingly, considering that a Meta employee joked “torrenting from a corporate laptop doesn’t feel right,” Strike 3 Holdings further alleged that it found “at least one residential IP address of a Meta employee” infringing its copyrighted works. That suggests Meta may have directed an employee to torrent pirated data outside the office to obscure the data trail.

The adult site operator did not identify the employee or the major data center discussed in its complaint, noting in a subsequent filing that it recognized the risks to Meta’s business and its employees’ privacy of sharing sensitive information.

In total, the company alleged that evidence shows “well over 100,000 unauthorized distribution transactions” linked to Meta’s corporate IPs. Strike 3 Holdings is hoping the evidence will lead a jury to find Meta liable for direct copyright infringement or charge Meta with secondary and vicarious copyright infringement if the jury finds that Meta successfully distanced itself by using the third-party data center or an employee’s home IP address.

“Meta has the right and ability to supervise and/or control its own corporate IP addresses, as well as the IP addresses hosted in off-infra data centers, and the acts of its employees and agents infringing Plaintiffs’ Works through their residential IPs by using Meta’s AI script to obtain content through BitTorrent,” the complaint said.

Millions of OnlyFans paywalls make it hard to detect child sex abuse, cops say

OnlyFans’ paywalls make it hard for police to detect child sexual abuse materials (CSAM) on the platform, Reuters reported—especially new CSAM that can be harder to uncover online.

Because each OnlyFans creator posts their content behind their own paywall, five specialists in online child sexual abuse told Reuters that it’s hard to independently verify just how much CSAM is posted. Cops would seemingly need to subscribe to each account to monitor the entire platform, one expert who aids in police CSAM investigations, Trey Amick, suggested to Reuters.

OnlyFans claims that the amount of CSAM on its platform is extremely low. Out of 3.2 million accounts sharing “hundreds of millions of posts,” OnlyFans only removed 347 posts as suspected CSAM in 2023. Each post was voluntarily reported to the CyberTipline of the National Center for Missing and Exploited Children (NCMEC), which OnlyFans told Reuters has “full access” to monitor content on the platform.

However, that intensified monitoring seems to have only just begun. NCMEC just got access to OnlyFans in late 2023, the child safety group told Reuters. And NCMEC seemingly can’t scan the entire platform at once, telling Reuters that its access was “limited” exclusively “to OnlyFans accounts reported to its CyberTipline or connected to a missing child case.”

Similarly, OnlyFans told Reuters that police do not have to subscribe to investigate a creator’s posts, but the platform only grants free access to accounts when there’s an active investigation. That means once police suspect that CSAM is being exchanged on an account, they get “full access” to review “account details, content, and direct messages,” Reuters reported.

But that access doesn’t aid police hoping to uncover CSAM shared on accounts not yet flagged for investigation. That’s a problem, a Reuters investigation found, because it’s easy for creators to make a new account, where bad actors can mask their identities to avoid OnlyFans’ “controls meant to hold account holders responsible for their own content,” one detective, Edward Scoggins, told Reuters.

Evading OnlyFans’ CSAM detection seems easy

OnlyFans told Reuters that “would-be creators must provide at least nine pieces of personally identifying information and documents, including bank details, a selfie while holding a government photo ID, and—in the United States—a Social Security number.”

“All this is verified by human judgment and age-estimation technology that analyzes the selfie,” OnlyFans told Reuters. On OnlyFans’ site, the platform further explained that “we continuously scan our platform to prevent the posting of CSAM. All our content moderators are trained to identify and swiftly report any suspected CSAM.”

However, Reuters found that none of these controls worked 100 percent of the time to stop bad actors from sharing CSAM. And the same seemingly holds true for some minors motivated to post their own explicit content. One girl told Reuters that she evaded age verification first by using an adult’s driver’s license to sign up, then by taking over an account of an adult user.

An OnlyFans spokesperson told Ars that the low amount of CSAM reported to NCMEC is a “testament to the rigorous safety controls OnlyFans has in place.”

“OnlyFans is proud of the work we do to aggressively target, report, and support the investigations and prosecutions of anyone who seeks to abuse our platform in this way,” OnlyFans’ spokesperson told Ars. “Unlike many other platforms, the lack of anonymity and absence of end-to-end encryption on OnlyFans means that reports are actionable by law enforcement and prosecutors.”

Pornhub prepares to block five more states rather than check IDs

The number of states blocked by Pornhub will soon nearly double.

Pornhub will soon be blocked in five more states as the adult site continues to fight what it considers privacy-infringing age-verification laws that require Internet users to provide an ID to access pornography.

On July 1, according to a blog post on the adult site announcing the impending block, Pornhub visitors in Indiana, Idaho, Kansas, Kentucky, and Nebraska will be “greeted by a video featuring” adult entertainer Cherie Deville, “who explains why we had to make the difficult decision to block them from accessing Pornhub.”

Pornhub explained that—similar to blocks in Texas, Utah, Arkansas, Virginia, Montana, North Carolina, and Mississippi—the site refuses to comply with soon-to-be-enforceable age-verification laws in this new batch of states that allegedly put users at “substantial risk” of identity theft, phishing, and other harms.

Age-verification laws requiring adult site visitors to submit “private information many times to adult sites all over the Internet” normalize the unnecessary disclosure of personally identifiable information (PII), Pornhub argued, warning, “this is not a privacy-by-design approach.”

Pornhub does not outright oppose age verification but advocates for laws that require device-based age verification, which allows users to access adult sites after authenticating their identity on their devices. That’s “the best and most effective solution for protecting minors and adults alike,” Pornhub argued, because the age-verification technology is proven and less PII would be shared.

“Users would only get verified once, through their operating system, not on each age-restricted site,” Pornhub’s blog said, claiming that “this dramatically reduces privacy risks and creates a very simple process for regulators to enforce.”

A spokesperson for Pornhub-owner Aylo told Ars that “unfortunately, the way many jurisdictions worldwide have chosen to implement age verification is ineffective, haphazard, and dangerous.”

“Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy,” Aylo’s spokesperson told Ars. “Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.”

Age-verification laws are harmful, Pornhub says

Pornhub’s big complaint with current age-verification laws is that these laws are hard to enforce and seem to make it riskier than ever to visit an adult site.

“Since age verification software requires users to hand over extremely sensitive information, it opens the door for the risk of data breaches,” Pornhub’s blog said. “Whether or not your intentions are good, governments have historically struggled to secure this data. It also creates an opportunity for criminals to exploit and extort people through phishing attempts or fake [age verification] processes, an unfortunate and all too common practice.”

Over the past few years, the risk of identity theft or stolen PII on both widely used and smaller niche adult sites has been well-documented.

Hundreds of millions of people were impacted by major leaks exposing PII shared with popular adult sites like Adult Friend Finder and Brazzers in 2016, while likely tens of thousands of users were targeted on eight poorly secured adult sites in 2018. Niche and free sites have also been vulnerable to attacks, including millions collectively exposed through breaches of fetish porn site Luscious in 2019 and MyFreeCams in 2021.

And those are just the big breaches that make headlines. In 2019, Kaspersky Lab reported that malware targeting online porn account credentials more than doubled in 2018, and researchers analyzing 22,484 pornography websites estimated that 93 percent were leaking user data to a third party.

That’s why Pornhub argues that, as states have passed age-verification laws requiring ID, they’ve “introduced harm” by redirecting visitors to adult sites that have fewer privacy protections and worse security, allegedly exposing users to more threats.

As an example, Pornhub reported, traffic to Pornhub in Louisiana “dropped by approximately 80 percent” after the state’s age-verification law passed. That allegedly showed not just how few users were willing to show an ID to access their popular platform, but also how “very easily” users could simply move to “pirate, illegal, or other non-compliant sites that don’t ask visitors to verify their age.”

Pornhub has continued to argue that states passing laws like Louisiana’s cannot effectively enforce the laws and are simply shifting users to make riskier choices when accessing porn.

“The Louisiana law and other copycat state-level laws have no regulator, only civil liability, which results in a flawed enforcement regime, effectively making it an option for platform operators to comply,” Pornhub’s blog said. As one of the world’s most popular adult platforms, Pornhub would surely be targeted for enforcement if found to be non-compliant, while smaller adult sites perhaps plagued by security risks and disincentivized to check IDs would go unregulated, the thinking goes.

Aylo’s spokesperson shared 2023 Similarweb data with Ars, showing that sites complying with age-verification laws in Virginia, including Pornhub and xHamster, lost substantial traffic while seven non-compliant sites saw a sharp uptick in traffic. Similar trends were observed in Google trends data in Utah and Mississippi, while market shares were seemingly largely maintained in California, a state not yet checking IDs to access adult sites.

Butts, breasts, and genitals now explicitly allowed on Elon Musk’s X

Adult content has always proliferated on Twitter, but the platform now called X recently clarified its policy to officially allow “consensually produced and distributed adult nudity or sexual behavior.”

X’s rules seem simple. As long as content is “properly labeled and not prominently displayed,” users can share material—including AI-generated or animated content—”that is pornographic or intended to cause sexual arousal.”

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed,” X’s policy said.

The policy update seemingly reflects X’s core mission to defend all legal speech. It protects a wide range of sexual expression, including depictions of explicit or implicit sexual behavior, simulated sexual intercourse, full or partial nudity, and close-ups of genitals, buttocks, or breasts.

“Sexual expression, whether visual or written, can be a legitimate form of artistic expression,” X’s policy said. “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality.”

Today, X Support promoted the update on X, confirming that “we have launched Adult Content and Violent Content policies to bring more clarity of our Rules and transparency into enforcement of these areas. These policies replace our former Sensitive Media and Violent Speech policies—but what we enforce against hasn’t changed.”

Seemingly also unchanged: none of this content can be monetized, as X’s ad policy says that “to ensure a positive user experience and a healthy conversation on the platform, X prohibits the promotion of adult sexual content globally.”

Under the policy, adult content is also prohibited from appearing in live videos, profile pictures, headers, list banners, or community cover photos.

X has been toying with the idea of fully embracing adult content and has even planned a feature for adult creators that could position X as an OnlyFans rival. That plan was delayed, Platformer reported in 2022, after red-teaming flagged a seemingly insurmountable obstacle to the launch: “Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale.”

The new adult content policy still emphasizes that non-consensual adult content is prohibited, but it’s unclear if the platform has gotten any better at distinguishing between consensually produced content and nonconsensual material. X did not immediately respond to Ars’ request to comment.

For adult content to be allowed on the platform, X now requires content warnings so that “users who do not wish to see it can avoid it” and “children below the age of 18 are not exposed to it.”

Users who plan to regularly post adult content can adjust their account’s media settings to place a label on all their images and videos. That results in a content warning for any visitor of that account’s profile, except for “people who have opted in to see possibly sensitive content,” who “will still see your account without the message.”

Users who only occasionally share adult content can choose to avoid the account label and instead edit an image or video to add a one-time label to any individual post, flagging just that post as sensitive.

Once a label is applied, any users under 18 will be blocked from viewing the post, X said.

Twitch allowing more nudity after disproportionately banning female streamers

Twitch confirmed its policy banning nudity was sexist.

Twitch users have finally pushed the interactive livestreaming service to allow more nudity after years of banning mostly female-presenting streamers who were penalized for violating sexual content policies that Twitch now admits were too confusing.

In one prominent example, just two days before the policy change, Twitch banned OnlyFans model Morgpie for posting “topless” streams. Although Morgpie’s videos were shot to imply nudity, they “never actually showed content that explicitly violated Twitch’s sexual content policies,” TechCrunch reported. To critics, Twitch’s rationale for banning Morgpie remains unclear.

In a blog, Twitch Chief Customer Trust Officer Angela Hession wrote that Twitch “received consistent feedback from streamers that our current policies around sexual content are confusing and that it can be difficult to know how their content will be interpreted based on these policies.”

Hession confirmed that Twitch’s policy prohibiting “content that ‘deliberately highlighted breasts, buttocks or pelvic region,’ even when fully clothed” was confusing.

“Streamers found it difficult to determine what was prohibited and what was allowed and often evaluating whether or not a stream violated this portion of the policy was subjective,” Hession wrote.

Combined with another policy that restricted “sexually suggestive” content, Twitch’s former policy was found to be “overly punitive” and “resulted in female-presenting streamers being disproportionately penalized,” Hession wrote.

In an effort to eliminate confusion, Twitch announced that it had consolidated the two separate policies addressing sexual content under Twitch’s community guidelines into a single sexual content policy. That policy “specifies the types of sexual content that are prohibited on Twitch” and clarifies enforcement.

Another key part of the policy overhaul, Twitch has also begun allowing some sexual content that was previously restricted, so long as appropriate Content Classification Labels (CCLs) indicating sexual themes are applied.

Now, labeled content that “deliberately highlighted breasts, buttocks or pelvic region” is allowed, as well as content with “fictionalized (drawn, animated, or sculpted) fully exposed female-presenting breasts and/or genitals or buttocks regardless of gender” and “body writing on female-presenting breasts and/or buttocks regardless of gender.” Streamers can also label content featuring “erotic dances that involve disrobing or disrobing gestures, such as strip teases.”

These labels will ensure that content is not removed but also prevent mature content from being promoted on the homepage, Hession wrote, so that Twitch users don’t encounter sexual content they don’t wish to see. The only exceptions will be made for streams labeled as including mature-rated games and profanity, which will still appear in the main feed on the homepage. Streams with sexual themes labels will, however, still be recommended in the left bar of the homepage, where a thumbnail isn’t displayed.

“We believe that accurate content labeling is key to helping viewers get the experience they expect, and now that we can enable appropriate labeling of sexual content using CCLs we believe that some of the restrictions in our former policies are no longer required,” Hession wrote. “In addition to providing clarity, these updates will also reduce the risk of inconsistent enforcement and bring our policy more in line with other social media services.”

Failing to label sexual content can result in penalties like warnings, but not suspensions, Hession wrote. Twitch will automatically apply labels to unlabeled sexual content, and repeated failures to label content could result in a label “temporarily locked onto the stream.”

Some sexual content is still prohibited, including pornographic content, youth nudity, “fictionalized sexual acts or masturbation,” and “simulated sexual activity or erotic roleplay with other players in online games.” Uploading such prohibited content can result in content removals or account suspensions. A complete overview of what’s still banned can be found here.

For many Twitch users, changes to the sexual content policy likely feel long overdue. Although her ban is reportedly still in effect, Morgpie praised Twitch’s update in a statement to Dexerto.

“With the updated terms of service, content on Twitch containing mature themes will be allowed but no longer pushed on the homepage of the site,” Morgpie told Dexerto. “I think this is the best possible outcome, because it gives creators much more freedom, while also keeping this content from reaching the wrong audience. Bravo, Twitch!”
