age verification


Fury over Discord’s age checks explodes after shady Persona test in UK


Persona confirmed all age-check data from Discord’s UK test was deleted.

Shortly after Discord announced that all users will soon be defaulted to teen experiences until their ages are verified, the messaging platform faced immediate backlash.

One of the major complaints was that Discord planned to collect more government IDs as part of its global age verification process. It shocked many that Discord would be so bold so soon after a third-party breach of a former age check partner’s services exposed 70,000 Discord users’ government IDs.

Attempting to reassure users, Discord claimed that most users wouldn’t have to show ID, instead relying on video selfies using AI to estimate ages, which raised separate privacy concerns. In the future, perhaps behavioral signals would override the need for age checks for most users, Discord suggested, seemingly downplaying the risk that sensitive data would be improperly stored.

Discord didn’t hide that it planned to continue requesting IDs for any user appealing an incorrect age assessment, and users weren’t happy, since that is exactly how the prior breach happened. Responding to critics, Discord claimed that the majority of ID data was promptly deleted. Specifically, Savannah Badalich, Discord’s global head of product policy, told The Verge that IDs shared during appeals “are deleted quickly—in most cases, immediately after age confirmation.”

It’s unsurprising then that backlash exploded after Discord posted, and then weirdly deleted, a disclaimer on an FAQ about Discord’s age assurance policies that contradicted Discord’s hyped short timeline for storing IDs. An archived version of the page shows the note shared this warning:

“Important: If you’re located in the UK, you may be part of an experiment where your information will be processed by an age-assurance vendor, Persona. The information you submit will be temporarily stored for up to 7 days, then deleted. For ID document verification, all details are blurred except your photo and date of birth, so only what’s truly needed for age verification is used.”

Critics felt that Discord was obscuring not just how long IDs may be stored, but also the entities collecting information. Discord did not provide details on what the experiment was testing or how many users were affected, and Persona was not listed as a partner on its platform.

Asked for comment, Discord told Ars that only a small number of users were included in the experiment, which ran for less than one month. That test has since concluded, Discord confirmed, and Persona is no longer an active vendor partnering with Discord. Moving forward, Discord promised to “keep our users informed as vendors are added or updated.”

While Discord seeks to distance itself from Persona, Rick Song, Persona’s CEO, has been stuck responding to the mounting backlash. Hoping to quell fears that any of the UK data collected during the experiment risked being breached, he told Ars that all the data of verified individuals involved in Discord’s test has been deleted.

Persona draws fire amid Discord fury

This all seemingly started after Discord was forced to find age verification solutions when Australia’s under-16 social media ban and the United Kingdom’s Online Safety Act came into effect.

It seems that in the UK, Discord struggled to find partners, as the messaging service wasn’t just trying to stop minors from accessing adult content but also needed to block adults from messaging minors.

Setting aside known issues with accuracy in today’s age estimation technology, there’s an often-overlooked nuance to how age solutions work, particularly when children’s safety shapes platforms’ decisions. Age checks that are good enough to block kids from accessing adult content may not be rigorous enough to stop tech-savvy adults bent on contacting minors; the UK’s OSA required that Discord’s age checks do both.

It seems likely that Discord expected Persona to be a partner that the UK’s OSA enforcers would approve. Persona had previously been approved as an age verification service on Reddit, which shares similarly complex age verification goals with Discord.

For Persona, the partnership came at a time when many Discord users globally were closely monitoring the service, trying to decide whether they trusted Discord with their age check data.

After Discord shocked users by abruptly retracting the disclaimer about the Persona experiment, mistrust swelled, and scrutiny of Persona intensified.

On X and other social media platforms, critics warned that Palantir co-founder Peter Thiel’s Founders Fund was a major investor in Persona. They worried Thiel might have influence over Persona or access to Persona’s data, or, worse, that Thiel’s ties to the Trump administration might mean the government had access to it. As fears spread that Discord data may one day be fed into government facial recognition systems, conspiracies swirled, increasing heat on Persona and leaving Song with no choice but to cautiously confront allegations.

Hackers exposed Persona database

Perhaps most problematic for Persona, the mass outrage prompted hackers to investigate. They quickly exposed a “workaround” to avoid Persona’s age checks on Discord, The Rage, an independent publication that covers financial surveillance, reported. But more concerning for privacy advocates, hackers also “found a Persona frontend exposed to the open Internet on a US government authorized server.”

“In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting—and a parallel implementation that appears designed to serve federal agencies,” The Rage reported.

As The Rage reported, and Song confirmed to Ars, Persona does not currently have any government contracts. Instead, the exposed service “appears to be powered by an OpenAI chatbot,” The Rage noted.

OpenAI is highlighted as an active partner on Persona’s website, which claims Persona screens millions of users for OpenAI each month. According to The Rage, “the publicly exposed domain, titled ‘openai-watchlistdb.withpersona.com,’” appears to “query identity verification requests on an OpenAI database” that has a “FedRAMP-authorized parallel implementation of the software called ‘withpersona-gov.com.’”

Hackers warned “that OpenAI may have created an internal database for Persona identity checks that spans all OpenAI users via its internal watchlistdb,” seemingly exploiting the “opportunity to go from comparing users against a single federal watchlist, to creating the watchlist of all users themselves.”

OpenAI did not immediately respond to Ars’ request to comment.

Persona denies government, ICE ties

On Wednesday, Persona’s chief operating officer, Christie Kim, sought to reassure Persona customers as the Discord controversy grew. In an email, Kim said that Persona invests “heavily in infrastructure, compliance, and internal training to ensure sensitive data is handled responsibly,” and not exposed.

“Over the past week, multiple social media posts and online articles have circulated repeating misleading claims about Persona, insinuating conspiracies around our work with Discord and our investors,” Kim wrote.

Noting that Persona does not “typically engage with online speculation,” Kim said that the scandal required a direct response “because we operate in a sensitive space and your trust in us is foundational to our partnership.”

As expected, Kim noted that Persona is not partnered with federal agencies, including the Department of Homeland Security or Immigration and Customs Enforcement (ICE).

“Transparently, we are actively working on a couple of potential contracts which would be publicly visible if we move forward,” Kim wrote. “However, these engagements are strictly for workforce account security of government employees and do not include ICE or any agency within the Department of Homeland Security.”

Kim acknowledged that Thiel’s Founders Fund is an investor but said that investors do not have access to Persona data and that Thiel was not involved in Persona’s operations.

“He is not on our board, does not advise us, has no role in our operations or decision-making, and is not directly involved with Persona in any way,” Kim wrote. “Persona and Palantir share no board members and have no business relationship with each other.”

In the email, Kim confirmed that Persona was planning a PR blitz to go on the defensive, speaking with media to clarify the narrative. She apologized for any inconvenience that the heightened scrutiny on the company’s services may have caused.

That scrutiny has likely spooked platforms that might otherwise have gravitated to Persona as a vendor that seems savvy about government approvals.

Persona combats ongoing trust issues

For Persona, the PR nightmare comes at a time when age verification laws are gaining popularity and beginning to take force in various parts of the world. Persona’s background in verifying identities for financial services to prevent fraud seems to make its services—which The Rage noted combine facial recognition with financial reporting—an appealing option for platforms seeking a solution that will appease regulators.

But because of Persona’s background in financial services and fraud protection, its data retention policies—which require some data be retained for legal and audit purposes—will likely unsettle anyone wary of a tech company amassing a massive database of government IDs. Such databases are viewed as hugely attractive targets for bad actors behind costly breaches, and Discord’s users have already been burned once.

On X, Song responded to one of the hackers exposing the Persona database—a user named Celeste with the handle @vmfunc—aiming to provide more transparency into how Persona was addressing the flagged issues. In the thread, he shared screenshots of emails documenting his correspondence with Celeste over security concerns.

The correspondence showed that Celeste credited Persona for quickly fixing the front-end issue but also noted that it was hard to trust Persona’s story about government and Palantir ties, since the company wouldn’t put more information on the record. Additionally, Persona’s compliance team should be concerned that the company had not yet started an “in-depth security review,” Celeste said.

“Unfortunately, there is no way I can fully trust you here and you know this,” Celeste wrote, “but I’m trying to act in good faith” by explicitly stating that “we found zero references” to ICE or other entities of concern to critics “in all source files we found.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Discord faces backlash over age checks after data breach exposed 70,000 IDs


Discord to block adult content unless users verify ages with selfies or IDs.

Discord is facing backlash after announcing that all users will soon be required to verify ages to access adult content by sharing video selfies or uploading government IDs.

According to Discord, it’s relying on AI technology that verifies age on the user’s device, either by evaluating a user’s facial structure or by comparing a selfie to a government ID. Although government IDs will be checked off-device, the selfie data will never leave the user’s device, Discord emphasized. Both forms of data will be promptly deleted after the user’s age is estimated.

In a blog, Discord confirmed that “a phased global rollout” would begin in “early March,” at which point all users globally would be defaulted to “teen-appropriate” experiences.

To unblur sensitive media or access age-restricted channels, the majority of users will likely have to undergo Discord’s age estimation process. Most users will only need to verify their ages once, Discord said, but some users “may be asked to use multiple methods, if more information is needed to assign an age group,” the blog said.

On social media, alarmed Discord users protested the move, doubting whether Discord could be trusted with their most sensitive information after Discord age verification data was recently breached. In October, hackers stole government IDs of 70,000 Discord users from a third-party service that Discord previously trusted to verify ages in the United Kingdom and Australia.

At that time, Discord told users that the hackers were hoping to use the stolen data to “extort a financial ransom from Discord.” In October, Ars Senior Security Editor Dan Goodin joined others warning that “the best advice for people who have submitted IDs to Discord or any other service is to assume they have been or soon will be stolen by hackers and put up for sale or used in extortion scams.”

Users now fear that Discord will only become a bigger target for bad actors as more sensitive information is collected worldwide.

It’s no surprise then that hundreds of Discord users on Reddit slammed the decision to expand age verification globally shortly after The Verge broke the news. On a PC gaming subreddit discussing alternative apps for gamers, one user wrote, “Hell, Discord has already had one ID breach, why the fuck would anyone verify on it after that?”

“This is how Discord dies,” another user declared. “Seriously, uploading any kind of government ID to a 3rd party company is just asking for identity theft on a global scale.”

Many users seem just as sketched out about sharing face scans. On the Discord app subreddit, some users vowed to never submit selfies or IDs, fearing that breaches may be inevitable and suspecting Discord of downplaying privacy risks while allowing data harvesting.

Who can access Discord age-check data?

Discord’s system is supposed to make sure that only users have access to their age-check data, which Discord said would never leave their phones.

The company is hoping to convince users that it has tightened security after the breach by partnering with k-ID, an increasingly popular age-check service provider that’s also used by social platforms from Meta and Snap.

However, self-described Discord users on Reddit aren’t so sure, with some taking the extra step of picking apart k-ID’s privacy policy to understand exactly how age is verified without data ever leaving the device.

“The wording is pretty unclear and inconsistent even if you dig down to the k-ID privacy policy,” one Redditor speculated. “Seems that ID scans are uploaded to k-ID servers, they delete them, but they also mention using ‘trusted 3rd parties’ for verification, who may or may not delete it.” That user seemingly gave up on finding reassurances in either company’s privacy policies, noting that “everywhere along the chain it reads like ‘we don’t collect your data, we forward it to someone else… .’”

Discord did not immediately respond to Ars’ requests to comment directly on how age checks work without data leaving the device.

To better understand user concerns, Ars reviewed the privacy policies, noting that k-ID said its “facial age estimation” tool is provided by a Swiss company called Privately.

“We don’t actually see any faces that are processed via this solution,” k-ID’s policy said.

That part does seem vague, since Privately isn’t explicitly included in the “we” in that statement. Further down, however, the policy more clearly states that “neither k-ID nor its service providers collect any biometric information from users when they interact with the solution. k-ID only receives and stores the outcome of the age check process.” In that section, “service providers” seems to refer to partners like Discord, which integrate k-ID’s age checks, rather than third parties like Privately that actually conduct the age check.

Asked for comment, a k-ID spokesperson told Ars that “the Facial Age Estimation technology runs entirely on the user’s device in real time when they are performing the verification. That means there is no video or image transmitted, and the estimation happens locally. The only data to leave the device is a pass/fail of the age threshold which is what Discord receives (and some performance metrics that contain no personal data).”
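The flow k-ID describes can be sketched in a few lines. This is a hypothetical illustration only, assuming the architecture the spokesperson outlined: the camera frame and the model’s raw age estimate stay on the device, and the only personal-data-derived value that crosses the device boundary is a pass/fail of the age threshold, alongside non-personal performance metrics. The names `estimate_age` and `run_age_check` are invented for this sketch; the real on-device model is not public.

```python
# Hypothetical sketch of on-device age estimation as k-ID describes it.
# Nothing here reflects k-ID's actual code; it only models the data flow.
from dataclasses import dataclass

@dataclass
class AgeCheckResult:
    passed: bool       # the only personal-data-derived field that leaves the device
    latency_ms: float  # performance metric, contains no personal data

def estimate_age(face_image: bytes) -> float:
    """Placeholder for the on-device ML model (assumed to return years)."""
    # A real implementation would run a local neural network on the frame here.
    return 24.3

def run_age_check(face_image: bytes, threshold: float = 18.0) -> AgeCheckResult:
    estimated = estimate_age(face_image)  # raw estimate is never serialized or sent
    # Only the boolean outcome and a timing metric cross the device boundary.
    return AgeCheckResult(passed=estimated >= threshold, latency_ms=12.5)

result = run_age_check(b"<camera frame>")
# What the platform would receive: no image, no raw age, no biometrics.
payload = {"passed": result.passed, "latency_ms": result.latency_ms}
```

The privacy property claimed by k-ID hinges on `estimated` never appearing in `payload`: a platform integrating such a check learns an over/under-threshold bit, nothing more.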

K-ID’s spokesperson told Ars that no third parties store personal data shared during age checks.

“k-ID, does not receive personal data from Discord when performing age-assurance,” k-ID’s spokesperson said. “This is an intentional design choice grounded in data protection and data minimisation principles. There is no storage of personal data by k-ID or any third parties, regardless of the age assurance method used.”

Privately’s website offers a little more information on how on-device age estimation works, along with more reassurance that data won’t leave devices.

Privately’s services were designed to minimize data collection and prioritize anonymity to comply with the European Union’s General Data Protection Regulation, Privately noted. “No user biometric or personal data is captured or transmitted,” Privately’s website said, while bragging that “our secret sauce is our ability to run very performant models on the user device or user browser to implement a privacy-centric solution.”

The company’s privacy policy offers slightly more detail, noting that the company avoids relying on the cloud while running AI models on local devices.

“Our technology is built using on-device edge-AI that facilitates data minimization so as to maximise user privacy and data protection,” the privacy policy said. “The machine learning based technology that we use (for age estimation and safeguarding) processes user’s data on their own devices, thereby avoiding the need for us or for our partners to export user’s personal data onto any form of cloud services.”

Additionally, the policy said, “our technology solutions are built to operate mostly on user devices and to avoid sending any of the user’s personal data to any form of cloud service. For this we use specially adapted machine learning models that can be either deployed or downloaded on the user’s device. This avoids the need to transmit and retain user data outside the user device in order to provide the service.”

Finally, Privately explained that it also employs a “double blind” implementation to avoid knowing the origin of age estimation requests. That supposedly ensures that Privately only knows the result of age checks and cannot connect the result to a user on a specific platform.
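One way to picture such a “double blind” setup is an anonymizing hop between the platform and the vendor. The sketch below is an assumption-laden illustration, not Privately’s implementation: the platform replaces the user’s identity with a fresh opaque ID before forwarding, so the vendor learns only the outcome-relevant fields and cannot tie a result back to a user on a specific platform.

```python
# Hypothetical sketch of "double blind" age-check routing.
# All names (platform_forward, vendor_check) are illustrative.
import uuid

def vendor_check(opaque_id: str, check_request: dict) -> bool:
    # Vendor side: sees only an unlinkable ID and the minimal request fields,
    # never the user's identity or the requesting platform.
    return check_request["estimated_age"] >= 18

def platform_forward(user_id: str, check_request: dict, vendor) -> bool:
    opaque_id = str(uuid.uuid4())  # fresh per-request ID, unlinkable to user_id
    # The vendor call carries no user identity; the platform keeps the mapping
    # (if any) and the vendor keeps only the outcome.
    return vendor(opaque_id, check_request)

allowed = platform_forward("user-42", {"estimated_age": 21}, vendor_check)
```

Under this model, even a breach of the vendor’s logs would expose only opaque IDs and pass/fail outcomes, which is presumably the property Privately is advertising.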

Discord expects to lose users

Some Discord users may never be asked to verify their ages, even if they try to access age-restricted content. Savannah Badalich, Discord’s global head of product policy, told The Verge that Discord “is also rolling out an age inference model that analyzes metadata, like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.”

“If we have a high confidence that they are an adult, they will not have to go through the other age verification flows,” Badalich said.

Badalich confirmed that Discord is bracing for some users to leave Discord over the update but suggested that “we’ll find other ways to bring users back.”

On Reddit, Discord users complained that age verification is easy to bypass, forcing adults to share sensitive information without keeping kids away from harmful content. In Australia, where Discord’s policy first rolled out, some kids claimed that Discord never even tried to estimate their ages, while others found it easy to trick k-ID by using AI videos or altering their appearances to look older. A teen girl relied on fake eyelashes to do the trick, while one 13-year-old boy was estimated to be over 30 years old after scrunching his face to seem more wrinkled.

Badalich told The Verge that Discord doesn’t expect the tools to work perfectly but acts quickly to block workarounds, like teens using Death Stranding’s photo mode to skirt age gates. However, questions remain about the accuracy of Discord’s age estimation model in assessing minors’ ages, in particular.

It may be noteworthy that Privately only claims that its technology is “proven to be accurate to within 1.3 years, for 18-20-year-old faces, regardless of a customer’s gender or ethnicity.” But experts told Ars last year that flawed age-verification technology still frequently struggles to distinguish minors from adults, especially when differentiating between a 17- and 18-year-old, for example.

Perhaps notably, Discord’s prior scandal occurred after hackers stole government IDs that users shared as part of the appeal process in order to fix an incorrect age estimation. Appeals could remain the most vulnerable part of this process, The Verge’s report indicated. Badalich confirmed that a third-party vendor would be reviewing appeals, with the only reassurance for users seemingly that IDs shared during appeals “are deleted quickly—in most cases, immediately after age confirmation.”

On Reddit, Discord fans awaiting big changes remain upset. A disgruntled Discord user suggested that “corporations like Facebook and Discord, will implement easily passable, cheapest possible, bare minimum under the law verification, to cover their ass from a lawsuit,” while forcing users to trust that their age-check data is secure.

Another user joked that she’d be more willing to trust that selfies never leave a user’s device if Discord were “willing to pay millions to every user” whose “scan does leave a device.”

This story was updated on February 9 to clarify that government IDs are checked off-device.




UK to “encourage” Apple and Google to put nudity-blocking systems on phones

The push for device-level blocking comes after the UK implemented the Online Safety Act, a law requiring porn platforms and social media firms to verify users’ ages before letting them view adult content. The law can’t fully prevent minors from viewing porn, as many people use VPN services to get around the UK age checks. Government officials may view device-level detection of nudity as a solution to that problem, but such systems would raise concerns about user rights and the accuracy of the nudity detection.

Age-verification battles in multiple countries

Apple and Google both provide optional tools that let parents control what content their children can access. The companies could object to mandates on privacy grounds, as they have in other venues.

When Texas enacted an age-verification law for app stores, Apple and Google said they would comply but warned of risks to user privacy. A lobby group that represents Apple, Google, and other tech firms then sued Texas in an attempt to prevent the law from taking effect, saying it “imposes a broad censorship regime on the entire universe of mobile apps.”

There’s another age-verification battle in Australia, where the government decided to ban social media for users under 16. Companies said they would comply, although Reddit sued Australia on Friday in a bid to overturn the law.

Apple this year also fought a UK demand that it create a backdoor for government security officials to access encrypted data. The Trump administration claimed it convinced the UK to drop its demand, but the UK is reportedly still seeking an Apple backdoor.

In another case, the image-sharing website Imgur blocked access for UK users starting in September while facing an investigation over its age-verification practices.

Apple faced a backlash in 2021 over potential privacy violations when it announced a plan to have iPhones scan photos for child sexual abuse material (CSAM). Apple ultimately dropped the plan.



Pornhub is urging tech giants to enact device-based age verification


The company is pushing for an alternative way to keep minors from viewing porn.

In letters sent to Apple, Google, and Microsoft this week, Pornhub’s parent company urged the tech giants to support device-based age verification in their app stores and across their operating systems, WIRED has learned.

“Based on our real-world experience with existing age assurance laws, we strongly support the initiative to protect minors online,” reads the letter sent by Anthony Penhale, chief legal officer for Aylo, which owns Pornhub, Brazzers, Redtube, and YouPorn. “However, we have found site-based age assurance approaches to be fundamentally flawed and counterproductive.”

The letter adds that site-based age verification methods have “failed to achieve their primary objective: protecting minors from accessing age-inappropriate material online.” Aylo says device-based authentication is a better solution for this issue because once a viewer’s age is determined via phone or tablet, their age signal can be shared over its application programming interface (API) with adult sites.
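A minimal sketch of the device-based model Aylo is advocating might look like the following. This is purely illustrative: no such OS-level API exists today, the function names are invented, and a real scheme would use public-key signatures (so sites could verify without holding the device secret) rather than the shared HMAC key used here for brevity. The point is the shape of the data: the OS verifies age once, then hands sites a signed claim containing a single adult/minor bit and no identity.

```python
# Hypothetical sketch of a device-based age signal, as Aylo's letters propose.
# Shared-key HMAC is a simplification; a real design would use asymmetric keys.
import hashlib
import hmac
import json

DEVICE_KEY = b"device-secret"  # provisioned by the OS vendor in this sketch

def issue_age_signal(is_adult: bool) -> dict:
    """OS side: sign a minimal claim containing no identity data."""
    claim = json.dumps({"adult": is_adult}).encode()
    sig = hmac.new(DEVICE_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def verify_age_signal(signal: dict) -> bool:
    """Site side: check the signature, then read only the adult/minor bit."""
    expected = hmac.new(DEVICE_KEY, signal["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signal["sig"]):
        return False  # tampered or forged claim
    return json.loads(signal["claim"])["adult"]

signal = issue_age_signal(is_adult=True)
```

The appeal, from Aylo’s perspective, is that an adult site consuming such a signal never handles an ID or a face scan; verification happens once, at the device level, and every site downstream sees only the signed boolean.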

The letters were sent following the continued adoption of age verification laws in the US and UK, which require users to upload an ID or other personal documentation to verify that they are not a minor before viewing sexually explicit content; often this requires using third-party services. Currently, 25 US states have passed some form of ID verification, each with different provisions.

Pornhub has experienced an enormous dip in traffic as a result of its decision to pull out of most states that have enacted these laws. The platform was one of the few sites to comply with the new law in Louisiana but doing so caused traffic to drop by 80 percent. Similarly, since implementation of the Online Safety Act, Pornhub has lost nearly 80 percent of its UK viewership.

The company argues that it’s a privacy risk to leave age verification up to third-party sites and that people will simply seek adult content on platforms that don’t comply with the laws.

“We have seen an exponential surge in searches for alternate adult sites without age restrictions or safety standards at all,” says Alex Kekesi, vice president of brand and community at Pornhub.

She says she hopes the tech companies and Aylo are able to find common ground on the matter, especially given the recent passage of the Digital Age Assurance Act (AB 1043) in California. “This is a law that’s interesting because it gets it almost exactly right,” she says. Signed into law in October, it requires app store operators to authenticate user ages before download.

According to Google spokesperson Karl Ryan, “Google is committed to protecting kids online, including by developing and deploying new age assurance tools like our Credential Manager API that can be used by websites. We don’t allow adult entertainment apps on Google Play and would emphasize that certain high-risk services like Aylo will always need to invest in specific tools to meet their own legal and responsibility obligations.”

Microsoft declined to comment, but pointed WIRED to a recent policy recommendation post that said “age assurance should be applied at the service level, target specific design features that pose heightened risks, and enable tailored experiences for children.”

Apple likewise declined to comment and instead pointed WIRED to its child online safety report and noted that web content filters are turned on by default for every user under 18. A software update from June specified that Apple requires kids who are under 13 to have a kid account, which also includes “app restrictions enabled from the beginning.” Apple currently has no way of requiring every single website to integrate an API.

According to Pornhub, age verification laws have led to ineffective enforcement. “The sheer volume of adult content platforms has proven to be too challenging for governments worldwide to regulate at the individual site or platform level,” says Kekesi. Aylo claims device-based age verification that happens once, on a phone or computer, will preserve user privacy while prioritizing safety.

Recent studies by New York University and public policy nonprofit the Phoenix Center suggest that current age verification laws don’t work because people find ways to circumvent them, including by using VPNs and turning to sites that don’t regulate their content.

“Platform-based verification has been like Prohibition,” says Mike Stabile, director of public policy at the Free Speech Coalition. “We’re seeing consumer behavior reroute away from legal, compliant sites to foreign sites that don’t comply with any regulations or laws. Age verification laws have effectively rerouted a massive river of consumers to sites with pirated content, revenge porn, and child sex abuse material.” He claims that these laws “have been great for criminals, terrible for the legal adult industry.”

With age verification and the overall deanonymizing of the internet, these are issues that will now face nearly everyone, but especially those who are politically disfavored. Sex workers have been dealing with issues like censorship and surveillance online for a long time. One objective of Project 2025, MAGA’s playbook for President Trump’s second term, has been to “back door” a national ban on porn through state laws.

The current surge of child protection laws around the world is driving a significant change in how people engage with the internet, and is also impacting industries beyond porn, including gaming and social media. Starting December 10 in Australia, in accordance with the government’s social media ban, kids under 16 will be kicked off Facebook, Instagram, and Threads.

Ultimately, Stabile says that may be the point. In the US, “the advocates for these bills have largely fallen into two groups: faith-based organizations that don’t believe adult content should be legal, and age verification providers who stand to profit from a restricted internet.” The goal of faith-based organizations, he says, is to destabilize the adult industry and dissuade adults from using it, while the latter works to expand their market as much as possible, “even if that means getting in bed with right-wing censors.”

But the problem is that “even well-meaning legislators advancing these bills have little understanding of the internet,” Stabile adds. “It’s much easier to go after a political punching bag like Pornhub than it is Apple or Google. But if you’re not addressing the reality of the internet, if your legislation flies in the face of consumer behavior, you’re only going to end up creating systems that fail.”

Adult industry insiders I spoke to in August explained that the biggest misconception about the industry is that it is against self-regulation when that couldn’t be further from the truth. “Keeping minors off adult sites is a shared responsibility that requires a global solution,” Kekesi says. “Every phone, tablet, or computer should start as a kid-safe device. Only verified adults should unlock access to things like dating apps, gambling, or adult content.” In 2022, Pornhub created a chatbot that urges people searching for child sexual abuse content to seek counseling; the tool was introduced following a 2020 New York Times investigation that alleged the platform had monetized videos showing child abuse. Pornhub has since started releasing annual transparency reports and tightened its verification process of performers and for video uploads.

According to Politico, Google, Meta, OpenAI, Snap, and Pinterest all supported the California bill. Right now that law is limited to California, but Kekesi believes it can work as a template for other states.

“We obviously see that there’s kind of a path forward here,” she says.

This story originally appeared at WIRED.com


Pornhub is urging tech giants to enact device-based age verification


ChatGPT erotica coming soon with age verification, CEO says

On Tuesday, OpenAI CEO Sam Altman announced that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The change represents a shift in how OpenAI approaches content restrictions, which the company had loosened in February but then dramatically tightened after an August lawsuit from parents of a teen who died by suicide after allegedly receiving encouragement from ChatGPT.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman wrote in his post on X (formerly Twitter). The announcement follows OpenAI’s recent hint that it would allow developers to create “mature” ChatGPT applications once the company implements appropriate age verification and controls.

Altman explained that OpenAI had made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues” but acknowledged this approach made the chatbot “less useful/enjoyable to many users who had no mental health problems.” The CEO said the company now has new tools to better detect when users are experiencing mental distress, allowing OpenAI to relax restrictions in most cases.

Striking the right balance between freedom for adults and safety for all users has been difficult for OpenAI, which has vacillated between permissive and restrictive chat content controls over the past year.

In February, the company updated its Model Spec to allow erotica in “appropriate contexts.” But a March update made GPT-4o so agreeable that users complained about its “relentlessly positive tone.” By August, Ars reported on cases where ChatGPT’s sycophantic behavior had validated users’ false beliefs to the point of causing mental health crises, and news of the aforementioned suicide lawsuit hit not long after.

Aside from adjusting the behavioral outputs of its previous GPT-4o AI language model, new model changes have also created some turmoil among users. Since the launch of GPT-5 in early August, some users have been complaining that the new model feels less engaging than its predecessor, prompting OpenAI to bring back the older model as an option. Altman said the upcoming release will allow users to choose whether they want ChatGPT to “respond in a very human-like way, or use a ton of emoji, or act like a friend.”



VPN use soars in UK after age-verification laws go into effect

Also on Friday, the Windscribe VPN service posted a screenshot on X claiming to show a spike in new subscribers. The makers of the AdGuard VPN claimed that they have seen a 2.5X increase in install rates from the UK since Friday.

Nord Security, the company behind the NordVPN app, says it has seen a “1,000 percent increase in purchases” of subscriptions from the UK since the day before the new laws went into effect. “Such spikes in demand for VPNs are not unusual,” Laura Tyrylyte, Nord Security’s head of public relations, tells WIRED. She adds in a statement that “whenever a government announces an increase in surveillance, Internet restrictions, or other types of constraints, people turn to privacy tools.”

People living under repressive governments that impose extensive Internet censorship—like China, Russia, and Iran—have long relied on circumvention tools like VPNs and other technologies to maintain anonymity and access blocked content. But as countries that have long claimed to champion the open Internet and access to information, like the United States, begin considering or adopting age verification laws meant to protect children, the boundaries for protecting digital rights online quickly become extremely murky.

“There will be a large number of people who are using circumvention tech for a range of reasons” to get around age verification laws, the ACLU’s Kahn Gillmor says. “So then as a government you’re in a situation where either you’re obliging the websites to do this on everyone globally, that way legal jurisdiction isn’t what matters, or you’re encouraging people to use workarounds—which then ultimately puts you in the position of being opposed to censorship-circumvention tools.”

This story originally appeared on wired.com.



Pornhub prepares to block five more states rather than check IDs

“Uphill battle” —

The number of states blocked by Pornhub will soon nearly double.



Pornhub will soon be blocked in five more states as the adult site continues to fight what it considers privacy-infringing age-verification laws that require Internet users to provide an ID to access pornography.

On July 1, according to a blog post on the adult site announcing the impending block, Pornhub visitors in Indiana, Idaho, Kansas, Kentucky, and Nebraska will be “greeted by a video featuring” adult entertainer Cherie DeVille, “who explains why we had to make the difficult decision to block them from accessing Pornhub.”

Pornhub explained that—similar to blocks in Texas, Utah, Arkansas, Virginia, Montana, North Carolina, and Mississippi—the site refuses to comply with soon-to-be-enforceable age-verification laws in this new batch of states that allegedly put users at “substantial risk” of identity theft, phishing, and other harms.

Age-verification laws requiring adult site visitors to submit “private information many times to adult sites all over the Internet” normalize the unnecessary disclosure of personally identifiable information (PII), Pornhub argued, warning, “this is not a privacy-by-design approach.”

Pornhub does not outright oppose age verification but advocates for laws that require device-based age verification, which allows users to access adult sites after authenticating their identity on their devices. That’s “the best and most effective solution for protecting minors and adults alike,” Pornhub argued, because the age-verification technology is proven and less PII would be shared.

“Users would only get verified once, through their operating system, not on each age-restricted site,” Pornhub’s blog said, claiming that “this dramatically reduces privacy risks and creates a very simple process for regulators to enforce.”

A spokesperson for Pornhub-owner Aylo told Ars that “unfortunately, the way many jurisdictions worldwide have chosen to implement age verification is ineffective, haphazard, and dangerous.”

“Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy,” Aylo’s spokesperson told Ars. “Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.”

Age-verification laws are harmful, Pornhub says

Pornhub’s big complaint with current age-verification laws is that these laws are hard to enforce and seem to make it riskier than ever to visit an adult site.

“Since age verification software requires users to hand over extremely sensitive information, it opens the door for the risk of data breaches,” Pornhub’s blog said. “Whether or not your intentions are good, governments have historically struggled to secure this data. It also creates an opportunity for criminals to exploit and extort people through phishing attempts or fake [age verification] processes, an unfortunate and all too common practice.”

Over the past few years, the risk of identity theft or stolen PII on both widely used and smaller niche adult sites has been well-documented.

Hundreds of millions of people were impacted by major leaks exposing PII shared with popular adult sites like Adult Friend Finder and Brazzers in 2016, while likely tens of thousands of users were targeted on eight poorly secured adult sites in 2018. Niche and free sites have also been vulnerable to attacks, including millions collectively exposed through breaches of fetish porn site Luscious in 2019 and MyFreeCams in 2021.

And those are just the big breaches that make headlines. In 2019, Kaspersky Lab reported that malware targeting online porn account credentials more than doubled in 2018, and researchers analyzing 22,484 pornography websites estimated that 93 percent were leaking user data to a third party.

That’s why Pornhub argues that, as states have passed age-verification laws requiring ID, they’ve “introduced harm” by redirecting visitors to adult sites that have fewer privacy protections and worse security, allegedly exposing users to more threats.

As an example, Pornhub reported, traffic to Pornhub in Louisiana “dropped by approximately 80 percent” after the state’s age-verification law passed. That allegedly showed not just how few users were willing to show an ID to access the popular platform, but also how “very easily” users could simply move to “pirate, illegal, or other non-compliant sites that don’t ask visitors to verify their age.”

Pornhub has continued to argue that states passing laws like Louisiana’s cannot effectively enforce the laws and are simply shifting users to make riskier choices when accessing porn.

“The Louisiana law and other copycat state-level laws have no regulator, only civil liability, which results in a flawed enforcement regime, effectively making it an option for platform operators to comply,” Pornhub’s blog said. As one of the world’s most popular adult platforms, Pornhub would surely be targeted for enforcement if found to be non-compliant, while smaller adult sites perhaps plagued by security risks and disincentivized to check IDs would go unregulated, the thinking goes.

Aylo’s spokesperson shared 2023 Similarweb data with Ars showing that sites complying with age-verification laws in Virginia, including Pornhub and xHamster, lost substantial traffic while seven non-compliant sites saw a sharp uptick in traffic. Similar trends were observed in Google Trends data for Utah and Mississippi, while market shares were seemingly largely maintained in California, a state not yet checking IDs to access adult sites.



Florida braces for lawsuits over law banning kids from social media


On Monday, Florida became the first state to ban kids under 14 from social media without parental permission. It appears likely that the law—considered one of the most restrictive in the US—will face significant legal challenges, however, before taking effect on January 1.

Under HB 3, apps like Instagram, Snapchat, or TikTok would need to verify the ages of users, then delete any accounts for users under 14 when parental consent is not granted. Companies that “knowingly or recklessly” fail to block underage users risk fines of up to $10,000 in damages to anyone suing on behalf of child users. They could also be liable for up to $50,000 per violation in civil penalties.

In a statement, Florida governor Ron DeSantis said the “landmark law” gives “parents a greater ability to protect their children” from a variety of social media harms. Florida House Speaker Paul Renner, who spearheaded the law, explained some of that harm, saying that passing HB 3 was critical because “the Internet has become a dark alley for our children where predators target them and dangerous social media leads to higher rates of depression, self-harm, and even suicide.”

But tech groups critical of the law have suggested that they are already considering suing to block it from taking effect.

In a statement provided to Ars, the Computer & Communications Industry Association (CCIA), a nonprofit opposing the law, said that while CCIA “supports enhanced privacy protections for younger users online,” it is concerned that “any commercially available age verification method that may be used by a covered platform carries serious privacy and security concerns for users while also infringing upon their First Amendment protections to speak anonymously.”

“This law could create substantial obstacles for young people seeking access to online information, a right afforded to all Americans regardless of age,” Khara Boender, CCIA’s state policy director, warned. “It’s foreseeable that this legislation may face legal opposition similar to challenges seen in other states.”

Carl Szabo, vice president and general counsel for NetChoice—a trade association with members including Meta, TikTok, and Snap—went even further, warning that Florida’s “unconstitutional law will protect exactly zero Floridians.”

Szabo suggested that there are “better ways to keep Floridians, their families, and their data safe and secure online without violating their freedoms.” Democratic state house representative Anna Eskamani opposed the bill, arguing that “instead of banning social media access, it would be better to ensure improved parental oversight tools, improved access to data to stop bad actors, alongside major investments in Florida’s mental health systems and programs.”

NetChoice expressed “disappointment” that DeSantis agreed to sign a law requiring an “ID for the Internet” after “his staunch opposition to this idea both on the campaign trail” and when vetoing a prior version of the bill.

“HB 3 in effect will impose an ‘ID for the Internet’ on any Floridian who wants to use an online service—no matter their age,” Szabo said, warning of invasive data collection needed to verify that a user is under 14 or a parent or guardian of a child under 14.

“This level of data collection will put Floridians’ privacy and security at risk, and it violates their constitutional rights,” Szabo said, noting that in court rulings in Arkansas, California, and Ohio over similar laws, “each of the judges noted the similar laws’ constitutional and privacy problems.”
