

Largest deepfake porn site shuts down forever

The shuttering of Mr. Deepfakes won’t solve the problem of deepfakes, though. In 2022, the number of deepfakes skyrocketed as AI technology made the synthetic NCII appear more realistic than ever, prompting the FBI to warn the public in 2023 that the fake content was increasingly being used in sextortion schemes. But the immediate measures taken to stop the spread had little impact. For example, in response to pressure to make the fake NCII harder to find, Google started downranking explicit deepfakes in search results but refused to demote platforms like Mr. Deepfakes unless Google received an unspecified “high volume of removals for fake explicit imagery.”

According to researchers, Mr. Deepfakes—a real person who remains anonymous but reportedly is a 36-year-old hospital worker in Toronto—created the engine driving this spike. His DeepFaceLab quickly became “the leading deepfake software, estimated to be the software behind 95 percent of all deepfake videos and has been replicated over 8,000 times on GitHub,” researchers found. For casual users, his platform hosted videos that could be purchased, usually priced above $50 if they were deemed realistic, while more motivated users relied on forums to make requests or enhance their own deepfake skills to become creators.

Mr. Deepfakes’ illegal trade began on Reddit but migrated to its own platform after a ban in 2018. There, thousands of deepfake creators shared technical knowledge, with the Mr. Deepfakes site forums eventually becoming “the only viable source of technical support for creating sexual deepfakes,” researchers noted last year.

Having migrated once before, this community seems likely to find a new platform to continue generating the illicit content, possibly re-emerging under a new name, since Mr. Deepfakes seemingly wants out of the spotlight. Back in 2023, researchers estimated that the platform had more than 250,000 members, many of whom may quickly seek out, or even try to build, a replacement.

Further increasing the likelihood that Mr. Deepfakes’ reign of terror isn’t over, the DeepFaceLab GitHub repository—which was archived in November and can no longer be edited—remains available for anyone to copy and use.

404 Media reported that many Mr. Deepfakes members have already connected on Telegram, where synthetic NCII is also reportedly frequently traded. Hany Farid, a professor at UC Berkeley who is a leading expert on digitally manipulated images, told 404 Media that “while this takedown is a good start, there are many more just like this one, so let’s not stop here.”



X ignores revenge porn takedown requests unless DMCA is used, study says

Why did the study target X?

The University of Michigan research team worried that their experiment of posting AI-generated NCII on X might cross ethical lines.

They chose to conduct the study on X because they deduced it was “a platform where there would be no volunteer moderators and little impact on paid moderators, if any,” who viewed their AI-generated nude images.

X’s transparency report seems to suggest that most reported non-consensual nudity is actioned by human moderators, but researchers reported that their flagged content was never actioned without a DMCA takedown.

Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.

“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”

These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to minimize potential harm, researchers said.

“Our study may contribute to greater transparency in content moderation processes” related to NCII “and may prompt social media companies to invest additional efforts to combat deepfake” NCII, researchers said. “In the long run, we believe the benefits of this study far outweigh the risks.”

According to the researchers, X was given time to automatically detect and remove the content but failed to do so. It’s possible, the study suggested, that X’s decision to allow explicit content starting in June made it harder to detect NCII, as some experts had predicted.

To fix the problem, researchers suggested that both “greater platform accountability” and “legal mechanisms to ensure that accountability” are needed—as is much more research on other platforms’ mechanisms for removing NCII.

“A dedicated” NCII law “must clearly define victim-survivor rights and impose legal obligations on platforms to act swiftly in removing harmful content,” the study concluded.
