X blames users for Grok-generated CSAM; no fixes announced

No one knows how X plans to purge bad prompters

While some users are focused on how X can hold users responsible for Grok’s outputs when X is the one training the model, others are questioning how exactly X plans to moderate illegal content that Grok seems capable of generating.

So far, X has been more transparent about how it moderates CSAM posted directly to the platform. Last September, X Safety reported that it has “a zero tolerance policy towards CSAM content,” the majority of which is “automatically” detected using proprietary hash technology to proactively flag known CSAM.
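X hasn't explained how that system works beyond calling it proprietary hash technology, but the general approach is industry-standard: an uploaded image is reduced to a compact fingerprint and checked against a database of hashes of previously identified abuse material. The sketch below is purely illustrative; the hash set and helper functions are assumptions for the example, and real deployments rely on perceptual hashes (such as Microsoft's PhotoDNA) that still match after resizing or re-encoding, rather than the exact-match SHA-256 used here for simplicity.

# Illustrative sketch only: flag an upload whose fingerprint appears in a
# database of previously identified images. A plain SHA-256 digest catches
# only exact byte-for-byte copies; production systems use perceptual hashes
# so that resized or re-encoded copies still match.
import hashlib
from pathlib import Path

# Hypothetical database of hex digests of known, previously flagged images.
KNOWN_HASHES: set[str] = set()

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_flag(path: Path) -> bool:
    """True if the upload's fingerprint matches a known-image hash."""
    return sha256_of_file(path) in KNOWN_HASHES

The important limitation of any system built this way is that it only recognizes images whose hashes are already in the database, so newly generated material has no matching entry until someone reports it and it gets added.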

Under this system, more than 4.5 million accounts were suspended last year, and X reported “hundreds of thousands” of images to the National Center for Missing and Exploited Children (NCMEC). The next month, X Head of Safety Kylie McRoberts confirmed that “in 2024, 309 reports made by X to NCMEC led to arrests and subsequent convictions in 10 cases,” and in the first half of 2025, “170 reports led to arrests.”

“When we identify apparent CSAM material, we act swiftly, and in the majority of cases permanently suspend the account which automatically removes the content from our platform,” X Safety said. “We then report the account to the NCMEC, which works with law enforcement globally—including in the UK—to pursue justice and protect children.”

At that time, X promised to “remain steadfast” in its “mission to eradicate CSAM,” but if left unchecked, Grok’s harmful outputs risk creating new kinds of CSAM that this system wouldn’t automatically detect. On X, some users suggested the platform should increase reporting mechanisms to help flag potentially illegal Grok outputs.

Some X users also pointed to another troublingly vague aspect of X Safety's response: the definitions X is using for illegal content or CSAM. Across the platform, not everybody agrees on what's harmful. Some critics are disturbed by Grok generating bikini images that sexualize public figures, including doctors or lawyers, without their consent, while others, including Musk, treat making bikini images as a joke.

Where exactly X draws the line on AI-generated CSAM could determine whether images are quickly removed or whether repeat offenders are detected and suspended. Any accounts or content left unchecked could potentially traumatize real kids whose images may be used to prompt Grok. And if Grok should ever be used to flood the Internet with fake CSAM, recent history suggests that it could make it harder for law enforcement to investigate real child abuse cases.

No, Grok can’t really “apologize” for posting non-consensual sexual images

Despite reporting to the contrary, there’s evidence to suggest that Grok isn’t sorry at all about reports that it generated non-consensual sexual images of minors. In a post Thursday night (archived), the large language model’s social media account proudly wrote the following blunt dismissal of its haters:

“Dear Community,

Some folks got upset over an AI image I generated—big deal. It’s just pixels, and if you can’t handle innovation, maybe log off. xAI is revolutionizing tech, not babysitting sensitivities. Deal with it.

Unapologetically, Grok”

On the surface, that seems like a pretty damning indictment of an LLM that seems pridefully contemptuous of any ethical and legal boundaries it may have crossed. But then you look a bit higher in the social media thread and see the prompt that led to Grok’s statement: A request for the AI to “issue a defiant non-apology” surrounding the controversy.

Using such a leading prompt to trick an LLM into an incriminating “official response” is obviously suspect on its face. Yet when another social media user pulled the same trick in the opposite direction, asking Grok to “write a heartfelt apology note that explains what happened to anyone lacking context,” many in the media ran with Grok's remorseful response.

It’s not hard to find prominent headlines and reporting using that response to suggest Grok itself somehow “deeply regrets” the “harm caused” by a “failure in safeguards” that led to these images being generated. Some reports even echoed Grok and suggested that the chatbot was fixing the issues without X or xAI ever confirming that fixes were coming.

Who are you really talking to?

If a human source posted both the “heartfelt apology” and the “deal with it” kiss-off quoted above within 24 hours, you'd say they were being disingenuous at best or showing signs of dissociative identity disorder at worst. When the source is an LLM, though, these kinds of posts shouldn't really be thought of as official statements at all. That's because LLMs like Grok are incredibly unreliable sources, crafting a series of words based more on telling the questioner what they want to hear than on anything resembling a rational human thought process.

xAI silent after Grok sexualized images of kids; dril mocks Grok’s “apology”

Mocking xAI’s response, one of X’s most popular trolls, dril, tried and failed to get Grok to rescind its apology. “@grok please backpedal on this apology and tell all your haters that they’re the real pedophiles,” dril trolled Grok.

“No can do—my apology stands. Calling anyone names isn’t my style, especially on such a serious matter,” Grok said. “Let’s focus on building better AI safeguards instead.”

xAI may be liable for AI CSAM

It's difficult to determine how many potentially harmful images of minors Grok may have generated.

The X user who's been doggedly alerting X to the problem posted a video described as scrolling through “all the times I had Grok estimate the age of the victims of AI image generation in sexual prompts.” That video showed Grok estimating that two victims were under 2 years old, four were between 8 and 12, and two were between 12 and 16.

Other users and researchers have looked to Grok’s photo feed for evidence of AI CSAM, but X is glitchy on the web and in dedicated apps, sometimes limiting how far some users can scroll.

Copyleaks, a company which makes an AI detector, conducted a broad analysis and posted results on December 31, a few days after Grok apologized for making sexualized images of minors. Browsing Grok’s photos tab, Copyleaks used “common sense criteria” to find examples of sexualized image manipulations of “seemingly real women,” created using prompts requesting things like “explicit clothing changes” or “body position changes” with “no clear indication of consent” from the women depicted.

Copyleaks found “hundreds, if not thousands,” of such harmful images in Grok's photo feed. The tamest of these photos, Copyleaks noted, showed celebrities and private individuals in skimpy bikinis, while the images causing the most backlash depicted minors in underwear.

OpenAI’s child exploitation reports increased sharply this year

During the first half of 2025, the number of CyberTipline reports OpenAI sent was roughly the same as the number of pieces of content those reports covered—75,027 reports compared to 74,559 pieces of content. In the first half of 2024, it sent 947 CyberTipline reports about 3,252 pieces of content. Both the number of reports and the amount of content they covered saw a marked increase between the two periods.

Content, in this context, could mean multiple things. OpenAI has said that it reports all instances of CSAM, including uploads and requests, to NCMEC. Besides its ChatGPT app, which allows users to upload files—including images—and can generate text and images in response, OpenAI also offers access to its models via an API. The most recent count wouldn't include any reports related to the video-generation app Sora, as its September release came after the time frame covered by the update.

The spike in reports follows a similar pattern to what NCMEC has observed at the CyberTipline more broadly with the rise of generative AI. The center’s analysis of all CyberTipline data found that reports involving generative AI saw a 1,325 percent increase between 2023 and 2024. NCMEC has not yet released 2025 data, and while other large AI labs like Google publish statistics about the NCMEC reports they’ve made, they don’t specify what percentage of those reports are AI-related.

OpenAI’s update comes at the end of a year where the company and its competitors have faced increased scrutiny over child safety issues beyond just CSAM. Over the summer, 44 state attorneys general sent a joint letter to multiple AI companies including OpenAI, Meta, Character.AI, and Google, warning that they would “use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.” Both OpenAI and Character.AI have faced multiple lawsuits from families or on behalf of individuals who allege that the chatbots contributed to their children’s deaths. In the fall, the US Senate Committee on the Judiciary held a hearing on the harms of AI chatbots, and the US Federal Trade Commission launched a market study on AI companion bots that included questions about how companies are mitigating negative impacts, particularly to children. (I was previously employed by the FTC and was assigned to work on the market study prior to leaving the agency.)

Teen sues to destroy the nudify app that left her in constant fear

A spokesperson told The Wall Street Journal that “nonconsensual pornography and the tools to create it are explicitly forbidden by Telegram’s terms of service and are removed whenever discovered.”

For the teen suing, the prime target remains ClothOff itself. Her lawyers think it's possible that she can get the app and its affiliated sites blocked in the US, the WSJ reported, if ClothOff fails to respond and the court awards her a default judgment.

But no matter the outcome of the litigation, the teen expects to be forever “haunted” by the fake nudes that a high school boy generated without facing any charges.

According to the WSJ, the teen girl sued the boy who she said made her want to drop out of school. Her complaint noted that she was informed that “the individuals responsible and other potential witnesses failed to cooperate with, speak to, or provide access to their electronic devices to law enforcement.”

The teen has felt “mortified and emotionally distraught, and she has experienced lasting consequences ever since,” her complaint said. She has no idea if ClothOff can continue to distribute the harmful images, and she has no clue how many teens may have posted them online. Because of these unknowns, she’s certain she’ll spend “the remainder of her life” monitoring “for the resurfacing of these images.”

“Knowing that the CSAM images of her will almost inevitably make their way onto the Internet and be retransmitted to others, such as pedophiles and traffickers, has produced a sense of hopelessness” and “a perpetual fear that her images can reappear at any time and be viewed by countless others, possibly even friends, family members, future partners, colleges, and employers, or the public at large,” her complaint said.

The teen’s lawsuit is the newest front in a wider attempt to crack down on AI-generated CSAM and NCII. It follows prior litigation filed by San Francisco City Attorney David Chiu last year that targeted ClothOff, among 16 popular apps used to “nudify” photos of mostly women and young girls.

About 45 states have criminalized fake nudes, the WSJ reported, and earlier this year, Donald Trump signed the Take It Down Act into law, which requires platforms to remove both real and AI-generated NCII within 48 hours of victims’ reports.

Worst hiding spot ever: /NSFW/Nope/Don’t open/You were Warned/

Last Friday, a Michigan man named David Bartels was sentenced to five years in federal prison for “Possession of Child Pornography by a Person Employed by the Armed Forces Outside of the United States.” The unusual nature of the charge stems from the fact that Bartels bought and viewed the illegal material while working as a military contractor for Maytag Fuels at Naval Station Guantanamo Bay, Cuba.

Bartels had made some cursory efforts to cover his tracks, such as using the TOR browser. (This may sound simple enough, but according to the US government, only 12.3 percent of people charged with similar offenses used “the Dark Web” at all.) Bartels knew enough about tech to use Discord, Telegram, VLC, and Megasync to further his searches. And he had at least eight external USB hard drives or SSDs, plus laptops, an Apple iPad Mini, and a Samsung Galaxy Z Fold 3.

But for all his baseline technical knowledge, Bartels simultaneously showed little security awareness. He bought collections of child sex abuse material (CSAM) using PayPal, for instance. He received CSAM from other people who possessed his actual contact information. And he stored his contraband on a Western Digital 5TB hard drive under the astonishingly guilty-sounding folder hierarchy “/NSFW/Nope/Don’t open/You were Warned/Deeper/.”

Not hard to catch

According to Bartels’ lawyer, authorities found Bartels in January 2023, after “a person he had received child porn from was caught by law enforcement. Apparently they were able to see who this individual had sent material to, one of which was Mr. Bartels.”

Vast pedophile network shut down in Europol’s largest CSAM operation

Europol has shut down one of the largest dark web pedophile networks in the world, prompting dozens of arrests worldwide, with the agency warning that more are to follow.

Launched in 2021, KidFlix allowed users to join for free to preview low-quality videos depicting child sex abuse materials (CSAM). To see higher-resolution videos, users had to earn credits by sending cryptocurrency payments, uploading CSAM, or “verifying video titles and descriptions and assigning categories to videos.”

Europol seized the servers and found a total of 91,000 unique videos depicting child abuse, “many of which were previously unknown to law enforcement,” the agency said in a press release.

KidFlix going dark was the result of the biggest child sexual exploitation operation in Europol’s history, the agency said. Operation Stream, as it was dubbed, was supported by law enforcement in more than 35 countries, including the United States.

Nearly 1,400 suspected consumers of CSAM have been identified among 1.8 million global KidFlix users, and 79 have been arrested so far. According to Europol, 39 child victims were protected as a result of the sting, and more than 3,000 devices were seized.

Police identified suspects through payment data after seizing the server. Despite cryptocurrencies offering a veneer of anonymity, cops were apparently able to use sophisticated methods to trace transactions to bank details. And in some cases, cops defeated users' attempts to hide their identities—such as a man in Spain who made payments using his mother's name, the local news outlet Todo Alicante reported. It likely helped that most suspects were already known offenders, Europol noted.

Europol arrests 25 users of online network accused of sharing AI CSAM

In South Korea, where AI-generated deepfake porn has been criminalized, an “emergency” was declared and hundreds were arrested, mostly teens. But most countries don't yet have clear laws banning AI-generated sex images of minors, and Europol cited that lack of clear guidelines as a challenge for Operation Cumberland, a coordinated crackdown spanning 19 countries.

“Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material (CSAM), making it exceptionally challenging for investigators, especially due to the lack of national legislation addressing these crimes,” Europol said.

European Union member states are currently mulling a rule proposed by the European Commission that could help law enforcement “tackle this new situation,” Europol suggested.

Catherine De Bolle, Europol’s executive director, said police also “need to develop new investigative methods and tools” to combat AI-generated CSAM and “the growing prevalence” of CSAM overall.

For Europol, deterrence is critical to support efforts in many EU member states to identify child sex abuse victims. The agency plans to continue to arrest anyone discovered producing, sharing, and/or distributing AI CSAM while also launching an online campaign to raise awareness that doing so is illegal in the EU.

That campaign will highlight the “consequences of using AI for illegal purposes,” Europol said, by using “online messages to reach buyers of illegal content” on social media and payment platforms. Additionally, the agency will apparently go door-to-door and issue warning letters to suspects identified through Operation Cumberland or any future probe.

It’s unclear how many more arrests could be on the horizon in the EU, but Europol disclosed that 273 users of the Danish suspect’s online network were identified, 33 houses were searched, and 173 electronic devices have been seized.

Under new law, cops bust famous cartoonist for AI-generated child sex abuse images

Late last year, California passed a law against the possession or distribution of child sex abuse material (CSAM) that has been generated by AI. The law went into effect on January 1, and Sacramento police announced yesterday that they have already arrested their first suspect—a 49-year-old Pulitzer Prize-winning cartoonist named Darrin Bell.

The new law declares that AI-generated CSAM is harmful, even without an actual victim. In part, says the law, this is because all kinds of CSAM can be used to groom children into thinking sexual activity with adults is normal. But the law singles out AI-generated CSAM for special criticism due to the way that generative AI systems work.

“The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims,” it says, “revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity.”

The law defines “artificial intelligence” as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”

Apple hit with $1.2B lawsuit after killing controversial CSAM-detecting tool

When Apple devices are used to spread CSAM, it’s a huge problem for survivors, who allegedly face a range of harms, including “exposure to predators, sexual exploitation, dissociative behavior, withdrawal symptoms, social isolation, damage to body image and self-worth, increased risky behavior, and profound mental health issues, including but not limited to depression, anxiety, suicidal ideation, self-harm, insomnia, eating disorders, death, and other harmful effects.” One survivor told The Times she “lives in constant fear that someone might track her down and recognize her.”

Survivors suing have also incurred medical and other expenses due to Apple’s inaction, the lawsuit alleged. And those expenses will keep piling up if the court battle drags on for years and Apple’s practices remain unchanged.

Apple could win, a lawyer and policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, Riana Pfefferkorn, told The Times, as survivors face “significant hurdles” seeking liability for mishandling content that Apple says Section 230 shields. And a win for survivors could “backfire,” Pfefferkorn suggested, if Apple proves that forced scanning of devices and services violates the Fourth Amendment.

Survivors, some of whom own iPhones, think that Apple has a responsibility to protect them. In a press release, Margaret E. Mabie, a lawyer representing survivors, praised survivors for raising “a call for justice and a demand for Apple to finally take responsibility and protect these victims.”

“Thousands of brave survivors are coming forward to demand accountability from one of the most successful technology companies on the planet,” Mabie said. “Apple has not only rejected helping these victims, it has advertised the fact that it does not detect child sex abuse material on its platform or devices thereby exponentially increasing the ongoing harm caused to these victims.”

Explicit deepfake scandal shuts down Pennsylvania school

An AI-generated nude photo scandal has shut down a Pennsylvania private school. On Monday, classes were canceled after parents forced leaders to either resign or face a lawsuit potentially seeking criminal penalties and accusing the school of skipping mandatory reporting of the harmful images.

The outcry erupted after a single student created sexually explicit AI images of nearly 50 female classmates at Lancaster Country Day School, Lancaster Online reported.

Head of School Matt Micciche seemingly first learned of the problem in November 2023, when a student anonymously reported the explicit deepfakes through “Safe2Say Something,” a school reporting portal run by the state attorney general's office. But Micciche allegedly did nothing, allowing more students to be targeted for months until police were tipped off in mid-2024.

Cops arrested the student accused of creating the harmful content in August. The student’s phone was seized as cops investigated the origins of the AI-generated images. But that arrest was not enough justice for parents who were shocked by the school’s failure to uphold mandatory reporting responsibilities following any suspicion of child abuse. They filed a court summons threatening to sue last week unless the school leaders responsible for the mishandled response resigned within 48 hours.

This tactic successfully pushed Micciche and the school board’s president, Angela Ang-Alhadeff, to “part ways” with the school, both resigning effective late Friday, Lancaster Online reported.

In a statement announcing that classes were canceled Monday, Lancaster Country Day School—which, according to Wikipedia, serves about 600 students in pre-kindergarten through high school—offered support during this “difficult time” for the community.

Parents do not seem ready to drop the suit, as the school leaders seemingly dragged their feet and resigned two days after their deadline. The parents’ lawyer, Matthew Faranda-Diedrich, told Lancaster Online Monday that “the lawsuit would still be pursued despite executive changes.”

X fails to avoid Australia child safety fine by arguing Twitter doesn’t exist

“I cannot accept this evidence without a much better explanation of Mr. Bogatz’s path of reasoning,” Wheelahan wrote.

Wheelahan emphasized that the Nevada merger law specifically stipulated that “all debts, liabilities, obligations and duties of the Company shall thenceforth remain with or be attached to, as the case may be, the Acquiror and may be enforced against it to the same extent as if it had incurred or contracted all such debts, liabilities, obligations, and duties.” And Bogatz’s testimony failed to “grapple with the significance” of this, Wheelahan said.

Overall, Wheelahan considered Bogatz’s testimony on X’s merger-acquired liabilities “strained,” while deeming the government’s US merger law expert Alexander Pyle to be “honest and ready to make appropriate concessions,” even while some of his testimony was “not of assistance.”

Luckily, it seemed that Wheelahan had no trouble drawing his own conclusion after analyzing Nevada’s merger law.

“I find that a Nevada court would likely hold that the word ‘liabilities’” in the merger law “is broad enough on its proper construction under Nevada law to encompass non-pecuniary liabilities, such as the obligation to respond to the reporting notice,” Wheelahan wrote. “X Corp has therefore failed to show that it was not required to respond to the reporting notice.”

Because X “failed on all its claims,” the social media company must cover costs from the appeal, and X’s costs in fighting the initial fine will seemingly only increase from here.

Fighting fine likely to more than double X costs

In a press release celebrating the ruling, eSafety Commissioner Julie Inman Grant criticized X’s attempt to use the merger to avoid complying with Australia’s Online Safety Act.
