child sex abuse

Europol arrests 25 users of online network accused of sharing AI CSAM

In South Korea, where AI-generated deepfake porn has been criminalized, an “emergency” was declared and hundreds were arrested, mostly teens. But most countries don’t yet have clear laws banning AI-generated sexual images of minors, and Europol cited that gap as a challenge for Operation Cumberland, a coordinated crackdown spanning 19 countries that lack clear guidelines.

“Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material (CSAM), making it exceptionally challenging for investigators, especially due to the lack of national legislation addressing these crimes,” Europol said.

European Union member states are currently mulling a rule proposed by the European Commission that could help law enforcement “tackle this new situation,” Europol suggested.

Catherine De Bolle, Europol’s executive director, said police also “need to develop new investigative methods and tools” to combat AI-generated CSAM and “the growing prevalence” of CSAM overall.

For Europol, deterrence is critical to support efforts in many EU member states to identify child sex abuse victims. The agency plans to continue to arrest anyone discovered producing, sharing, and/or distributing AI CSAM while also launching an online campaign to raise awareness that doing so is illegal in the EU.

That campaign will highlight the “consequences of using AI for illegal purposes,” Europol said, by using “online messages to reach buyers of illegal content” on social media and payment platforms. Additionally, the agency will apparently go door-to-door and issue warning letters to suspects identified through Operation Cumberland or any future probe.

It’s unclear how many more arrests could be on the horizon in the EU, but Europol disclosed that 273 users of the online network, allegedly run by a Danish suspect, were identified, 33 houses were searched, and 173 electronic devices were seized.

FBI: After dad allegedly tried to shoot Trump, son arrested for child porn

family matters —

“Hundreds” of files found on SD card, FBI agent says.

Oran Routh has had an eventful few weeks.

In August, he moved into a two-bed, two-bath rental unit on the second floor of a building in Greensboro, North Carolina.

On September 15, his father, Ryan Routh, was found in the bushes of the sixth hole of Trump International Golf Club with a scope and a rifle, apparently in a bid to assassinate Donald Trump, who was golfing that day.

As part of the ensuing federal investigation, the FBI raided the junior Routh’s apartment on September 21. A Starbucks bag labeled “Oran” still sat on a dresser in one of the bedrooms while agents searched the home and Routh’s person, looking for any evidence related to his father’s actions. In the course of the search, they found one Galaxy Note 9 on Oran’s person and another Galaxy Note 9 in a laptop bag.

On September 22, the FBI obtained a warrant to search the devices. The investigation of Oran Routh quickly moved in a different direction after the FBI said that it found “hundreds” of videos depicting the sexual abuse of prepubescent girls on an SD card in the Note 9 from the laptop bag.

The other Note 9, the one Oran had on him during the raid, contained not just downloaded files but also “chats from a messaging application that, based on my training and experience, is commonly used by individuals who distribute and receive child pornography,” an FBI agent said in an affidavit. (The messaging app is not named.)

According to the agent, whoever used the phone had been chatting as recently as July with someone on the Internet who sold access to various cloud storage links. When asked for a sample of the linked material, the seller sent over two files depicting the abuse of young girls.

On September 23, Routh was charged in North Carolina federal court with both receipt and possession of child pornography. According to the court docket, Routh was arrested today.

Nonprofit scrubs illegal content from controversial AI training dataset

After Stanford Internet Observatory researcher David Thiel found links to child sexual abuse materials (CSAM) in an AI training dataset that had tainted image generators, the controversial dataset was immediately taken down in 2023.

Now, the LAION (Large-scale Artificial Intelligence Open Network) team has released a scrubbed version of the LAION-5B dataset called Re-LAION-5B and claimed that it “is the first web-scale, text-link to images pair dataset to be thoroughly cleaned of known links to suspected CSAM.”

To scrub the dataset, LAION partnered with the Internet Watch Foundation (IWF) and the Canadian Center for Child Protection (C3P) to remove 2,236 links that matched with hashed images in the online safety organizations’ databases. Removals include all the links flagged by Thiel, as well as content flagged by LAION’s partners and other watchdogs, like Human Rights Watch, which warned of privacy issues after finding photos of real kids included in the dataset without their consent.

In his study, Thiel warned that “the inclusion of child abuse material in AI model training data teaches tools to associate children in illicit sexual activity and uses known child abuse images to generate new, potentially realistic child abuse content.”

Thiel urged LAION and other researchers scraping the Internet for AI training data to adopt a new safety standard that better filters out not just CSAM, but any explicit imagery that could be combined with photos of children to generate CSAM. (Recently, the US Department of Justice pointedly said that “CSAM generated by AI is still CSAM.”)

While LAION’s new dataset won’t alter models that were trained on the prior dataset, LAION claimed that Re-LAION-5B sets “a new safety standard for cleaning web-scale image-link datasets.” Where illegal content previously “slipped through” LAION’s filters, the researchers have now developed an improved system “for identifying and removing illegal content,” LAION’s blog said.

Thiel told Ars that he would agree that LAION has set a new safety standard with its latest release, but “there are absolutely ways to improve it.” However, “those methods would require possession of all original images or a brand new crawl,” and LAION’s post made clear that it only utilized image hashes and did not conduct a new crawl that could have risked pulling in more illegal or sensitive content. (On Threads, Thiel shared more in-depth impressions of LAION’s effort to clean the dataset.)

LAION warned that “current state-of-the-art filters alone are not reliable enough to guarantee protection from CSAM in web scale data composition scenarios.”

“To ensure better filtering, lists of hashes of suspected links or images created by expert organizations (in our case, IWF and C3P) are suitable choices,” LAION’s blog said. “We recommend research labs and any other organizations composing datasets from the public web to partner with organizations like IWF and C3P to obtain such hash lists and use those for filtering. In the longer term, a larger common initiative can be created that makes such hash lists available for the research community working on dataset composition from web.”
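
To make LAION’s recommendation concrete, here is a minimal sketch of what hash-list filtering of a dataset manifest might look like. It is not LAION’s actual pipeline: the file names, manifest columns, and the use of plain MD5 over image bytes are all assumptions for illustration, whereas partner organizations such as IWF and C3P distribute their own hash formats (often perceptual hashes) under strict access agreements.

```python
import csv
import hashlib
from pathlib import Path

# Hypothetical inputs (not LAION's real file layout): a partner-supplied hash
# blocklist and a dataset manifest with a "local_path" column per image.
BLOCKLIST_FILE = Path("partner_hashes.txt")   # one hex-encoded hash per line
MANIFEST_IN = Path("dataset_manifest.csv")
MANIFEST_OUT = Path("dataset_manifest.filtered.csv")


def load_blocklist(path: Path) -> set[str]:
    """Read the partner-supplied hash list into a set for fast lookups."""
    return {line.strip().lower() for line in path.read_text().splitlines() if line.strip()}


def hash_image(path: Path) -> str:
    """Plain MD5 over the image bytes; real pipelines often use perceptual hashes."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def filter_manifest() -> None:
    blocklist = load_blocklist(BLOCKLIST_FILE)
    kept = dropped = 0
    with MANIFEST_IN.open(newline="") as fin, MANIFEST_OUT.open("w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            image_path = Path(row["local_path"])
            # Drop any row whose image hash appears on the blocklist.
            if image_path.exists() and hash_image(image_path) in blocklist:
                dropped += 1
                continue
            writer.writerow(row)
            kept += 1
    print(f"kept {kept} rows, removed {dropped} flagged by the blocklist")


if __name__ == "__main__":
    filter_manifest()
```

The design point is the one LAION makes: matching against a curated hash list is cheap and deterministic, but it can only ever remove content that has already been identified by organizations like IWF and C3P.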

According to LAION, the bigger concern is that some links to known CSAM scraped into a 2022 dataset are still active more than a year later.

“It is a clear hint that law enforcement bodies have to intensify the efforts to take down domains that host such image content on public web following information and recommendations by organizations like IWF and C3P, making it a safer place, also for various kinds of research related activities,” LAION’s blog said.

HRW researcher Hye Jung Han praised LAION for removing sensitive data that she flagged, while also urging more interventions.

“LAION’s responsive removal of some children’s personal photos from their dataset is very welcome, and will help to protect these children from their likenesses being misused by AI systems,” Han told Ars. “It’s now up to governments to pass child data protection laws that would protect all children’s privacy online.”

Although LAION’s blog said that the content removals represented an “upper bound” of CSAM that existed in the initial dataset, AI specialist and Creative.AI co-founder Alex Champandard told Ars that he’s skeptical that all CSAM was removed.

“They only filter out previously identified CSAM, which is only a partial solution,” Champandard told Ars. “Statistically speaking, most instances of CSAM have likely never been reported nor investigated by C3P or IWF. A more reasonable estimate of the problem is about 25,000 instances of things you’d never want to train generative models on—maybe even 50,000.”

Champandard agreed with Han that more regulations are needed to protect people from AI harms when training data is scraped from the web.

“There’s room for improvement on all fronts: privacy, copyright, illegal content, etc.,” Champandard said. Because “there are too many data rights being broken with such web-scraped datasets,” Champandard suggested that datasets like LAION’s won’t “stand the test of time.”

“LAION is simply operating in the regulatory gap and lag in the judiciary system until policymakers realize the magnitude of the problem,” Champandard said.

Apple “clearly underreporting” child sex abuse, watchdogs say

After years of controversy over plans to scan iCloud to find more child sexual abuse materials (CSAM), Apple abandoned those plans last year. Now, child safety experts have accused the tech giant not only of failing to flag CSAM exchanged and stored on its services, including iCloud, iMessage, and FaceTime, but also of failing to report all the CSAM that is flagged.

The United Kingdom’s National Society for the Prevention of Cruelty to Children (NSPCC) shared UK police data with The Guardian showing that Apple is “vastly undercounting how often” CSAM is found globally on its services.

According to the NSPCC, police investigated more CSAM cases in just the UK alone in 2023 than Apple reported globally for the entire year. Between April 2022 and March 2023 in England and Wales, the NSPCC found, “Apple was implicated in 337 recorded offenses of child abuse images.” But in 2023, Apple only reported 267 instances of CSAM to the National Center for Missing & Exploited Children (NCMEC), supposedly representing all the CSAM on its platforms worldwide, The Guardian reported.

Large tech companies in the US must report CSAM to NCMEC when it’s found, but while Apple reports a couple hundred CSAM cases annually, its big tech peers like Meta and Google report millions, NCMEC’s report showed. Experts told The Guardian that there’s ongoing concern that Apple “clearly” undercounts CSAM on its platforms.

Richard Collard, the NSPCC’s head of child safety online policy, told The Guardian that he believes Apple’s child safety efforts need major improvements.

“There is a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities,” Collard told The Guardian. “Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the rollout of the Online Safety Act in the UK.”

Outside the UK, other child safety experts shared Collard’s concerns. Sarah Gardner, the CEO of a Los Angeles-based child protection organization called the Heat Initiative, told The Guardian that she considers Apple’s platforms a “black hole” obscuring CSAM. And she expects that Apple’s efforts to bring AI to its platforms will intensify the problem, potentially making it easier to spread AI-generated CSAM in an environment where sexual predators may expect less enforcement.

“Apple does not detect CSAM in the majority of its environments at scale, at all,” Gardner told The Guardian.

Gardner agreed with Collard that Apple is “clearly underreporting” and has “not invested in trust and safety teams to be able to handle this” as it rushes to bring sophisticated AI features to its platforms. Last month, Apple integrated ChatGPT into Siri, iOS, and macOS, perhaps setting expectations for continually enhanced generative AI features to be touted in future Apple gear.

“The company is moving ahead to a territory that we know could be incredibly detrimental and dangerous to children without the track record of being able to handle it,” Gardner told The Guardian.

So far, Apple has not commented on the NSPCC’s report. Last September, Apple did respond to the Heat Initiative’s demands to detect more CSAM, saying that rather than focusing on scanning for illegal content, its focus is on connecting vulnerable or victimized users directly with local resources and law enforcement that can assist them in their communities.
