

Bombshell report exposes how Meta relied on scam ad profits to fund AI


“High risk” versus “high value”

Meta goosed its revenue by targeting users likely to click on scam ads, docs show.

Internal documents have revealed that Meta projected it would earn billions of dollars from scam ads that it ignored and that its platforms then targeted to the users most likely to click on them.

In a lengthy report, Reuters exposed five years of Meta practices and failures that allowed scammers to take advantage of users of Facebook, Instagram, and WhatsApp.

Documents showed that internally, Meta was hesitant to abruptly remove accounts, even those considered some of the “scammiest scammers,” out of concern that a drop in revenue could diminish resources needed for artificial intelligence growth.

Instead of promptly removing bad actors, Meta allowed “high value accounts” to “accrue more than 500 strikes without Meta shutting them down,” Reuters reported. The more strikes a bad actor accrued, the more Meta could charge to run ads, as Meta’s documents showed the company “penalized” scammers by charging higher ad rates. Meanwhile, Meta acknowledged in documents that its systems helped scammers target users most likely to click on their ads.

“Users who click on scam ads are likely to see more of them because of Meta’s ad-personalization system, which tries to deliver ads based on a user’s interests,” Reuters reported.

Internally, Meta estimates that users across its apps in total encounter 15 billion “high risk” scam ads a day. That’s on top of 22 billion organic scam attempts that Meta users are exposed to daily, a 2024 document showed. Last year, the company projected that about $16 billion, which represents about 10 percent of its revenue, would come from scam ads.

“High risk” scam ads strive to sell users fake products or investment schemes, Reuters noted. Common scams in this category include ads selling banned medical products or promoting sketchy entities, such as illegal online casinos. However, Meta is most concerned about “imposter” ads, which impersonate celebrities or big brands, partners that Meta fears may halt advertising or engagement on its apps if such scams aren’t quickly stopped.

“Hey it’s me,” one scam advertisement using Elon Musk’s photo read. “I have a gift for you text me.” Another using Donald Trump’s photo claimed the US president was offering $710 to every American as “tariff relief.” Perhaps most depressingly, a third posed as a real law firm, offering advice on how to avoid falling victim to online scams.

Meta removed these particular ads after Reuters flagged them, but in 2024, Meta earned about $7 billion from “high risk” ads like these alone, Reuters reported.

Sandeep Abraham, a former Meta safety investigator and fraud examiner who now runs the consultancy Risky Business Solutions, told Reuters that regulators should intervene.

“If regulators wouldn’t tolerate banks profiting from fraud, they shouldn’t tolerate it in tech,” Abraham said.

Meta won’t disclose how much it made off scam ads

Meta spokesperson Andy Stone told Reuters that its collection of documents—which were created between 2021 and 2025 by Meta’s finance, lobbying, engineering, and safety divisions—“present a selective view that distorts Meta’s approach to fraud and scams.”

Stone claimed that Meta’s estimate that it would earn 10 percent of its 2024 revenue from scam ads was “rough and overly-inclusive.” He suggested the actual amount Meta earned was much lower but declined to specify the true amount. He also said that Meta’s most recent investor disclosures note that scam ads “adversely affect” Meta’s revenue.

“We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either,” Stone said.

Despite those efforts, this spring, Meta’s safety team “estimated that the company’s platforms were involved in a third of all successful scams in the US,” Reuters reported. In other internal documents around the same time, Meta staff concluded that “it is easier to advertise scams on Meta platforms than Google,” acknowledging that Meta’s rivals were better at “weeding out fraud.”

As Meta tells it, these seemingly dismal documents came amid vast improvements in its fraud protections. “Over the past 18 months, we have reduced user reports of scam ads globally by 58 percent and, so far in 2025, we’ve removed more than 134 million pieces of scam ad content,” Stone told Reuters.

According to Reuters, the problem may be the pace Meta sets in combating scammers. In 2023, Meta laid off “everyone who worked on the team handling advertiser concerns about brand-rights issues,” then ordered safety staffers to limit their use of computing resources so that more could be devoted to virtual reality and AI. A 2024 document showed Meta recommended a “moderate” approach to enforcement, planning to reduce revenue “attributable to scams, illegal gambling and prohibited goods” by 1–3 percentage points each year starting in 2024, which would cut it roughly in half by 2027. More recently, a 2025 document showed Meta continues to weigh how “abrupt reductions of scam advertising revenue could affect its business projections.”

Eventually, Meta “substantially expanded” its teams that track scam ads, Stone told Reuters. But Meta also took steps to ensure its revenue didn’t take too hard a hit while the company needed vast resources—$72 billion—to invest in AI, Reuters reported.

For example, in February, Meta told “the team responsible for vetting questionable advertisers” that they weren’t “allowed to take actions that could cost Meta more than 0.15 percent of the company’s total revenue,” Reuters reported. That threshold works out to roughly $135 million in revenue per scam account, Reuters noted. Stone pushed back, saying the team was never given “a hard limit” on what the manager described as “specific revenue guardrails.”

“Let’s be cautious,” the team’s manager wrote, warning that Meta didn’t want to lose revenue by blocking “benign” ads mistakenly swept up in enforcement.

Meta should donate scam ad profits, ex-exec says

Documents showed that Meta prioritized taking action when it risked regulatory fines, although revenue from scam ads was worth roughly three times the highest fines it could face. What Meta may have feared most was that officials would require disgorgement of its ill-gotten gains rather than imposing fines.

Meta appeared less likely to ramp up enforcement in response to police requests. Documents showed that police in Singapore flagged “146 examples of scams targeting that country’s users last fall,” Reuters reported. Only 23 percent violated Meta’s policies; the rest “violate the spirit of the policy, but not the letter,” a Meta presentation said.

The scams that Meta failed to flag included crypto schemes, fake concert tickets, and deals “too good to be true,” like 80 percent off a desirable item from a high-fashion brand. Meta also looked past fake job ads that claimed to be hiring for Big Tech companies.

Rob Leathern, who led Meta’s business integrity unit working to prevent scam ads until he left in 2020, told Wired that it’s hard to “know how bad it’s gotten or what the current state is,” since Meta and other social media platforms don’t give outside researchers access to large random samples of ads.

With such access, researchers like Leathern and Rob Goldman, Meta’s former vice president of ads, could provide “scorecards” showing how well different platforms work to combat scams. Together, Leathern and Goldman launched a nonprofit called CollectiveMetrics.org in hopes of “bringing more transparency to digital advertising in order to fight deceptive ads,” Wired reported.

“I want there to be more transparency. I want third parties, researchers, academics, nonprofits, whoever, to be able to actually assess how good of a job these platforms are doing at stopping scams and fraud,” Leathern told Wired. “We’d like to move to actual measurement of the problem and help foster an understanding.”

Another meaningful step that Leathern thinks companies like Meta should take to protect users would be to notify them when the company discovers they clicked on a scam ad, rather than targeting them with more scam ads, as Reuters suggested was Meta’s practice.

“These scammers aren’t getting people’s money on day one, typically. So there’s a window to take action,” he said, recommending that platforms donate ill-gotten gains from running scam ads to “fund nonprofits to educate people about how to recognize these kinds of scams or problems.”

“There’s lots that could be done with funds that come from these bad guys,” Leathern said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Feds seize $15 billion from alleged forced labor scam built on “human suffering”

Federal prosecutors have seized $15 billion from the alleged kingpin of an operation that used imprisoned laborers to trick unsuspecting people into making investments in phony funds, often after spending months faking romantic relationships with the victims.

Such “pig butchering” scams have operated for years. They typically begin when members of the operation initiate conversations with people on social media and then spend months messaging them. Often, the scammers pose as attractive individuals who feign romantic interest in the victim.

Forced labor, phone farms, and human suffering

Eventually, conversations turn to phony investment funds with the end goal of convincing the victim to transfer large amounts of bitcoin. In many cases, the scammers are trafficked and held against their will in compounds surrounded by fences and barbed wire.

On Tuesday, federal prosecutors unsealed an indictment against Chen Zhi, the founder and chairman of a multinational business conglomerate based in Cambodia. It alleged that Zhi led such a forced-labor scam operation, which, with the help of unnamed co-conspirators, netted billions of dollars from victims.

“The defendant CHEN ZHI and his co-conspirators designed the compounds to maximize profits and personally ensured that they had the necessary infrastructure to reach as many victims as possible,” prosecutors wrote in the court document, filed in US District Court for the Eastern District of New York. The indictment continued:

For example, in or about 2018, Co-Conspirator-1 was involved in procuring millions of mobile telephone numbers and account passwords from an illicit online marketplace. In or about 2019, Co-Conspirator-3 helped oversee construction of the Golden Fortune compound. CHEN himself maintained documents describing and depicting “phone farms,” automated call centers used to facilitate cryptocurrency investment fraud and other cybercrimes, including the below image:

[Image from the indictment depicting a “phone farm.” Credit: Justice Department]

Prosecutors said Zhi is the founder and chairman of Prince Group, a Cambodian corporate conglomerate that ostensibly operated dozens of legitimate business entities in more than 30 countries. In secret, however, Zhi and top executives built Prince Group into one of Asia’s largest transnational criminal organizations. Zhi’s whereabouts are unknown.



EU investigates Apple, Google, and Microsoft over handling of online scams

The EU is set to scrutinize whether Apple, Google, and Microsoft are failing to adequately police financial fraud online, as it steps up oversight of how Big Tech operates.

The EU’s tech chief, Henna Virkkunen, told the Financial Times that on Tuesday the bloc’s regulators would send formal requests for information to the three US Big Tech groups, as well as to global accommodation platform Booking Holdings, under powers granted by the Digital Services Act to tackle financial scams.

“We see that more and more criminal actions are taking place online,” Virkkunen said. “We have to make sure that online platforms really take all their efforts to detect and prevent that kind of illegal content.”

The move, which could later lead to a formal investigation and potential fines against the companies, comes amid transatlantic tensions over the EU’s digital rulebook. US President Donald Trump has threatened to punish countries that “discriminate” against US companies with higher tariffs.

Virkkunen stressed that the commission looked at the operations of individual companies rather than where they were based. She will scrutinize how Apple and Google handle fake applications, such as fake banking apps, in their app stores.

She said regulators would also look at fake search results in the search engines of Google and Microsoft’s Bing. The bloc wants to have more information about the approach Booking Holdings, whose biggest subsidiary Booking.com is based in Amsterdam, is taking to fake accommodation listings. It is the only Europe-based company among the four set to be scrutinized.



Regrets: Actors who sold AI avatars stuck in Black Mirror-esque dystopia

In a Black Mirror-esque turn, some cash-strapped actors who didn’t fully understand the consequences are regretting selling their likenesses to be used in AI videos that they consider embarrassing, damaging, or harmful, AFP reported.

Among them is a 29-year-old New York-based actor, Adam Coy, who licensed rights to his face and voice to a company called MCM for one year for $1,000 without thinking, “am I crossing a line by doing this?” His partner’s mother later found videos where he appeared as a doomsayer predicting disasters, he told the AFP.

South Korean actor Simon Lee’s AI likeness was similarly used to spook naïve Internet users but in a potentially more harmful way. He told the AFP that he was “stunned” to find his AI avatar promoting “questionable health cures on TikTok and Instagram,” feeling ashamed to have his face linked to obvious scams.

As AI avatar technology improves, the temptation to license likenesses will likely grow. One of the most successful companies that’s recruiting AI avatars, UK-based Synthesia, doubled its valuation to $2.1 billion in January, CNBC reported. And just last week, Synthesia struck a $2 billion deal with Shutterstock that will make its AI avatars more human-like, The Guardian reported.

To ensure that actors are incentivized to license their likenesses, Synthesia also recently launched an equity fund. According to the company, actors behind the most popular AI avatars or featured in Synthesia marketing campaigns will be granted options in “a pool of our company shares” worth $1 million.

“These actors will be part of the program for up to four years, during which their equity awards will vest monthly,” Synthesia said.

For actors, selling their AI likeness seems quick and painless, and perhaps increasingly lucrative. All they have to do is show up, make a bunch of different facial expressions in front of a green screen, then collect their checks. But Alyssa Malchiodi, a lawyer who has advocated on behalf of actors, told the AFP that “the clients I’ve worked with didn’t fully understand what they were agreeing to at the time,” blindly signing contracts with “clauses considered abusive,” some even granting “worldwide, unlimited, irrevocable exploitation, with no right of withdrawal.”



Google has no duty to refund gift card scam victims, judge finds

Freeman ruled that “May suffered economic harm because of third-party scammers’ fraudulent inducement, not Google’s omission or misrepresentation.”

Additionally, May failed to show that Google had any duty to refund customers after Google cited Target and Walmart policies to show that it’s common to refuse refunds.

Scam victims did not use gift card “as designed”

Freeman mostly sided with Google, deciding that the company engaged in no unfair practices, while noting that May had not used the gift cards “in their designed way.” The judge also agreed with Google that May’s funds were not considered stolen at the time she purchased the gift cards, because May still controlled the funds at that point in time.

Additionally, May’s attempt to argue that Google has the technology to detect scams failed, Freeman wrote, because May couldn’t prove that Google deployed that technology when her particular scam purchases were made. Even after May argued that she reported the theft to Google, Freeman wrote, May’s complaint failed because “there is no allegation that Google had a duty to investigate her report.”

Ultimately, May’s complaint “identifies no public policy suggesting Google has a duty to refund the scammed victims or that the harm of Google’s conduct outweighs any benefits,” Freeman concluded.

In her order, Freeman granted leave to amend some claims within the next 45 days, but Ars could not immediately reach May’s lawyer to confirm whether the complaint would likely be amended. Notably, however, the judge dismissed a claim seeking triple damages because May’s complaint “failed to show a likelihood that May will be a victim of gift card scams again given her awareness of such scams,” which may deflate May’s interest in amending.

That particular part of the ruling may be especially frustrating for May, whose complaint was sparked by a claim that she never would have been victimized if Google had provided adequate warnings of scams.

Google did not immediately respond to Ars’ request to comment.
