After “glitter bomb,” cops arrested former cop who criticized current cops online

The police claimed that “the fraudulent Facebook pages posted comments on Village of Orland Park social media sites while also soliciting friend requests from Orland Park Police employees and other citizens, portraying the likeness of Deputy Chief of Police Brian West”—and said that this constituted Disorderly Conduct and False Personation, both misdemeanors.

West got permission from his boss to launch a criminal investigation, which soon turned into search warrants that surfaced a name: retired Orland Park Sergeant Ken Kovac, who had left the department in 2019 after two decades of service. Kovac was charged, and he surrendered himself at the Orland Park Police Department on April 7, 2024.

The police then issued their press release, letting their community know that West had witnessed “demeaning comments in reference to his supervisory position within the department from Kovac’s posts on social media”—which doesn’t sound like any sort of crime. They also wanted to let concerned citizens know that West “epitomizes the principles of public service” and that “Deputy Chief West’s apprehensions were treated with the utmost seriousness and underwent a thorough investigation.”

Okay.

Despite the “utmost seriousness” of this Very Serious Investigation, a judge wasn’t having any of it. In January 2025, Cook County Judge Mohammad Ahmad threw out both charges against Kovac.

Kovac, of course, was thrilled. His lawyer told a local Patch reporter, “These charges never should have been brought. Ken Kovac made a Facebook account that poked fun at the Deputy Chief of the Orland Park Police Department. The Deputy Chief didn’t like it and tried to use the criminal legal system to get even.”

Orland Park was not backing down, however, blaming prosecutors for the loss. “Despite compelling evidence in the case, the Cook County State’s Attorney’s Office was unable to secure a prosecution, failing in its responsibility to protect Deputy Chief West as a victim of these malicious acts,” the village manager told Patch. “The Village of Orland Park is deeply disappointed by this outcome and stands unwavering in its support of former Deputy Chief West.”

The drama took its most recent, entirely predictable, turn this week when Kovac sued the officials who had arrested him. He told the Chicago Sun-Times that he had been embarrassed about being fingerprinted and processed “at the police department that I was previously employed at by people that I used to work with and for.”

Orland Park told the paper that it “stands by its actions and those of its employees and remains confident that they were appropriate and fully compliant with the law.”

Meta plans to test and tinker with X’s community notes algorithm

Meta also confirmed that it won’t be reducing visibility of misleading posts with community notes. That’s a change from the prior system, Meta noted, which had penalties associated with fact-checking.

According to Meta, X’s algorithm cannot be gamed, supposedly safeguarding “against organized campaigns” striving to manipulate notes and “influence what notes get published or what they say.” Meta claims it will rely on external research on community notes to avoid that pitfall, but as recently as last October, outside researchers had suggested that X’s Community Notes were easily sabotaged by toxic X users.

“We don’t expect this process to be perfect, but we’ll continue to improve as we learn,” Meta said.

Meta confirmed that it plans to tweak X’s algorithm over time to develop its own version of community notes and said it “may explore different or adjusted algorithms to support how Community Notes are ranked and rated.”

In a post, X’s Support account said that X was “excited” that Meta was using its “well-established, academically studied program as a foundation” for its community notes.

“Torrenting from a corporate laptop doesn’t feel right”: Meta emails unsealed

Emails discussing torrenting prove that Meta knew it was “illegal,” authors alleged. And Bashlykov’s warnings seemingly fell on deaf ears, with authors alleging that evidence showed Meta chose instead to hide its torrenting as best it could while downloading and seeding terabytes of data from multiple shadow libraries as recently as April 2024.

Meta allegedly concealed seeding

Supposedly, Meta tried to conceal the seeding by not using Facebook servers while downloading the dataset to “avoid” the “risk” of anyone “tracing back the seeder/downloader” from Facebook servers, an internal message from Meta researcher Frank Zhang said, while describing the work as in “stealth mode.” Meta also allegedly modified settings “so that the smallest amount of seeding possible could occur,” a Meta executive in charge of project management, Michael Clark, said in a deposition.

Now that new information has come to light, authors claim that Meta staff involved in the decision to torrent LibGen must be deposed again, because allegedly the new facts “contradict prior deposition testimony.”

Mark Zuckerberg, for example, claimed to have no involvement in decisions to use LibGen to train AI models. But unredacted messages show the “decision to use LibGen occurred” after “a prior escalation to MZ,” authors alleged.

Meta did not immediately respond to Ars’ request for comment and has maintained throughout the litigation that AI training on LibGen was “fair use.”

However, Meta has previously addressed its torrenting in a motion to dismiss filed last month, telling the court that “plaintiffs do not plead a single instance in which any part of any book was, in fact, downloaded by a third party from Meta via torrent, much less that Plaintiffs’ books were somehow distributed by Meta.”

While Meta may be confident in its legal strategy despite the new torrenting wrinkle, the social media company has seemingly complicated its case by allowing authors to expand the distribution theory that’s key to winning a direct copyright infringement claim beyond just claiming that Meta’s AI outputs unlawfully distributed their works.

As limited discovery on Meta’s seeding now proceeds, Meta is not fighting the seeding aspect of the direct copyright infringement claim at this time, telling the court that it plans to “set… the record straight and debunk… this meritless allegation on summary judgment.”

AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt


Making AI crawlers squirm

Attackers explain how an anti-spam defense became an AI weapon.

Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.

And it wasn’t the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit’s CEO called out all AI companies whose crawlers he said were “a pain in the ass to block,” despite the tech industry otherwise agreeing to respect “no scraping” robots.txt rules.
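
For reference, robots.txt is a plain-text file served from a site’s root that asks crawlers to stay away from some or all pages; compliance is entirely voluntary. A minimal example asking AI crawlers to keep out might look like the sketch below (GPTBot and ClaudeBot are the documented user agents for OpenAI’s and Anthropic’s crawlers; the site layout is made up):

```
# robots.txt, served at https://example.com/robots.txt
# Directives here are requests, not enforcement; a crawler that
# ignores the file can still fetch every page on the site.

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# All other crawlers may fetch everything
User-agent: *
Disallow:
```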

Watching the controversy unfold was a software developer whom Ars has granted anonymity to discuss his development of malware (we’ll call him Aaron). Shortly after he noticed Facebook’s crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers that “clobber” websites, an attack he told Ars he hoped would give “teeth” to robots.txt.

Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
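
Nepenthes’ own code isn’t reproduced here, but the mechanics described above are straightforward to sketch. The following hypothetical Flask app (all names and parameters are invented for illustration; this is not Nepenthes’ implementation) serves an endless maze of slowly delivered, interlinked pages stuffed with Markov-chain gibberish:

```python
# tarpit_sketch.py -- a minimal, hypothetical illustration of the tarpit
# idea described above, NOT Nepenthes' actual code. Requires Flask
# (pip install flask). Every page links only to more tarpit pages, is
# served slowly, and is filled with Markov-chain babble.
import random
import time
from collections import defaultdict

from flask import Flask

app = Flask(__name__)

SEED_TEXT = (
    "the crawler follows every link it finds and stores every word it reads "
    "so the maze keeps growing and the words keep changing and nothing here "
    "is real content because the maze only exists to waste the crawler's time"
)

# Build a first-order Markov chain from the seed text.
words = SEED_TEXT.split()
CHAIN = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    CHAIN[current_word].append(next_word)


def markov_babble(length=80):
    """Generate plausible-looking but meaningless text from the chain."""
    word = random.choice(words)
    out = [word]
    for _ in range(length - 1):
        word = random.choice(CHAIN[word]) if CHAIN[word] else random.choice(words)
        out.append(word)
    return " ".join(out)


@app.route("/maze/<path:token>")
def maze(token):
    # token is ignored; it only makes every URL look unique to a crawler.
    # Respond slowly to tie up the crawler's connection (real tarpits
    # trickle bytes out; a sleep keeps this sketch simple).
    time.sleep(2)
    # Each page links to a few more randomly named pages, so a crawler
    # that keeps following links never runs out of URLs to fetch.
    links = "".join(
        f'<p><a href="/maze/{random.getrandbits(64):x}">more</a></p>'
        for _ in range(3)
    )
    return f"<html><body><p>{markov_babble()}</p>{links}</body></html>"


if __name__ == "__main__":
    app.run(port=8080)
```

Pointing a robots.txt-disallowed link at a maze like this means, at least in principle, that only crawlers ignoring the rules ever wander in.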

Tarpits were originally designed to waste spammers’ time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon. As of this writing, Aaron confirmed that Nepenthes can effectively trap all the major web crawlers. So far, only OpenAI’s crawler has managed to escape.

It’s unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft’s director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI “has been quite vigilant” and excels at detecting the “first signs of data poisoning attempts.”

Despite these efforts, he concluded that data poisoning was “a serious threat to machine learning models.” And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.

“A link to a Nepenthes location from your site will flood out valid URLs within your site’s domain name, making it unlikely the crawler will access real content,” a Nepenthes explainer reads.

The only AI company that responded to Ars’ request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.

“We’re aware of efforts to disrupt AI web crawlers,” OpenAI’s spokesperson said. “We design our systems to be resilient while respecting robots.txt and standard web practices.”

But to Aaron, the fight is not about winning. Instead, it’s about resisting the AI industry further decaying the Internet with tech that no one asked for, like chatbots that replace customer service agents or the rise of inaccurate AI search summaries. By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies’ AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.

“Ultimately, it’s like the Internet that I grew up on and loved is long gone,” Aaron told Ars. “I’m just fed up, and you know what? Let’s fight back, even if it’s not successful. Be indigestible. Grow spikes.”

Nepenthes instantly inspires another tarpit

Nepenthes was released in mid-January but was instantly popularized beyond Aaron’s expectations after tech journalist Cory Doctorow boosted a Mastodon post by tech commentator Jürgen Geuter praising the novel AI attack method. Very quickly, Aaron was shocked to see engagement with Nepenthes skyrocket.

“That’s when I realized, ‘oh this is going to be something,'” Aaron told Ars. “I’m kind of shocked by how much it’s blown up.”

It’s hard to tell how widely Nepenthes has been deployed. Site owners are discouraged from flagging when the malware has been deployed, forcing crawlers to face unknown “consequences” if they ignore robots.txt instructions.

Aaron told Ars that while “a handful” of site owners have reached out and “most people are being quiet about it,” his web server logs indicate that people are already deploying the tool. Likely, site owners want to protect their content, deter scraping, or mess with AI companies.

When software developer and hacker Gergely Nagy, who goes by the handle “algernon” online, saw Nepenthes, he was delighted. At that time, Nagy told Ars that nearly all of his server’s bandwidth was being “eaten” by AI crawlers.

Already blocking scraping and attempting to poison AI models through a simpler method, Nagy took his defense method further and created his own tarpit, Iocaine. He told Ars the tarpit immediately killed off about 94 percent of bot traffic to his site, which was primarily from AI crawlers. Soon, social media discussion drove users to inquire about Iocaine deployment, including not just individuals but also organizations wanting to take stronger steps to block scraping.

Iocaine takes ideas (not code) from Nepenthes, but it’s more intent on using the tarpit to poison AI models. Nagy used a reverse proxy to trap crawlers in an “infinite maze of garbage” in an attempt to slowly poison their data collection as much as possible for daring to ignore robots.txt.

Taking its name from “one of the deadliest poisons known to man” from The Princess Bride, Iocaine is jokingly depicted as the “deadliest poison known to AI.” While there’s no way of validating that claim, Nagy’s motto is that the more poisoning attacks that are out there, “the merrier.” He told Ars that his primary reasons for building Iocaine were to help rights holders wall off valuable content and stop AI crawlers from crawling with abandon.

Tarpits aren’t perfect weapons against AI

Running malware like Nepenthes can burden servers, too. Aaron likened the cost of running Nepenthes to running a cheap virtual machine on a Raspberry Pi, and Nagy said that serving crawlers Iocaine costs about the same as serving his website.

But Aaron told Ars that Nepenthes wasting resources is the chief objection he’s seen preventing its deployment. Critics fear that deploying Nepenthes widely will not only burden their servers but also increase the costs of powering all that AI crawling for nothing.

“That seems to be what they’re worried about more than anything,” Aaron told Ars. “The amount of power that AI models require is already astronomical, and I’m making it worse. And my view of that is, OK, so if I do nothing, AI models, they boil the planet. If I switch this on, they boil the planet. How is that my fault?”

Aaron also defends against this criticism by suggesting that a broader impact could slow down AI investment enough to possibly curb some of that energy consumption. Perhaps due to the resistance, AI companies will be pushed to seek permission first to scrape or agree to pay more content creators for training on their data.

“Any time one of these crawlers pulls from my tarpit, it’s resources they’ve consumed and will have to pay hard cash for, but, being bullshit, the money [they] have spent to get it won’t be paid back by revenue,” Aaron posted, explaining his tactic online. “It effectively raises their costs. And seeing how none of them have turned a profit yet, that’s a big problem for them. The investor money will not continue forever without the investors getting paid.”

Nagy agrees that the more anti-AI attacks there are, the greater the potential is for them to have an impact. And by releasing Iocaine, Nagy showed that social media chatter about new attacks can inspire new tools within a few days. Marcus Butler, an independent software developer, similarly built his poisoning attack called Quixotic over a few days, he told Ars. Soon afterward, he received messages from others who built their own versions of his tool.

Butler is not in the camp of wanting to destroy AI. He told Ars that he doesn’t think “tools like Quixotic (or Nepenthes) will ‘burn AI to the ground.'” Instead, he takes a more measured stance, suggesting that “these tools provide a little protection (a very little protection) against scrapers taking content and, say, reposting it or using it for training purposes.”

But for a certain sect of Internet users, every little bit of protection seemingly helps. Geuter linked Ars to a list of tools bent on sabotaging AI. Ultimately, he expects that tools like Nepenthes are “probably not gonna be useful in the long run” because AI companies can likely detect and drop gibberish from training data. But Nepenthes represents a sea change, Geuter told Ars, providing a useful tool for people who “feel helpless” in the face of endless scraping and showing that “the story of there being no alternative or choice is false.”

Criticism of tarpits as AI weapons

Critics debating Nepenthes’ utility on Hacker News suggested that most AI crawlers could easily avoid tarpits like Nepenthes, with one commenter describing the attack as being “very crawler 101.” Aaron said that was his “favorite comment” because if tarpits are considered elementary attacks, he has “2 million lines of access log that show that Google didn’t graduate.”

But efforts to poison AI or waste AI resources don’t just mess with the tech industry. Governments globally are seeking to leverage AI to solve societal problems, and attacks on AI’s resilience seemingly threaten to disrupt that progress.

Nathan VanHoudnos is a senior AI security research scientist in the federally funded CERT Division of the Carnegie Mellon University Software Engineering Institute, which partners with academia, industry, law enforcement, and government to “improve the security and resilience of computer systems and networks.” He told Ars that new threats like tarpits seem to replicate a problem that AI companies are already well aware of: “that some of the stuff that you’re going to download from the Internet might not be good for you.”

“It sounds like these tarpit creators just mainly want to cause a little bit of trouble,” VanHoudnos said. “They want to make it a little harder for these folks to get” the “better or different” data “that they’re looking for.”

VanHoudnos co-authored a paper on “Counter AI” last August, pointing out that attackers like Aaron and Nagy are limited in how much they can mess with AI models. They may have “influence over what training data is collected but may not be able to control how the data are labeled, have access to the trained model, or have access to the AI system,” the paper said.

Further, AI companies are increasingly turning to the deep web for unique data, so any efforts to wall off valuable content with tarpits may be coming right when crawling on the surface web starts to slow, VanHoudnos suggested.

But according to VanHoudnos, AI crawlers are also “relatively cheap,” and companies may deprioritize fighting against new attacks on crawlers if “there are higher-priority assets” under attack. And tarpitting “does need to be taken seriously because it is a tool in a toolkit throughout the whole life cycle of these systems. There is no silver bullet, but this is an interesting tool in a toolkit,” he said.

Offering a choice to abstain from AI training

Aaron told Ars that he never intended Nepenthes to be a major project but that he occasionally puts in work to fix bugs or add new features. He said he’d consider working on integrations for real-time reactions to crawlers if there was enough demand.

Currently, Aaron predicts that Nepenthes might be most attractive to rights holders who want AI companies to pay to scrape their data. And many people seem enthusiastic about using it to reinforce robots.txt. But “some of the most exciting people are in the ‘let it burn’ category,” Aaron said. These people are drawn to tools like Nepenthes as an act of rebellion against AI making the Internet less useful and enjoyable for users.

Geuter told Ars that he considers Nepenthes “more of a sociopolitical statement than really a technological solution (because the problem it’s trying to address isn’t purely technical, it’s social, political, legal, and needs way bigger levers).”

To Geuter, a computer scientist who has been writing about the social, political, and structural impact of tech for two decades, AI is the “most aggressive” example of “technologies that are not done ‘for us’ but ‘to us.'”

“It feels a bit like the social contract that society and the tech sector/engineering have had (you build useful things, and we’re OK with you being well-off) has been canceled from one side,” Geuter said. “And that side now wants to have its toy eat the world. People feel threatened and want the threats to stop.”

As AI evolves, so do attacks, with one 2021 study showing that increasingly stronger data poisoning attacks, for example, were able to break data sanitization defenses. Whether these attacks can ever do meaningful destruction or not, Geuter sees tarpits as a “powerful symbol” of the resistance that Aaron and Nagy readily joined.

“It’s a great sign to see that people are challenging the notion that we all have to do AI now,” Geuter said. “Because we don’t. It’s a choice. A choice that mostly benefits monopolists.”

Tarpit creators like Nagy will likely be watching to see if poisoning attacks continue growing in sophistication. On the Iocaine site—which, yes, is protected from scraping by Iocaine—he posted this call to action: “Let’s make AI poisoning the norm. If we all do it, they won’t have anything to crawl.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Reddit won’t interfere with users revolting against X with subreddit bans

A Reddit spokesperson told Ars that decisions to ban or not ban X links are user-driven. Subreddit members are allowed to suggest and institute subreddit rules, they added.

“Notably, many Reddit communities also prohibit Reddit links,” the Reddit representative pointed out. They noted that Reddit as a company doesn’t currently have any ban on links to X.

A ban against links to an entire platform isn’t outside of the ordinary for Reddit. Numerous subreddits ban social media links, Reddit’s spokesperson said. r/EarthPorn, a subreddit for landscape photography, for example, doesn’t allow website links because all posts “must be static images,” per the subreddit’s official rules. r/AskReddit, meanwhile, only allows for questions asked in the title of a Reddit post and doesn’t allow for use of the text box, including for sharing links.

“Reddit has a longstanding commitment to freedom of speech and freedom of association,” Reddit’s spokesperson said. They added that any person is free to make or moderate their own community. Those unsatisfied with a forum about Seahawks football that doesn’t have X links could feel free to make their own subreddit. Although, some of the subreddits considering X bans, like r/MadeMeSmile, already have millions of followers.

Meta bans also under discussion

As 404 Media noted, some Redditors are also pushing to block content from Facebook, Instagram, and other Meta properties in response to new Donald Trump-friendly policies instituted by owner Mark Zuckerberg, like Meta killing diversity programs and axing third-party fact-checkers.

Meta to cut 5% of employees deemed unfit for Zuckerberg’s AI-fueled future

Anticipating that 2025 will be an “intense year” requiring rapid innovation, Mark Zuckerberg reportedly announced that Meta would be cutting 5 percent of its workforce—targeting “lowest performers.”

Bloomberg reviewed the internal memo explaining the cuts, which was posted to Meta’s internal Workplace forum Tuesday. In it, Zuckerberg confirmed that Meta was shifting its strategy to “move out low performers faster” so that Meta can hire new talent to fill those vacancies this year.

“I’ve decided to raise the bar on performance management,” Zuckerberg said. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle.”

Cuts will likely impact more than 3,600 employees, as Meta’s most recent headcount in September totaled about 72,000 employees. It may not be as straightforward as letting go anyone with an unsatisfactory performance review, as Zuckerberg said that any employee not currently meeting expectations could be spared if Meta is “optimistic about their future performance,” The Wall Street Journal reported.

Any employees affected will be notified by February 10 and receive “generous severance,” Zuckerberg’s memo promised.

This is the biggest round of cuts at Meta since 2023, when Meta laid off 10,000 employees during what Zuckerberg dubbed the “year of efficiency.” Those layoffs followed a prior round where 11,000 lost their jobs and Zuckerberg realized that “leaner is better.” He told employees in 2023 that a “surprising result” from reducing the workforce was “that many things have gone faster.”

“A leaner org will execute its highest priorities faster,” Zuckerberg wrote in 2023. “People will be more productive, and their work will be more fun and fulfilling. We will become an even greater magnet for the most talented people. That’s why in our Year of Efficiency, we are focused on canceling projects that are duplicative or lower priority and making every organization as lean as possible.”

Mastodon’s founder cedes control, refuses to become next Musk or Zuckerberg

And perhaps in a nod to Meta’s recent changes, Mastodon also vowed to “invest deeply in trust and safety” and ensure “everyone, especially marginalized communities,” feels “safe” on the platform.

To become a more user-focused paradise of “resilient, governable, open and safe digital spaces,” Mastodon is going to need a lot more funding. The blog called for donations to help fund an annual operating budget of $5.1 million (5 million euros) in 2025. That’s a massive leap from the $152,476 (149,400 euros) total operating expenses Mastodon reported in 2023.

Other social networks wary of EU regulations

Mastodon has decided to continue basing its operations in Europe, while still maintaining a separate US-based nonprofit entity as a “fundraising hub,” the blog said.

It will take time, Mastodon said, to “select the appropriate jurisdiction and structure in Europe” before Mastodon can then “determine which other (subsidiary) legal structures are needed to support operations and sustainability.”

While Mastodon is carefully getting re-settled as a nonprofit in Europe, Zuckerberg this week went on Joe Rogan’s podcast to call on Donald Trump to help US tech companies fight European Union fines, Politico reported.

Some critics suggest the recent policy changes on Meta platforms were intended to win Trump’s favor, partly to get Trump on Meta’s side in the fight against the EU’s strict digital laws. According to France24, Musk’s recent combativeness with EU officials suggests Musk might team up with Zuckerberg in that fight (unlike that cage fight pitting the wealthy tech titans against each other that never happened).

Experts told France24 that EU officials may “perhaps wrongly” already be fearful about ruffling Trump’s feathers by targeting his tech allies and would likely need to use the “full legal arsenal” of EU digital laws to “stand up to Big Tech” once Trump’s next term starts.

As Big Tech prepares to continue battling EU regulators, Mastodon appears to be taking a different route, laying roots in Europe and “establishing the appropriate governance and leadership frameworks that reflect the nature and purpose of Mastodon as a whole” and “responsibly serve the community,” its blog said.

“Our core mission remains the same: to create the tools and digital spaces where people can build authentic, constructive online communities free from ads, data exploitation, manipulative algorithms, or corporate monopolies,” Mastodon’s blog said.

Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta’s vice president of human resources, Janelle Gale, told Meta employees that the shift was due to “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing.”

It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.

Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.

Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing Meta’s hate speech policies, The Intercept reported, which Zuckerberg said were “out of touch with mainstream discourse.”  Those changes included allowing anti-trans slurs previously banned, as well as permitting women to be called “property” and gay people to be called “mentally ill,” Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation” and alleged that Meta appeared to be willing to “normalize anti-LGBTQ hatred for profit.”

Meta axes third-party fact-checkers in time for second Trump term


Zuckerberg says Meta will “work with President Trump” to fight censorship.

Meta announced today that it’s ending the third-party fact-checking program it introduced in 2016, and will rely instead on a Community Notes approach similar to what’s used on Elon Musk’s X platform.

The end of third-party fact-checking and related changes to Meta policies could help the company make friends in the Trump administration and in governments of conservative-leaning states that have tried to impose legal limits on content moderation. The operator of Facebook and Instagram announced the changes in a blog post and a video message recorded by CEO Mark Zuckerberg.

“Governments and legacy media have pushed to censor more and more. A lot of this is clearly political,” Zuckerberg said. He said the recent elections “feel like a cultural tipping point toward once again prioritizing speech.”

“We’re going to get rid of fact-checkers and replace them with Community Notes, similar to X, starting in the US,” Zuckerberg said. “After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth. But the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US.”

Meta says the soon-to-be-discontinued fact-checking program includes over 90 third-party organizations that evaluate posts in over 60 languages. The US-based fact-checkers are AFP USA, Check Your Fact, Factcheck.org, Lead Stories, PolitiFact, Science Feedback, Reuters Fact Check, TelevisaUnivision, The Dispatch, and USA Today.

The independent fact-checkers rate the accuracy of posts and apply ratings such as False, Altered, Partly False, Missing Context, Satire, and True. Meta adds notices to posts rated as false or misleading and notifies users before they try to share the content or if they shared it in the past.

Meta: Experts “have their own biases”

In the blog post that accompanied Zuckerberg’s video message, Chief Global Affairs Officer Joel Kaplan said the 2016 decision to use independent fact-checkers seemed like “the best and most reasonable choice at the time… The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.”

But experts “have their own biases and perspectives,” and the program imposed “intrusive labels and reduced distribution” of content “that people would understand to be legitimate political speech and debate,” Kaplan wrote.

The X-style Community Notes system lets the community “decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see… Just like they do on X, Community Notes [on Meta sites] will require agreement between people with a range of perspectives to help prevent biased ratings,” Kaplan wrote.
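
X has open-sourced its Community Notes scoring code, which is built around a matrix-factorization model: each rating is explained partly by how well a rater’s and a note’s latent “viewpoints” align, and only the appeal left over after that alignment (the note’s intercept) can earn a “helpful” rating. A heavily simplified sketch of the idea, with toy data and an illustrative threshold rather than X’s production model, is below:

```python
# bridging_sketch.py -- a toy illustration of bridging-based note scoring
# in the spirit of X's open-source Community Notes ranker. The data,
# dimensions, and threshold are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# ratings[u, n]: 1.0 = helpful, 0.0 = not helpful, NaN = no rating.
ratings = np.array([
    [1.0, 1.0, np.nan, 0.0],
    [1.0, np.nan, 1.0, 0.0],
    [np.nan, 1.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 1.0],
])
n_users, n_notes = ratings.shape
obs = ~np.isnan(ratings)

# Model: rating ~= mu + user_bias + note_bias + user_factor . note_factor.
# The factor term soaks up viewpoint alignment, so the note bias
# (intercept) reflects appeal across viewpoints -- the "bridging" signal.
mu = 0.0
user_b = np.zeros(n_users)
note_b = np.zeros(n_notes)
user_f = rng.normal(0.0, 0.1, (n_users, 1))
note_f = rng.normal(0.0, 0.1, (n_notes, 1))

lr, reg = 0.05, 0.03
for _ in range(3000):  # plain gradient descent on regularized squared error
    pred = mu + user_b[:, None] + note_b[None, :] + user_f @ note_f.T
    err = np.where(obs, ratings - pred, 0.0)
    mu += lr * err.sum() / obs.sum()
    user_b += lr * (err.sum(axis=1) - reg * user_b)
    note_b += lr * (err.sum(axis=0) - reg * note_b)
    user_f, note_f = (
        user_f + lr * (err @ note_f - reg * user_f),
        note_f + lr * (err.T @ user_f - reg * note_f),
    )

THRESHOLD = 0.35  # illustrative cutoff, not X's tuned value
for n in range(n_notes):
    verdict = "helpful" if note_b[n] >= THRESHOLD else "needs more ratings"
    print(f"note {n}: intercept {note_b[n]:+.2f} -> {verdict}")
```

The practical upshot of this kind of model is that a note rated highly only by one ideological cluster gets its score absorbed by the factor term and ends up with a low intercept, while a note that raters on opposite sides both endorse keeps a high intercept and surfaces.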

The end of third-party fact-checking will be implemented in the US before other countries. Meta will also move its internal trust and safety and content moderation teams out of California, Zuckerberg said. “Our US-based content review is going to be based in Texas. As we work to promote free expression, I think it will help us build trust to do this work in places where there is less concern about the bias of our teams,” he said. Meta will continue to take “legitimately bad stuff” like drugs, terrorism, and child exploitation “very seriously,” Zuckerberg said.

Zuckerberg pledges to work with Trump

Meta will “phase in a more comprehensive community notes system” over the next couple of months, Zuckerberg said. Meta, which donated $1 million to Trump’s inaugural fund, will also “work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more,” Zuckerberg said.

Zuckerberg said that “Europe has an ever-increasing number of laws institutionalizing censorship,” that “Latin American countries have secret courts that can quietly order companies to take things down,” and that “China has censored apps from even working in the country.” Meta needs “the support of the US government” to push back against other countries’ content-restriction orders, he said.

“That’s why it’s been so difficult over the past four years when even the US government has pushed for censorship,” Zuckerberg said, referring to the Biden administration. “By going after US and other American companies, it has emboldened other governments to go even further. But now we have the opportunity to restore free expression, and I am excited to take it.”

Brendan Carr, Trump’s pick to lead the Federal Communications Commission, praised Meta’s policy changes. Carr has promised to shift the FCC’s focus from regulating telecom companies to cracking down on Big Tech and media companies that he alleges are part of a “censorship cartel.”

“President Trump’s resolute and strong support for the free speech rights of everyday Americans is already paying dividends,” Carr wrote on X today. “Facebook’s announcements is [sic] a good step in the right direction. I look forward to monitoring these developments and their implementation. The work continues until the censorship cartel is completely dismantled and destroyed.”

Group: Meta is “saying the truth doesn’t matter”

Meta’s changes were criticized by Public Citizen, a nonprofit advocacy group founded by Ralph Nader. “Asking users to fact-check themselves is tantamount to Meta saying the truth doesn’t matter,” Public Citizen co-president Lisa Gilbert said. “Misinformation will flow more freely with this policy change, as we cannot assume that corrections will be made when false information proliferates. The American people deserve accurate information about our elections, health risks, the environment, and much more.”

Media advocacy group Free Press said that “Zuckerberg is one of many billionaires who are cozying up to dangerous demagogues like Trump and pushing initiatives that favor their bottom lines at the expense of everything and everyone else.” Meta appears to be abandoning its “responsibility to protect its many users, and align[ing] the company more closely with an incoming president who’s a known enemy of accountability,” Free Press Senior Counsel Nora Benavidez said.

X’s Community Notes system was criticized in a recent report by the Center for Countering Digital Hate (CCDH), which said it “found that 74 percent of accurate community notes on US election misinformation never get shown to users.” (X previously sued the CCDH, but the lawsuit was dismissed by a federal judge.)

Previewing other changes, Zuckerberg said that Meta will eliminate content restrictions “that are just out of touch with mainstream discourse” and change how it enforces policies “to reduce the mistakes that account for the vast majority of censorship on our platforms.”

“We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower severity violations, we’re going to rely on someone reporting an issue before we take action,” he said. “The problem is the filters make mistakes, and they take down a lot of content that they shouldn’t. So by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms.”

Meta to relax filters, recommend more political content

Zuckerberg said Meta will re-tune content filters “to require much higher confidence before taking down content.” He said this means Meta will “catch less bad stuff” but will “also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
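
Mechanically, those changes amount to two knobs: tier enforcement by severity (automated action only for the worst categories, user reports for the rest) and raise the classifier confidence required before anything is removed automatically. A hypothetical sketch of such a decision function follows; the categories, thresholds, and action names are invented for illustration and are not Meta’s actual systems.

```python
# moderation_sketch.py -- a hypothetical illustration of severity-tiered,
# confidence-thresholded enforcement as described above. Everything here
# (categories, thresholds, actions) is invented for illustration.
from dataclasses import dataclass

# High-severity categories keep automated takedowns but demand more
# classifier confidence; lower-severity ones wait for a user report.
AUTO_REMOVE_THRESHOLDS = {
    "child_safety": 0.85,
    "terrorism": 0.90,
    "drug_sales": 0.92,
}
REPORT_ONLY_CATEGORIES = {"spam", "borderline_political_speech"}


@dataclass
class Classification:
    category: str
    confidence: float  # classifier score in [0, 1]


def enforcement_action(result: Classification, user_reported: bool) -> str:
    if result.category in AUTO_REMOVE_THRESHOLDS:
        if result.confidence >= AUTO_REMOVE_THRESHOLDS[result.category]:
            return "remove"
        return "queue_for_human_review"
    if result.category in REPORT_ONLY_CATEGORIES:
        # Lower-severity content is only acted on after someone reports it.
        return "queue_for_human_review" if user_reported else "no_action"
    return "no_action"


print(enforcement_action(Classification("drug_sales", 0.95), False))  # remove
print(enforcement_action(Classification("spam", 0.99), False))        # no_action
print(enforcement_action(Classification("spam", 0.40), True))         # queue_for_human_review
```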

Meta has “built a lot of complex systems to moderate content,” he noted. Even if these systems “accidentally censor just 1 percent of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship,” he said.

Kaplan wrote that Meta has censored too much harmless content and that “too many people find themselves wrongly locked up in ‘Facebook jail.'”

“In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content,” Kaplan wrote. “This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable.”

Another upcoming change is that Meta will recommend more political posts. “For a while, the community asked to see less politics because it was making people stressed, so we stopped recommending these posts,” Zuckerberg said. “But it feels like we’re in a new era now, and we’re starting to get feedback that people want to see this content again, so we’re going to start phasing this back into Facebook, Instagram, and Threads while working to keep the communities friendly and positive.”

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

EU fines Meta €800 million for breaking law with Marketplace

During her tenure as the EU’s competition chief, Margrethe Vestager has repeatedly targeted the world’s biggest tech companies, with some of the toughest actions against tech giants such as Apple, Google, and Microsoft.

The EU Commission on Thursday said Meta is “dominant in the market for personal social networks (…) as well as in the national markets for online display advertising on social media.”

Facebook Marketplace, launched in 2016, is a popular platform to buy and sell second-hand goods, especially household items such as furniture.

Meta has argued that it operates in a highly competitive environment. In a post published on Thursday, the tech giant said marketplaces in Europe continue “to grow and dominate in the EU,” pointing to platforms such as eBay, Leboncoin in France, and Marktplaats in the Netherlands as “formidable competitors.”

Meta’s fine comes during a period of political transition in both the EU and the US.

Brussels officials have been aggressive both in their rhetoric and their antitrust probes against Big Tech giants as they sought to open markets for local start-ups.

In the past five years, EU regulators have also passed a landmark piece of legislation—the Digital Markets Act—with the aim of slowing down dominant tech players and boosting the local tech industry.

However, some observers expect the new commission, which is set to start a new five-year term within weeks, to strike a more conciliatory tone over fears of retaliation from the incoming Trump administration.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Meta beats suit over tool that lets Facebook users unfollow everything

Meta has defeated a lawsuit—for now—that attempted to invoke Section 230 protections for a third-party tool that would have made it easy for Facebook users to toggle on and off their news feeds as they pleased.

The lawsuit was filed by Ethan Zuckerman, a professor at University of Massachusetts Amherst. He feared that Meta might sue to block his tool, Unfollow Everything 2.0, because Meta threatened to sue to block the original tool when it was released by another developer. In May, Zuckerman told Ars that he was “suing Facebook to make it better” and planned to use Section 230’s shield to do it.

Zuckerman’s novel legal theory argued that Congress always intended for Section 230 to protect third-party tools designed to empower users to take control over potentially toxic online environments. In his complaint, Zuckerman tried to convince a US district court in California that:

Section 230(c)(2)(B) immunizes from legal liability “a provider of software or enabling tools that filter, screen, allow, or disallow content that the provider or user considers obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Through this provision, Congress intended to promote the development of filtering tools that enable users to curate their online experiences and avoid content they would rather not see.

Digital rights advocates, including the Electronic Frontier Foundation (EFF), the Center for Democracy and Technology, and the American Civil Liberties Union of Northern California, supported Zuckerman’s case, urging that the court protect middleware. But on Thursday, Judge Jacqueline Scott Corley granted Meta’s motion to dismiss at a hearing.

Corley has not yet posted her order on the motion to dismiss, but Zuckerman’s lawyers at the Knight Institute confirmed to Ars that their Section 230 argument did not factor into her decision. In a statement, lawyers said that Corley left the door open on the Section 230 claims, and EFF senior staff attorney Sophia Cope, who was at the hearing, told Ars Corley agreed that on “the merits the case raises important issues.”

Facebook, Nvidia push SCOTUS to limit “nuisance” investor suits after scandals


Facebook, Nvidia ask SCOTUS to narrow legal paths to recover investor losses.

The Supreme Court will soon weigh two cases that could potentially make it harder for misled investors to sue Big Tech companies after major scandals.

One case involves one of the largest tech scandals of all time, the Facebook-Cambridge Analytica data breach. In 2019, Facebook agreed to pay “more than $5 billion in civil penalties to settle charges by the Federal Trade Commission (FTC) and the Securities and Exchange Commission (SEC) that it had misled its users and investors over the privacy and security of user data on its platform,” a Supreme Court filing said.

The other case involves an allegation that Nvidia intentionally hid how much of its 2017–2018 GPU demand was due to a volatile cryptocurrency boom and not Nvidia’s core gaming business—allegedly misleading investors ahead of a crypto crash. After the bust, Nvidia suddenly had to slash half a billion dollars from its earnings projection, and market experts later estimated that the firm had understated its crypto-related revenue by more than a billion. In 2022, Nvidia paid a $5.5 million SEC penalty over the inadequate disclosures that one SEC chief said “deprived investors of critical information to evaluate the company’s business in a key market.”

Investors, however, have not yet settled their own legal challenges. In both cases, the investors suing persuaded the 9th Circuit to let their claims that the companies misled investors move forward. But now, the tech companies have appealed to the Supreme Court, hoping to reverse those rulings.

In case documents, each claimed that their investors have not satisfied high legal bars, which Nvidia argued Congress designed to prevent “frivolous” or “nuisance” lawsuits from going on “fishing expeditions” to claim securities “fraud by hindsight.” Both warned that SCOTUS upholding the 9th Circuit rulings risked flooding courts with frivolous suits, with Nvidia cautioning that such lawsuits can be “used to injure the entire US economy.”

The Supreme Court will hear arguments in the Facebook case on Wednesday, November 6, then the Nvidia case on November 13.

SCOTUS may be persuaded by tech companies still stuck coping with the aftermath of scandals. A former SEC lawyer, Andrew Feller, told Reuters that the Supreme Court’s conservative majority may continue its “recent track record of handing down business-friendly decisions that narrowed the authority of federal regulators” in these cases. Both cases give justices opportunities to “rein in the power of private plaintiffs to enforce federal rules aimed at punishing corporate misconduct,” Reuters reported.

Facebook defends describing risk as hypothetical

The Facebook case centers on an SEC disclosure where Facebook said that its business may be harmed by a data breach, posing that as a hypothetical, without mentioning the ongoing Cambridge Analytica data breach. Specifically, Facebook wrote, “[a]ny failure to prevent or mitigate . . . improper access to or disclosure of our data or user data . . . could result in the loss or misuse of such data, which could harm our business and reputation and diminish our competitive position.”

Investors felt misled, accusing Facebook of hiding the breach by only presenting the risk as a hypothetical that implied no breach had ever occurred in the past and certainly did not disclose the present risk.

However, in a SCOTUS filing, Facebook insisted that “no reasonable investor would interpret a risk disclosure using probabilistic, forward-looking language as impliedly representing that the specified triggering event had never occurred in the past.”

Facebook is now arguing that SCOTUS agreeing that the company should have disclosed the major data breach “would result in a regime under which companies would be required to disclose every previous material incident they have experienced—effectively creating a sweeping regime of omissions liability.”

According to Facebook, news broke about the Cambridge Analytica data breach in 2015, and its business wasn’t immediately harmed. Following that logic, the social media company hopes that SCOTUS will agree that Facebook was only required to disclose the data breach in its SEC filing if Facebook knew its business would likely be harmed from the ongoing breach.

By affirming the 9th Circuit ruling, Facebook alleged, SCOTUS would be “vastly expanding the circumstances in which risk disclosures are deemed false or misleading,” exposing to legal challenges “a wide range of previously immune forward-looking statements—revenue projections, future business plans or objectives, and the like.”

But investors suing argue that Facebook is still being misleading about the data scandal in its court filings.

“The only reason Facebook has ever given to explain why the misappropriation risked no harm was that the event was allegedly disclosed to the public in 2015 and no one cared,” investors’ SCOTUS brief said. But when a 2015 report exposed a data breach tied to a Ted Cruz campaign, Cambridge Analytica denied it, and the report prompted a Facebook investigation that concluded no damage had been done.

“Facebook actively misled the public about its investigation, ‘represent[ing] that no misconduct had been discovered,'” investors alleged, and “Facebook’s deception extended to its public filings with the SEC.”

According to investors, the real damage was done when the true extent of the Cambridge Analytica scandal was exposed in 2018. That caused substantial revenue losses that Facebook likely understood it was risking while allegedly leaving investors blind to those risks for years.

Investors argue that disclosure should not be required of every data breach that hits Facebook, whether it harms its business or not, but that the Cambridge Analytica data breach was significant and should have been disclosed as a material risk. The 9th Circuit agreed, holding that “publicly treating such a material adverse event as a merely hypothetical prospect can be misleading even if the event has not yet produced follow-on business harm because the company has kept the truth from the public.”

They further argued that requiring so-called overdisclosure wouldn’t trigger unwarranted litigation, as Facebook suggests, because Congress has always “given considerable attention to concerns over abusive private litigation.”

If Facebook wins, investors alleged, SCOTUS risks giving any tech company “a license to intentionally mislead investors about the occurrence of hugely material events by describing those events as purely hypothetical prospects.” Siding with Facebook would allegedly give “companies an incentive to stuff their annual reports with boilerplate, generic warnings that reveal little about the company’s actual business and to cover up events that could give rise to corporate scandals, as Facebook did here.”

Facebook argued that if the SEC is concerned about specific disclosures connected to the data breach, “the SEC can invoke the rulemaking process to impose” a requirement that companies must disclose all “past material adverse events.”

Nvidia disputes expert’s crypto data

While the Facebook case involved a bigger scandal, the Nvidia case could have bigger legal implications if Nvidia wins.

In the Nvidia case, investors argued that Nvidia CEO Jensen Huang made public statements that allegedly misled investors by downplaying how much GPU demand was tied to volatile crypto markets. To plead their case, investors relied on statements from Nvidia employees, internal documents like meeting slides, industry research, and an expert opinion crunching general market numbers and estimating that Nvidia “underreported its crypto revenues by $1.126 billion.”

Nvidia claimed it’s far more plausible that the company simply made an “honest miscalculation” while navigating a complex emerging market.

To defend against the suit, Nvidia is arguing that the Private Securities Litigation Reform Act (PSLRA) imposes “special burdens on plaintiffs seeking to bring federal securities fraud class actions” through “heightened pleading requirements” to deter frivolous lawsuits arguing fraud by hindsight.

According to Nvidia, the PSLRA requires investors to allege particular facts based on particular contents of internal Nvidia documents, which goes beyond relying on an expert opinion. The tech company has urged SCOTUS that the 9th Circuit “significantly erode[d]” the PSLRA requirements by allowing Plaintiffs to “simply” hire “an expert who manufactured data to fit their allegations.”

“They hired an expert to create data and then filed a class action alleging that Nvidia and its CEO committed securities fraud by failing to disclose the data invented by Plaintiffs’ expert,” Nvidia argued.

This allegedly “eviscerates the guardrails that Congress erected to protect the public from abusive securities litigation” and creates a “dangerous” and “easy-to-replicate ‘roadmap’ for plaintiffs to sidestep the PSLRA in this recurring context.”

“Far from serving Congress’s goal of guarding against fishing expeditions by vexatious litigants, the Ninth Circuit’s opinion declares it open season so long as a plaintiff has funding to hire an expert,” Nvidia alleged.

Investors are hoping SCOTUS will uphold the 9th Circuit’s judgment. Instead of seeing their suit as frivolous, they argued that the SEC fine over the same misconduct “undermines any suggestion that this is the type of frivolous suit that the PSLRA was meant to screen out.”

They’ve disputed Nvidia’s arguments that they’ve relied solely on a hired expert to support their claims, arguing that each fact was corroborated by employee witnesses and third-party reports.

If Nvidia wins, investors warned, the SCOTUS decision would risk harming a wide range of private securities litigation that Congress has found “‘is an indispensable tool’ for ‘defrauded investors’ to ‘recover their losses without having to rely upon government action.'”
