

Meta plans to test and tinker with X’s community notes algorithm

Meta also confirmed that it won’t be reducing visibility of misleading posts with community notes. That’s a change from the prior system, Meta noted, which had penalties associated with fact-checking.

According to Meta, X’s algorithm cannot be gamed, supposedly safeguarding “against organized campaigns” striving to manipulate notes and “influence what notes get published or what they say.” Meta claims it will rely on external research on community notes to avoid that pitfall, but as recently as last October, outside researchers had suggested that X’s Community Notes were easily sabotaged by toxic X users.

“We don’t expect this process to be perfect, but we’ll continue to improve as we learn,” Meta said.

Meta confirmed that the company plans to tweak X’s algorithm over time to develop its own version of community notes, which “may explore different or adjusted algorithms to support how Community Notes are ranked and rated.”

In a post, X’s Support account said that X was “excited” that Meta was using its “well-established, academically studied program as a foundation” for its community notes.



AI firms follow DeepSeek’s lead, create cheaper models with “distillation”

Thanks to distillation, developers and businesses can access these models’ capabilities at a fraction of the price, allowing app developers to run AI models quickly on devices such as laptops and smartphones.

Developers can use OpenAI’s platform for distillation, learning from the large language models that underpin products like ChatGPT. OpenAI’s largest backer, Microsoft, used GPT-4 to distill its Phi family of small language models as part of a commercial partnership, after investing nearly $14 billion into the company.
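
In outline, distillation trains a small “student” model to mimic the output distribution of a large “teacher.” The sketch below shows that core objective in plain NumPy; the function names, toy logits, and temperature value are illustrative assumptions, not any vendor’s actual pipeline.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's softened
    outputs -- the heart of knowledge distillation (generic sketch)."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

teacher = np.array([[4.0, 1.0, 0.5]])       # toy teacher logits
matched = distillation_loss(teacher, teacher)    # student agrees with teacher
mismatched = distillation_loss(-teacher, teacher)  # student disagrees
print(matched, mismatched)
```

Minimizing this loss pulls the student toward the teacher’s behavior without needing the teacher’s weights, which is why a smaller, cheaper model can inherit much of a larger model’s capability.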

However, the San Francisco-based start-up has said it believes DeepSeek distilled OpenAI’s models to train its competitor, a move that would be against its terms of service. DeepSeek has not commented on the claims.

While distillation can be used to create high-performing models, experts say those models are more limited than their larger counterparts.

“Distillation presents an interesting trade-off; if you make the models smaller, you inevitably reduce their capability,” said Ahmed Awadallah of Microsoft Research, who said a distilled model can be designed to be very good at summarising emails, for example, “but it really would not be good at anything else.”

David Cox, vice-president for AI models at IBM Research, said most businesses do not need a massive model to run their products, and distilled ones are powerful enough for purposes such as customer service chatbots or running on smaller devices like phones.

“Any time you can [make it less expensive] and it gives you the right performance you want, there is very little reason not to do it,” he added.

That presents a challenge to the business models of leading AI firms. Distilled models, even when built from the offerings of companies like OpenAI, cost far less to create and run, and therefore generate less revenue. Model-makers like OpenAI often charge less for the use of distilled models because they require less computational load.



Meta claims torrenting pirated books isn’t illegal without proof of seeding

Just because Meta admitted to torrenting a dataset of pirated books for AI training purposes, that doesn’t necessarily mean that Meta seeded the file after downloading it, the social media company claimed in a court filing this week.

Evidence instead shows that Meta “took precautions not to ‘seed’ any downloaded files,” Meta’s filing said. Seeding refers to sharing a torrented file after the download completes, and because there’s allegedly no proof of such “seeding,” Meta insisted that authors cannot prove Meta shared the pirated books with anyone during the torrenting process.

Whether or not Meta actually seeded the pirated books could make a difference in a copyright lawsuit from book authors including Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates. Authors had previously alleged that Meta unlawfully copied and distributed their works through AI outputs—an increasingly common complaint that so far has barely been litigated. But Meta’s admission to torrenting appears to add a more straightforward claim: unlawful distribution of copyrighted works through illegal torrenting, an area where case law has long been established.

Authors have alleged that “Meta deliberately engaged in one of the largest data piracy campaigns in history to acquire text data for its LLM training datasets, torrenting and sharing dozens of terabytes of pirated data that altogether contain many millions of copyrighted works.” Separate from their copyright infringement claims opposing Meta’s AI training on pirated copies of their books, authors alleged that Meta torrenting the dataset was “independently illegal” under California’s Computer Data Access and Fraud Act (CDAFA), which allegedly “prevents the unauthorized taking of data, including copyrighted works.”

Meta, however, is hoping to convince the court that torrenting is not in and of itself illegal, but is, rather, a “widely-used protocol to download large files.” According to Meta, the decision to download the pirated books dataset from pirate libraries like LibGen and Z-Library was simply a move to access “data from a ‘well-known online repository’ that was publicly available via torrents.”



Arm to start making server CPUs in-house

Cambridge-headquartered Arm has more than doubled in value to $160 billion since it listed on Nasdaq in 2023, carried higher by explosive investor interest in AI. Arm’s partnerships with Nvidia and Amazon have driven its rapid growth in the data centers that power AI assistants from OpenAI, Meta, and Anthropic.

Meta is the latest big tech company to turn to Arm for server chips, displacing those traditionally provided by Intel and AMD.

During last month’s earnings call, Meta’s finance chief Susan Li said it would be “extending our custom silicon efforts to [AI] training workloads” to drive greater efficiency and performance by tuning its chips to its particular computing needs.

Meanwhile, an Arm-produced chip is also likely to eventually play a role in Sir Jony Ive’s secretive plans to build a new kind of AI-powered personal device, which is a collaboration between the iPhone designer’s firm LoveFrom, OpenAI’s Sam Altman, and SoftBank.

Arm’s designs have been used in more than 300 billion chips, including almost all of the world’s smartphones. Its power-efficient designs have made its CPUs, the general-purpose workhorse that sits at the heart of any computer, an increasingly attractive alternative to Intel’s chips in PCs and servers at a time when AI is making data centers much more energy-intensive.

Arm, which started out in a converted turkey barn in Cambridgeshire 35 years ago, became ubiquitous in the mobile market by licensing its designs to Apple for its iPhone chips, as well as Android suppliers such as Qualcomm and MediaTek. Maintaining its unique position in the center of the fiercely competitive mobile market has required a careful balancing act for Arm.

But SoftBank founder Masayoshi Son has long pushed for Arm to make more money from its intellectual property. Under Rene Haas, who became chief executive in 2022, Arm’s business model began to evolve, with a focus on driving higher royalties from customers as the company designs more of the building blocks needed to make a chip.

Going a step further by building and selling its own complete chip is a bold move by Haas that risks putting it on a collision course with customers such as Qualcomm, which is already locked in a legal battle with Arm over licensing terms, and Nvidia, the world’s most valuable chipmaker.

Arm, SoftBank, and Meta declined to comment.

Additional reporting by Hannah Murphy.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



“Torrenting from a corporate laptop doesn’t feel right”: Meta emails unsealed

Emails discussing torrenting show that Meta knew it was “illegal,” authors alleged. And Meta engineer Nikolay Bashlykov’s warnings seemingly landed on deaf ears, with authors alleging that evidence showed Meta chose to instead hide its torrenting as best it could while downloading and seeding terabytes of data from multiple shadow libraries as recently as April 2024.

Meta allegedly concealed seeding

Supposedly, Meta tried to conceal the seeding by not using Facebook servers while downloading the dataset to “avoid” the “risk” of anyone “tracing back the seeder/downloader” from Facebook servers, an internal message from Meta researcher Frank Zhang said, while describing the work as in “stealth mode.” Meta also allegedly modified settings “so that the smallest amount of seeding possible could occur,” a Meta executive in charge of project management, Michael Clark, said in a deposition.

Now that new information has come to light, authors claim that Meta staff involved in the decision to torrent LibGen must be deposed again, because allegedly the new facts “contradict prior deposition testimony.”

Mark Zuckerberg, for example, claimed to have no involvement in decisions to use LibGen to train AI models. But unredacted messages show the “decision to use LibGen occurred” after “a prior escalation to MZ,” authors alleged.

Meta did not immediately respond to Ars’ request for comment and has maintained throughout the litigation that AI training on LibGen was “fair use.”

However, Meta has previously addressed its torrenting in a motion to dismiss filed last month, telling the court that “plaintiffs do not plead a single instance in which any part of any book was, in fact, downloaded by a third party from Meta via torrent, much less that Plaintiffs’ books were somehow distributed by Meta.”

While Meta may be confident in its legal strategy despite the new torrenting wrinkle, the social media company has seemingly complicated its case by allowing authors to expand the distribution theory that’s key to winning a direct copyright infringement claim beyond just claiming that Meta’s AI outputs unlawfully distributed their works.

As limited discovery on Meta’s seeding now proceeds, Meta is not fighting the seeding aspect of the direct copyright infringement claim at this time, telling the court that it plans to “set… the record straight and debunk… this meritless allegation on summary judgment.”



AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt


Making AI crawlers squirm

Attackers explain how an anti-spam defense became an AI weapon.

Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.

And it wasn’t the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit’s CEO called out all AI companies whose crawlers he said were “a pain in the ass to block,” despite the tech industry otherwise agreeing to respect “no scraping” robots.txt rules.
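
robots.txt itself is just a plain-text file of per-crawler rules, and a well-behaved crawler is expected to check it before fetching a page. The sketch below uses Python’s standard urllib.robotparser to show that check; the crawler name and rules are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that shuts out one AI crawler while
# allowing everyone else (user-agent names are illustrative).
robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler calls can_fetch() before every request.
print(rp.can_fetch("GPTBot", "https://example.com/article"))
print(rp.can_fetch("SomeBrowser", "https://example.com/article"))
```

Nothing enforces this check, which is the complaint at the heart of the story: the protocol works only as long as crawlers volunteer to honor it.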

Watching the controversy unfold was a software developer whom Ars has granted anonymity to discuss his development of malware (we’ll call him Aaron). Shortly after noticing Facebook’s crawler exceed 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers “clobbering” websites, one he told Ars he hoped would give “teeth” to robots.txt.

Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
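
Markov babble of the kind described above can be produced by a simple word-level Markov chain: record which words follow which in some corpus, then walk those transitions at random, yielding text that is statistically plausible but meaningless. A minimal sketch (the corpus and function names are made up for illustration; this is not Nepenthes’ actual code):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, n, seed=0):
    """Emit n words of plausible-looking nonsense from the chain."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        followers = chain.get(out[-1])
        if not followers:          # dead end: restart anywhere
            followers = list(chain)
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the crawler reads the page and the page links to the crawler"
chain = build_chain(corpus)
print(babble(chain, "the", 12))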

Tarpits were originally designed to waste spammers’ time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon. As of this writing, Aaron confirmed that Nepenthes can effectively trap all the major web crawlers. So far, only OpenAI’s crawler has managed to escape.

It’s unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft’s director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI “has been quite vigilant” and excels at detecting the “first signs of data poisoning attempts.”

Despite these efforts, he concluded that data poisoning was “a serious threat to machine learning models.” And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.

“A link to a Nepenthes location from your site will flood out valid URLs within your site’s domain name, making it unlikely the crawler will access real content,” a Nepenthes explainer reads.
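
The “flooding” idea can be sketched as a handler that derives an endless, deterministic set of fake internal links from whatever path the crawler requests, so every generated page leads only to more generated pages. This is an illustrative toy under assumed names, not Nepenthes’ implementation:

```python
import hashlib

def maze_page(path, n_links=8):
    """Build a fake HTML page whose links are derived from a hash of
    the requested path: deterministic, unique per page, and endless,
    since every linked path produces yet another page of links."""
    digest = hashlib.sha256(path.encode()).hexdigest()
    links = [f"/maze/{digest[i:i + 8]}" for i in range(0, 8 * n_links, 8)]
    body = "".join(f'<a href="{link}">{link}</a>\n' for link in links)
    return f"<html><body>\n{body}</body></html>"

print(maze_page("/maze/start"))
```

Because the links are hash-derived rather than stored, the maze costs almost nothing to serve while presenting the crawler with an apparently unbounded site.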

The only AI company that responded to Ars’ request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.

“We’re aware of efforts to disrupt AI web crawlers,” OpenAI’s spokesperson said. “We design our systems to be resilient while respecting robots.txt and standard web practices.”

But to Aaron, the fight is not about winning. Instead, it’s about resisting the AI industry’s further decay of the Internet with tech that no one asked for, like chatbots that replace customer service agents or the rise of inaccurate AI search summaries. By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies’ AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.

“Ultimately, it’s like the Internet that I grew up on and loved is long gone,” Aaron told Ars. “I’m just fed up, and you know what? Let’s fight back, even if it’s not successful. Be indigestible. Grow spikes.”

Nepenthes instantly inspires another tarpit

Nepenthes was released in mid-January and was instantly popularized beyond Aaron’s expectations after tech journalist Cory Doctorow boosted a Mastodon post by tech commentator Jürgen Geuter praising the novel AI attack method. Very quickly, Aaron was shocked to see engagement with Nepenthes skyrocket.

“That’s when I realized, ‘oh this is going to be something,'” Aaron told Ars. “I’m kind of shocked by how much it’s blown up.”

It’s hard to tell how widely Nepenthes has been deployed. Site owners are discouraged from flagging when the malware has been deployed, forcing crawlers to face unknown “consequences” if they ignore robots.txt instructions.

Aaron told Ars that while “a handful” of site owners have reached out and “most people are being quiet about it,” his web server logs indicate that people are already deploying the tool. Likely, site owners want to protect their content, deter scraping, or mess with AI companies.

When software developer and hacker Gergely Nagy, who goes by the handle “algernon” online, saw Nepenthes, he was delighted. At that time, Nagy told Ars that nearly all of his server’s bandwidth was being “eaten” by AI crawlers.

Already blocking scraping and attempting to poison AI models through a simpler method, Nagy took his defense method further and created his own tarpit, Iocaine. He told Ars the tarpit immediately killed off about 94 percent of bot traffic to his site, which was primarily from AI crawlers. Soon, social media discussion drove users to inquire about Iocaine deployment, including not just individuals but also organizations wanting to take stronger steps to block scraping.

Iocaine takes ideas (not code) from Nepenthes, but it’s more intent on using the tarpit to poison AI models. Nagy used a reverse proxy to trap crawlers in an “infinite maze of garbage” in an attempt to slowly poison their data collection as much as possible for daring to ignore robots.txt.

Taking its name from “one of the deadliest poisons known to man” from The Princess Bride, Iocaine is jokingly depicted as the “deadliest poison known to AI.” While there’s no way of validating that claim, Nagy’s motto is that the more poisoning attacks that are out there, “the merrier.” He told Ars that his primary reasons for building Iocaine were to help rights holders wall off valuable content and stop AI crawlers from crawling with abandon.

Tarpits aren’t perfect weapons against AI

Running malware like Nepenthes can burden servers, too. Aaron likened the cost of running Nepenthes to running a cheap virtual machine on a Raspberry Pi, and Nagy said that serving crawlers Iocaine costs about the same as serving his website.

But Aaron told Ars that Nepenthes wasting resources is the chief objection he’s seen preventing its deployment. Critics fear that deploying Nepenthes widely will not only burden their servers but also increase the costs of powering all that AI crawling for nothing.

“That seems to be what they’re worried about more than anything,” Aaron told Ars. “The amount of power that AI models require is already astronomical, and I’m making it worse. And my view of that is, OK, so if I do nothing, AI models, they boil the planet. If I switch this on, they boil the planet. How is that my fault?”

Aaron also defends against this criticism by suggesting that a broader impact could slow down AI investment enough to possibly curb some of that energy consumption. Perhaps due to the resistance, AI companies will be pushed to seek permission first to scrape or agree to pay more content creators for training on their data.

“Any time one of these crawlers pulls from my tarpit, it’s resources they’ve consumed and will have to pay hard cash for, but, being bullshit, the money [they] have spent to get it won’t be paid back by revenue,” Aaron posted, explaining his tactic online. “It effectively raises their costs. And seeing how none of them have turned a profit yet, that’s a big problem for them. The investor money will not continue forever without the investors getting paid.”

Nagy agrees that the more anti-AI attacks there are, the greater the potential is for them to have an impact. And by releasing Iocaine, Nagy showed that social media chatter about new attacks can inspire new tools within a few days. Marcus Butler, an independent software developer, similarly built his poisoning attack called Quixotic over a few days, he told Ars. Soon afterward, he received messages from others who built their own versions of his tool.

Butler is not in the camp of wanting to destroy AI. He told Ars that he doesn’t think “tools like Quixotic (or Nepenthes) will ‘burn AI to the ground.'” Instead, he takes a more measured stance, suggesting that “these tools provide a little protection (a very little protection) against scrapers taking content and, say, reposting it or using it for training purposes.”

But for a certain sect of Internet users, every little bit of protection seemingly helps. Geuter linked Ars to a list of tools bent on sabotaging AI. Ultimately, he expects that tools like Nepenthes are “probably not gonna be useful in the long run” because AI companies can likely detect and drop gibberish from training data. But Nepenthes represents a sea change, Geuter told Ars, providing a useful tool for people who “feel helpless” in the face of endless scraping and showing that “the story of there being no alternative or choice is false.”

Criticism of tarpits as AI weapons

Critics debating Nepenthes’ utility on Hacker News suggested that most AI crawlers could easily avoid tarpits like Nepenthes, with one commenter describing the attack as being “very crawler 101.” Aaron said that was his “favorite comment” because if tarpits are considered elementary attacks, he has “2 million lines of access log that show that Google didn’t graduate.”

But efforts to poison AI or waste AI resources don’t just mess with the tech industry. Governments globally are seeking to leverage AI to solve societal problems, and attacks on AI’s resilience seemingly threaten to disrupt that progress.

Nathan VanHoudnos is a senior AI security research scientist in the federally funded CERT Division of the Carnegie Mellon University Software Engineering Institute, which partners with academia, industry, law enforcement, and government to “improve the security and resilience of computer systems and networks.” He told Ars that new threats like tarpits seem to replicate a problem that AI companies are already well aware of: “that some of the stuff that you’re going to download from the Internet might not be good for you.”

“It sounds like these tarpit creators just mainly want to cause a little bit of trouble,” VanHoudnos said. “They want to make it a little harder for these folks to get” the “better or different” data “that they’re looking for.”

VanHoudnos co-authored a paper on “Counter AI” last August, pointing out that attackers like Aaron and Nagy are limited in how much they can mess with AI models. They may have “influence over what training data is collected but may not be able to control how the data are labeled, have access to the trained model, or have access to the AI system,” the paper said.

Further, AI companies are increasingly turning to the deep web for unique data, so any efforts to wall off valuable content with tarpits may be coming right when crawling on the surface web starts to slow, VanHoudnos suggested.

But according to VanHoudnos, AI crawlers are also “relatively cheap,” and companies may deprioritize fighting against new attacks on crawlers if “there are higher-priority assets” under attack. And tarpitting “does need to be taken seriously because it is a tool in a toolkit throughout the whole life cycle of these systems. There is no silver bullet, but this is an interesting tool in a toolkit,” he said.

Offering a choice to abstain from AI training

Aaron told Ars that he never intended Nepenthes to be a major project but that he occasionally puts in work to fix bugs or add new features. He said he’d consider working on integrations for real-time reactions to crawlers if there was enough demand.

Currently, Aaron predicts that Nepenthes might be most attractive to rights holders who want AI companies to pay to scrape their data. And many people seem enthusiastic about using it to reinforce robots.txt. But “some of the most exciting people are in the ‘let it burn’ category,” Aaron said. These people are drawn to tools like Nepenthes as an act of rebellion against AI making the Internet less useful and enjoyable for users.

Geuter told Ars that he considers Nepenthes “more of a sociopolitical statement than really a technological solution (because the problem it’s trying to address isn’t purely technical, it’s social, political, legal, and needs way bigger levers).”

To Geuter, a computer scientist who has been writing about the social, political, and structural impact of tech for two decades, AI is the “most aggressive” example of “technologies that are not done ‘for us’ but ‘to us.'”

“It feels a bit like the social contract that society and the tech sector/engineering have had (you build useful things, and we’re OK with you being well-off) has been canceled from one side,” Geuter said. “And that side now wants to have its toy eat the world. People feel threatened and want the threats to stop.”

As AI evolves, so do attacks, with one 2021 study showing that increasingly stronger data poisoning attacks, for example, were able to break data sanitization defenses. Whether these attacks can ever do meaningful destruction or not, Geuter sees tarpits as a “powerful symbol” of the resistance that Aaron and Nagy readily joined.

“It’s a great sign to see that people are challenging the notion that we all have to do AI now,” Geuter said. “Because we don’t. It’s a choice. A choice that mostly benefits monopolists.”

Tarpit creators like Nagy will likely be watching to see if poisoning attacks continue growing in sophistication. On the Iocaine site—which, yes, is protected from scraping by Iocaine—he posted this call to action: “Let’s make AI poisoning the norm. If we all do it, they won’t have anything to crawl.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Reddit won’t interfere with users revolting against X with subreddit bans

A Reddit spokesperson told Ars that decisions to ban or not ban X links are user-driven. Subreddit members are allowed to suggest and institute subreddit rules, they added.

“Notably, many Reddit communities also prohibit Reddit links,” the Reddit representative pointed out. They noted that Reddit as a company doesn’t currently have any ban on links to X.

A ban against links to an entire platform isn’t outside of the ordinary for Reddit. Numerous subreddits ban social media links, Reddit’s spokesperson said. r/EarthPorn, a subreddit for landscape photography, for example, doesn’t allow website links because all posts “must be static images,” per the subreddit’s official rules. r/AskReddit, meanwhile, only allows for questions asked in the title of a Reddit post and doesn’t allow for use of the text box, including for sharing links.

“Reddit has a longstanding commitment to freedom of speech and freedom of association,” Reddit’s spokesperson said. They added that any person is free to make or moderate their own community. Those unsatisfied with a forum about Seahawks football that doesn’t have X links could feel free to make their own subreddit. However, some of the subreddits considering X bans, like r/MadeMeSmile, already have millions of members.

Meta bans also under discussion

As 404 Media noted, some Redditors are also pushing to block content from Facebook, Instagram, and other Meta properties in response to new Donald Trump-friendly policies instituted by owner Mark Zuckerberg, like Meta killing diversity programs and axing third-party fact-checkers.



Anthropic chief says AI could surpass “almost all humans at almost everything” shortly after 2027

He then shared his concerns about how human-level AI models and robotics that are capable of replacing all human labor may require a complete re-think of how humans value both labor and themselves.

“We’ve recognized that we’ve reached the point as a technological civilization where the idea, there’s huge abundance and huge economic value, but the idea that the way to distribute that value is for humans to produce economic labor, and this is where they feel their sense of self worth,” he added. “Once that idea gets invalidated, we’re all going to have to sit down and figure it out.”

The eye-catching comments, similar to comments about AGI made recently by OpenAI CEO Sam Altman, come as Anthropic negotiates a $2 billion funding round that would value the company at $60 billion. Amodei disclosed that Anthropic’s revenue multiplied tenfold in 2024.

Amodei distances himself from “AGI” term

Even with his dramatic predictions, Amodei distanced himself from “artificial general intelligence” (AGI), the term Altman favors for this advanced labor-replacing AI, calling it a marketing term in a separate CNBC interview at the same event in Switzerland.

Instead, he prefers to describe future AI systems as a “country of geniuses in a data center,” he told CNBC. Amodei wrote in an October 2024 essay that such systems would need to be “smarter than a Nobel Prize winner across most relevant fields.”

On Monday, Google announced an additional $1 billion investment in Anthropic, bringing its total commitment to $3 billion. This follows Amazon’s $8 billion investment over the past 18 months. Amazon plans to integrate Claude models into future versions of its Alexa speaker.



Meta to cut 5% of employees deemed unfit for Zuckerberg’s AI-fueled future

Anticipating that 2025 will be an “intense year” requiring rapid innovation, Mark Zuckerberg reportedly announced that Meta would be cutting 5 percent of its workforce—targeting “lowest performers.”

Bloomberg reviewed the internal memo explaining the cuts, which was posted to Meta’s internal Workplace forum Tuesday. In it, Zuckerberg confirmed that Meta was shifting its strategy to “move out low performers faster” so that Meta can hire new talent to fill those vacancies this year.

“I’ve decided to raise the bar on performance management,” Zuckerberg said. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle.”

Cuts will likely impact more than 3,600 employees, as Meta’s most recent headcount in September totaled about 72,000 employees. It may not be as straightforward as letting go anyone with an unsatisfactory performance review, as Zuckerberg said that any employee not currently meeting expectations could be spared if Meta is “optimistic about their future performance,” The Wall Street Journal reported.

Any employees affected will be notified by February 10 and receive “generous severance,” Zuckerberg’s memo promised.

This is the biggest round of cuts at Meta since 2023, when Meta laid off 10,000 employees during what Zuckerberg dubbed the “year of efficiency.” Those layoffs followed a prior round where 11,000 lost their jobs and Zuckerberg realized that “leaner is better.” He told employees in 2023 that a “surprising result” from reducing the workforce was “that many things have gone faster.”

“A leaner org will execute its highest priorities faster,” Zuckerberg wrote in 2023. “People will be more productive, and their work will be more fun and fulfilling. We will become an even greater magnet for the most talented people. That’s why in our Year of Efficiency, we are focused on canceling projects that are duplicative or lower priority and making every organization as lean as possible.”


mastodon’s-founder-cedes-control,-refuses-to-become-next-musk-or-zuckerberg

Mastodon’s founder cedes control, refuses to become next Musk or Zuckerberg

And perhaps in a nod to Meta’s recent changes, Mastodon also vowed to “invest deeply in trust and safety” and ensure “everyone, especially marginalized communities,” feels “safe” on the platform.

To become a more user-focused paradise of “resilient, governable, open and safe digital spaces,” Mastodon is going to need a lot more funding. The blog called for donations to help fund an annual operating budget of $5.1 million (5 million euros) in 2025. That’s a massive leap from the $152,476 (149,400 euros) total operating expenses Mastodon reported in 2023.

Other social networks wary of EU regulations

Mastodon has decided to continue basing its operations in Europe, while still maintaining a separate US-based nonprofit entity as a “fundraising hub,” the blog said.

It will take time, Mastodon said, to “select the appropriate jurisdiction and structure in Europe” before Mastodon can then “determine which other (subsidiary) legal structures are needed to support operations and sustainability.”

While Mastodon is carefully getting re-settled as a nonprofit in Europe, Zuckerberg this week went on Joe Rogan’s podcast to call on Donald Trump to help US tech companies fight European Union fines, Politico reported.

Some critics suggest the recent policy changes on Meta platforms were intended to win Trump’s favor, partly to get Trump on Meta’s side in the fight against the EU’s strict digital laws. According to France24, Musk’s recent combativeness with EU officials suggests Musk might team up with Zuckerberg in that fight (unlike that cage fight pitting the wealthy tech titans against each other that never happened).

Experts told France24 that EU officials may “perhaps wrongly” already be fearful about ruffling Trump’s feathers by targeting his tech allies and would likely need to use the “full legal arsenal” of EU digital laws to “stand up to Big Tech” once Trump’s next term starts.

As Big Tech prepares to continue battling EU regulators, Mastodon appears to be taking a different route, laying roots in Europe and “establishing the appropriate governance and leadership frameworks that reflect the nature and purpose of Mastodon as a whole” and “responsibly serve the community,” its blog said.

“Our core mission remains the same: to create the tools and digital spaces where people can build authentic, constructive online communities free from ads, data exploitation, manipulative algorithms, or corporate monopolies,” Mastodon’s blog said.

Mastodon’s founder cedes control, refuses to become next Musk or Zuckerberg Read More »

meta-kills-diversity-programs,-claiming-dei-has-become-“too-charged”

Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta's vice president of human resources, Janelle Gale, told Meta employees that the shift came because the "legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing."

It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.

Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.

Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing Meta's hate speech policies, The Intercept reported; Zuckerberg characterized the prior restrictions as "out of touch with mainstream discourse." Those changes included allowing anti-trans slurs that were previously banned, as well as permitting women to be called "property" and gay people to be called "mentally ill," Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into "unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation" and alleged that Meta appeared to be willing to "normalize anti-LGBTQ hatred for profit."

Meta kills diversity programs, claiming DEI has become “too charged” Read More »

meta-axes-third-party-fact-checkers-in-time-for-second-trump-term

Meta axes third-party fact-checkers in time for second Trump term


Zuckerberg says Meta will “work with President Trump” to fight censorship.

Meta CEO Mark Zuckerberg during the Meta Connect event in Menlo Park, California on September 25, 2024.  Credit: Getty Images | Bloomberg

Meta announced today that it’s ending the third-party fact-checking program it introduced in 2016, and will rely instead on a Community Notes approach similar to what’s used on Elon Musk’s X platform.

The end of third-party fact-checking and related changes to Meta policies could help the company make friends in the Trump administration and in governments of conservative-leaning states that have tried to impose legal limits on content moderation. The operator of Facebook and Instagram announced the changes in a blog post and a video message recorded by CEO Mark Zuckerberg.

“Governments and legacy media have pushed to censor more and more. A lot of this is clearly political,” Zuckerberg said. He said the recent elections “feel like a cultural tipping point toward once again prioritizing speech.”

“We’re going to get rid of fact-checkers and replace them with Community Notes, similar to X, starting in the US,” Zuckerberg said. “After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth. But the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US.”

Meta says the soon-to-be-discontinued fact-checking program includes over 90 third-party organizations that evaluate posts in over 60 languages. The US-based fact-checkers are AFP USA, Check Your Fact, Factcheck.org, Lead Stories, PolitiFact, Science Feedback, Reuters Fact Check, TelevisaUnivision, The Dispatch, and USA Today.

The independent fact-checkers rate the accuracy of posts and apply ratings such as False, Altered, Partly False, Missing Context, Satire, and True. Meta adds notices to posts rated as false or misleading and notifies users before they try to share the content or if they shared it in the past.

Meta: Experts “have their own biases”

In the blog post that accompanied Zuckerberg’s video message, Chief Global Affairs Officer Joel Kaplan said the 2016 decision to use independent fact-checkers seemed like “the best and most reasonable choice at the time… The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.”

But experts “have their own biases and perspectives,” and the program imposed “intrusive labels and reduced distribution” of content “that people would understand to be legitimate political speech and debate,” Kaplan wrote.

The X-style Community Notes system lets the community “decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see… Just like they do on X, Community Notes [on Meta sites] will require agreement between people with a range of perspectives to help prevent biased ratings,” Kaplan wrote.
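Kaplan's "agreement between people with a range of perspectives" describes what X's openly published ranking code calls bridging: ratings are fit with a small matrix-factorization model, and a note is scored by the helpfulness that remains after rater viewpoint is factored out, so one-sided enthusiasm doesn't count. Below is a minimal, illustrative sketch of that idea, not Meta's or X's actual code; the function name, hyperparameters, and data shape are all assumptions for the example.

```python
# Illustrative sketch (NOT Meta's or X's real implementation) of bridging-based
# note ranking: model each rating as mean + rater offset + note offset +
# viewpoint interaction, then score notes by the offset that survives once
# viewpoint-driven agreement/disagreement is absorbed by the factors.
import random

def rank_notes(ratings, steps=2000, lr=0.05, reg=0.1, seed=0):
    """ratings: list of (rater_id, note_id, value), value in {0.0, 1.0}
    (not helpful / helpful). Returns {note_id: bridged helpfulness score}."""
    rng = random.Random(seed)
    raters = {u for u, _, _ in ratings}
    notes = {n for _, n, _ in ratings}
    mu = sum(v for _, _, v in ratings) / len(ratings)
    b_u = {u: 0.0 for u in raters}                      # rater leniency
    b_n = {n: 0.0 for n in notes}                       # note helpfulness
    f_u = {u: rng.uniform(-0.1, 0.1) for u in raters}   # rater viewpoint factor
    f_n = {n: rng.uniform(-0.1, 0.1) for n in notes}    # note viewpoint factor
    for _ in range(steps):
        u, n, v = ratings[rng.randrange(len(ratings))]  # SGD on one rating
        err = v - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
        b_u[u] += lr * (err - reg * b_u[u])
        # Regularize the note intercept hardest: only agreement across
        # viewpoints, not one faction's enthusiasm, should push it up.
        b_n[n] += lr * (err - 5 * reg * b_n[n])
        f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                          f_n[n] + lr * (err * f_u[u] - reg * f_n[n]))
    return b_n
```

In this toy setup, a note rated helpful by raters on both "sides" ends up with a higher score than a note rated helpful by only one faction, because the polarized note's ratings are explained away by the viewpoint factors rather than its intercept.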

The end of third-party fact-checking will be implemented in the US before other countries. Meta will also move its internal trust and safety and content moderation teams out of California, Zuckerberg said. “Our US-based content review is going to be based in Texas. As we work to promote free expression, I think it will help us build trust to do this work in places where there is less concern about the bias of our teams,” he said. Meta will continue to take “legitimately bad stuff” like drugs, terrorism, and child exploitation “very seriously,” Zuckerberg said.

Zuckerberg pledges to work with Trump

Meta will “phase in a more comprehensive community notes system” over the next couple of months, Zuckerberg said. Meta, which donated $1 million to Trump’s inaugural fund, will also “work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more,” Zuckerberg said.

Zuckerberg said that “Europe has an ever-increasing number of laws institutionalizing censorship,” that “Latin American countries have secret courts that can quietly order companies to take things down,” and that “China has censored apps from even working in the country.” Meta needs “the support of the US government” to push back against other countries’ content-restriction orders, he said.

“That’s why it’s been so difficult over the past four years when even the US government has pushed for censorship,” Zuckerberg said, referring to the Biden administration. “By going after US and other American companies, it has emboldened other governments to go even further. But now we have the opportunity to restore free expression, and I am excited to take it.”

Brendan Carr, Trump’s pick to lead the Federal Communications Commission, praised Meta’s policy changes. Carr has promised to shift the FCC’s focus from regulating telecom companies to cracking down on Big Tech and media companies that he alleges are part of a “censorship cartel.”

“President Trump’s resolute and strong support for the free speech rights of everyday Americans is already paying dividends,” Carr wrote on X today. “Facebook’s announcements is [sic] a good step in the right direction. I look forward to monitoring these developments and their implementation. The work continues until the censorship cartel is completely dismantled and destroyed.”

Group: Meta is “saying the truth doesn’t matter”

Meta’s changes were criticized by Public Citizen, a nonprofit advocacy group founded by Ralph Nader. “Asking users to fact-check themselves is tantamount to Meta saying the truth doesn’t matter,” Public Citizen co-president Lisa Gilbert said. “Misinformation will flow more freely with this policy change, as we cannot assume that corrections will be made when false information proliferates. The American people deserve accurate information about our elections, health risks, the environment, and much more.”

Media advocacy group Free Press said that “Zuckerberg is one of many billionaires who are cozying up to dangerous demagogues like Trump and pushing initiatives that favor their bottom lines at the expense of everything and everyone else.” Meta appears to be abandoning its “responsibility to protect its many users, and align[ing] the company more closely with an incoming president who’s a known enemy of accountability,” Free Press Senior Counsel Nora Benavidez said.

X’s Community Notes system was criticized in a recent report by the Center for Countering Digital Hate (CCDH), which said it “found that 74 percent of accurate community notes on US election misinformation never get shown to users.” (X previously sued the CCDH, but the lawsuit was dismissed by a federal judge.)

Previewing other changes, Zuckerberg said that Meta will eliminate content restrictions “that are just out of touch with mainstream discourse” and change how it enforces policies “to reduce the mistakes that account for the vast majority of censorship on our platforms.”

“We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower severity violations, we’re going to rely on someone reporting an issue before we take action,” he said. “The problem is the filters make mistakes, and they take down a lot of content that they shouldn’t. So by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms.”

Meta to relax filters, recommend more political content

Zuckerberg said Meta will re-tune content filters “to require much higher confidence before taking down content.” He said this means Meta will “catch less bad stuff” but will “also reduce the number of innocent people’s posts and accounts that we accidentally take down.”

Meta has “built a lot of complex systems to moderate content,” he noted. Even if these systems “accidentally censor just 1 percent of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship,” he said.

Kaplan wrote that Meta has censored too much harmless content and that “too many people find themselves wrongly locked up in ‘Facebook jail.'”

“In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content,” Kaplan wrote. “This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable.”

Another upcoming change is that Meta will recommend more political posts. “For a while, the community asked to see less politics because it was making people stressed, so we stopped recommending these posts,” Zuckerberg said. “But it feels like we’re in a new era now, and we’re starting to get feedback that people want to see this content again, so we’re going to start phasing this back into Facebook, Instagram, and Threads while working to keep the communities friendly and positive.”

Photo of Jon Brodkin

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Meta axes third-party fact-checkers in time for second Trump term Read More »