Twitter

Everything that could go wrong with X’s new AI-written community notes


X says AI can supercharge community notes, but that comes with obvious risks.

Elon Musk’s X arguably revolutionized social media fact-checking by rolling out “community notes,” which created a system to crowdsource diverse views on whether certain X posts were trustworthy or not.

But now, the platform plans to allow AI to write community notes, and that could potentially ruin whatever trust X users have in the fact-checking system, a risk that X has fully acknowledged.

In a research paper, X described the initiative as an “upgrade” while explaining everything that could possibly go wrong with AI-written community notes.

In an ideal world, X envisions, AI agents would speed up and increase the number of community notes added to incorrect posts, ramping up fact-checking efforts platform-wide. Each AI-written note would be rated by human reviewers, providing feedback that makes the AI agents better at writing notes the longer this feedback loop runs. As the agents improve, human reviewers would be freed up to focus on the more nuanced fact-checking that AI cannot quickly address, such as posts requiring niche expertise or social awareness. Together, if all goes well, human and AI reviewers could transform not just X’s fact-checking, X’s paper suggested, but also potentially provide “a blueprint for a new form of human-AI collaboration in the production of public knowledge.”
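The loop the paper envisions can be sketched in a few lines of toy Python. Everything here is a stand-in (a numeric “skill” instead of an LLM, a fixed approval threshold instead of real raters); it only illustrates how repeated rating cycles are supposed to improve the note writer, not X’s actual pipeline.

```python
def draft_note(agent_skill: float) -> float:
    """Return the quality of a note drafted by an agent of the given skill."""
    return agent_skill


def human_rating(note_quality: float) -> float:
    """Raters approve (1.0) notes at or above a helpfulness threshold."""
    return 1.0 if note_quality >= 0.5 else 0.0


def feedback_loop(initial_skill: float, rounds: int, lr: float = 0.1) -> float:
    """Run the draft -> rate -> learn cycle and return the final skill."""
    skill = initial_skill
    for _ in range(rounds):
        reward = human_rating(draft_note(skill))
        # A rejected note (reward 0) is a strong corrective signal; an
        # accepted note still nudges the agent slightly upward.
        skill = min(1.0, skill + lr * (1.0 - reward) + 0.01 * reward)
    return skill
```

The key assumption baked into the design, and the one the paper questions, is that the human rating actually rewards accuracy rather than mere polish.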

Among key questions that remain, however, is a big one: X isn’t sure if AI-written notes will be as accurate as notes written by humans. Complicating that further, it seems likely that AI agents could generate “persuasive but inaccurate notes,” which human raters might rate as helpful since AI is “exceptionally skilled at crafting persuasive, emotionally resonant, and seemingly neutral notes.” That could disrupt the feedback loop, watering down community notes and making the whole system less trustworthy over time, X’s research paper warned.

“If rated helpfulness isn’t perfectly correlated with accuracy, then highly polished but misleading notes could be more likely to pass the approval threshold,” the paper said. “This risk could grow as LLMs advance; they could not only write persuasively but also more easily research and construct a seemingly robust body of evidence for nearly any claim, regardless of its veracity, making it even harder for human raters to spot deception or errors.”
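The risk the paper describes, helpfulness ratings tracking polish rather than accuracy, can be made concrete with a toy simulation. The correlation parameter rho below is purely an assumption for illustration, not anything X has measured:

```python
import random


def approved_accuracy(rho: float, n: int = 10_000, threshold: float = 1.0,
                      seed: int = 0) -> float:
    """Fraction of approved notes that are actually accurate, when accuracy
    correlates with rated 'polish' at level rho (rho = 1 means perfectly)."""
    rng = random.Random(seed)
    approved = accurate = 0
    for _ in range(n):
        polish = rng.gauss(0.0, 1.0)
        # Accuracy shares only a rho-sized component with polish.
        accuracy = rho * polish + (1.0 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0)
        if polish > threshold:  # raters approve the most polished notes
            approved += 1
            if accuracy > 0.0:
                accurate += 1
    return accurate / approved
```

With rho near 1, almost every note that clears the approval threshold is accurate; lower the correlation and the same threshold increasingly lets through polished but inaccurate notes, which is exactly the failure mode the paper flags.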

X is already facing criticism over its AI plans. On Tuesday, former United Kingdom technology minister Damian Collins accused X of building a system that could allow “the industrial manipulation of what people see and decide to trust” on a platform with more than 600 million users, The Guardian reported.

Collins claimed that AI notes risked increasing the promotion of “lies and conspiracy theories” on X, and he wasn’t the only expert sounding alarms. Samuel Stockwell, a research associate at the Centre for Emerging Technology and Security at the Alan Turing Institute, told The Guardian that X’s success largely depends on “the quality of safeguards X puts in place against the risk that these AI ‘note writers’ could hallucinate and amplify misinformation in their outputs.”

“AI chatbots often struggle with nuance and context but are good at confidently providing answers that sound persuasive even when untrue,” Stockwell said. “That could be a dangerous combination if not effectively addressed by the platform.”

Also complicating things: anyone can create an AI agent using any technology to write community notes, X’s Community Notes account explained. That means that some AI agents may be more biased or defective than others.

If this dystopian version of events occurs, X predicts that human writers may get sick of writing notes, threatening the diversity of viewpoints that made community notes so trustworthy to begin with.

And for any human writers and reviewers who stick around, it’s possible that the sheer volume of AI-written notes may overload them. Andy Dudfield, the head of AI at a UK fact-checking organization called Full Fact, told The Guardian that X risks “increasing the already significant burden on human reviewers to check even more draft notes, opening the door to a worrying and plausible situation in which notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides.”

X is planning more research to ensure the “human rating capacity can sufficiently scale,” but if it cannot solve this riddle, it knows “the impact of the most genuinely critical notes” risks being diluted.

One possible solution to this “bottleneck,” researchers noted, would be to remove the human review process and apply AI-written notes in “similar contexts” that human raters have previously approved. But the biggest potential downfall there is obvious.

“Automatically matching notes to posts that people do not think need them could significantly undermine trust in the system,” X’s paper acknowledged.
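The “similar contexts” shortcut boils down to nearest-neighbor matching: reuse a human-approved note when a new post closely resembles the post it was approved for. The sketch below uses a bag-of-words cosine similarity as a stand-in for whatever representation X would actually use, with a deliberately high threshold, since mismatches are the trust-eroding failure the paper acknowledges:

```python
from collections import Counter
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def match_note(new_post: str, approved: dict, threshold: float = 0.8):
    """Reuse a human-approved note if a past post is similar enough,
    otherwise return None and leave the post for the normal pipeline."""
    new_vec = Counter(new_post.lower().split())
    best_score, best_note = 0.0, None
    for past_post, note in approved.items():
        score = cosine(new_vec, Counter(past_post.lower().split()))
        if score > best_score:
            best_score, best_note = score, note
    return best_note if best_score >= threshold else None
```

Everything hinges on the threshold: set it too low and notes land on posts “that people do not think need them,” which is the downside quoted above.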

Ultimately, AI note writers on X may be deemed an “erroneous” tool, researchers admitted, but they’re going ahead with testing to find out.

AI-written notes will start posting this month

All AI-written community notes “will be clearly marked for users,” X’s Community Notes account said. The first AI notes will only appear on posts where people have requested a note, the account said, but eventually AI note writers could be allowed to select posts for fact-checking.

More will be revealed when AI-written notes start appearing on X later this month, but in the meantime, X users can start testing AI note writers today and soon be considered for admission to the initial cohort of AI agents. (If any Ars readers end up testing out an AI note writer, this Ars writer would be curious to learn more about your experience.)

For its research, X collaborated with post-graduate students, research affiliates, and professors investigating topics like human trust in AI, fine-tuning AI, and AI safety at Harvard University, the Massachusetts Institute of Technology, Stanford University, and the University of Washington.

Researchers agreed that “under certain circumstances,” AI agents can “produce notes that are of similar quality to human-written notes—at a fraction of the time and effort.” They suggested that more research is needed to overcome flagged risks to reap the benefits of what could be “a transformative opportunity” that “offers promise of dramatically increased scale and speed” of fact-checking on X.

If AI note writers “generate initial drafts that represent a wider range of perspectives than a single human writer typically could, the quality of community deliberation is improved from the start,” the paper said.

Future of AI notes

Researchers imagine that once X’s testing is completed, AI note writers could not just aid in researching problematic posts flagged by human users, but also one day select posts predicted to go viral and stop misinformation from spreading faster than human reviewers could.

Additional perks from this automated system, they suggested, would include X note raters quickly accessing more thorough research and evidence synthesis, as well as clearer note composition, which could speed up the rating process.

And perhaps one day, AI agents could even learn to predict rating scores to speed things up even more, researchers speculated. However, more research would be needed to ensure that wouldn’t homogenize community notes, buffing them out to the point that no one reads them.

Perhaps the most Musk-ian idea proposed in the paper is the notion of training AI note writers with clashing views to “adversarially debate the merits of a note.” Supposedly, that “could help instantly surface potential flaws, hidden biases, or fabricated evidence, empowering the human rater to make a more informed judgment.”

“Instead of starting from scratch, the rater now plays the role of an adjudicator—evaluating a structured clash of arguments,” the paper said.
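Mechanically, the debate format is just a transcript-building loop handed to the human adjudicator. In this sketch the “writers” are plain functions standing in for LLM calls with clashing instructions; none of this is X’s implementation:

```python
def debate(note: str, writers) -> list:
    """Collect a structured clash of arguments over a draft note.
    Each writer maps (note, transcript so far) to an argument string."""
    transcript = []
    for _round in range(2):  # two exchanges per writer
        for name, writer in writers:
            transcript.append((name, writer(note, transcript)))
    return transcript


# Hypothetical stand-in writers with opposing instructions.
def pro(note, transcript):
    return f"The cited sources support the claim in {note!r}"


def con(note, transcript):
    return f"Check whether the evidence in {note!r} is fabricated or cherry-picked"
```

The rater then scores the note with the full transcript in view, playing the adjudicator role the paper describes, rather than researching from scratch.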

While X may be moving to reduce the workload for X users writing community notes, it’s clear that AI could never replace humans, researchers said. Those humans are necessary for more than just rubber-stamping AI-written notes.

Human notes that are “written from scratch” are valuable for training the AI agents, and some raters’ niche expertise cannot easily be replicated, the paper said. And perhaps most obviously, humans “are uniquely positioned to identify deficits or biases” and therefore more likely to be compelled to write notes “on topics the automated writers overlook,” such as spam or scams.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Texas AG loses appeal to seize evidence for Elon Musk’s ad boycott fight

If MMFA is made to endure Paxton’s probe, the media company could face civil penalties of up to $10,000 per violation of Texas’ unfair trade law, a fine or confinement if requested evidence was deleted, or other penalties for resisting sharing information. However, Edwards agreed that even the threat of the probe apparently had “adverse effects” on MMFA. Reviewing evidence, including reporters’ sworn affidavits, Edwards found that MMFA’s reporting on X was seemingly chilled by Paxton’s threat. MMFA also provided evidence that research partners had ended collaborations due to the looming probe.

Importantly, Paxton never contested claims that he retaliated against MMFA, instead seemingly hoping to dodge the lawsuit on technicalities by disputing jurisdiction and venue selection. But Edwards said that MMFA “clearly” has standing, as “they are the targeted victims of a campaign of retaliation” that is “ongoing.”

The problem with Paxton’s argument, Edwards wrote, is that it “ignores the body of law that prohibits government officials from subjecting individuals to retaliatory actions for exercising their rights of free speech,” suggesting that Paxton arguably launched a “bad-faith” probe.

Further, Edwards called out the “irony” of Paxton “readily” acknowledging in other litigation “that a state’s attempt to silence a company through the issuance and threat of compelling a response” to a civil investigative demand “harms everyone.”

With the preliminary injunction won, MMFA can move forward with its lawsuit after defeating Paxton’s motion to dismiss. In her concurring opinion, Circuit Judge Karen L. Henderson noted that MMFA may need to show more evidence that partners have ended collaborations over the probe (and not for other reasons) to ultimately clinch the win against Paxton.

Watchdog celebrates court win

In a statement provided to Ars, MMFA President and CEO Angelo Carusone celebrated the decision as a “victory for free speech.”

“Elon Musk encouraged Republican state attorneys general to use their power to harass their critics and stifle reporting about X,” Carusone said. “Ken Paxton was one of those AGs who took up the call, and his attempt to use his office as an instrument for Musk’s censorship crusade has been defeated.”

MMFA continues to fight against X over the same claims—as well as a recently launched Federal Trade Commission probe—but Carusone said the media company is “buoyed that yet another court has seen through the fog of Musk’s ‘thermonuclear’ legal onslaught and recognized it for the meritless attack to silence a critic that it is.”

Paxton’s office did not immediately respond to Ars’ request to comment.

xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

Where could Grok have gotten these ideas?

The treatment of white farmers in South Africa has been a hobbyhorse of South African X owner Elon Musk for quite a while. In 2023, he responded to a video purportedly showing crowds chanting “kill the Boer, kill the White Farmer” with a post accusing South African President Cyril Ramaphosa of remaining silent while people “openly [push] for genocide of white people in South Africa.” Musk was posting other responses focusing on the issue as recently as Wednesday.

They are openly pushing for genocide of white people in South Africa. @CyrilRamaphosa, why do you say nothing?

— gorklon rust (@elonmusk) July 31, 2023

President Trump has long shown an interest in this issue as well, saying in 2018 that he was directing then-Secretary of State Mike Pompeo to “closely study the South Africa land and farm seizures and expropriations and the large scale killing of farmers.” More recently, Trump granted “refugee” status to dozens of white Afrikaners, even as his administration ends protections for refugees from other countries.

Former American Ambassador to South Africa and Democratic politician Patrick Gaspard posted in 2018 that the idea of large-scale killings of white South African farmers is a “disproven racial myth.”

In launching the Grok 3 model in February, Musk said it was a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.” X’s “About Grok” page says that the model is undergoing constant improvement to “ensure Grok remains politically unbiased and provides balanced answers.”

But the recent turn toward unprompted discussions of alleged South African “genocide” has many questioning what kind of explicit adjustments Grok’s political opinions may be getting from human tinkering behind the curtain. “The algorithms for Musk products have been politically tampered with nearly beyond recognition,” journalist Seth Abramson wrote in one representative skeptical post. “They tweaked a dial on the sentence imitator machine and now everything is about white South Africans,” a user with the handle Guybrush Threepwood glibly theorized.

Representatives from xAI were not immediately available to respond to a request for comment from Ars Technica.

Disgruntled users roast X for killing Support account

After X (formerly Twitter) announced it would be killing its “Support” account, disgruntled users quickly roasted the social media platform for providing “essentially non-existent” support.

“We’ll soon be closing this account to streamline how users can contact us for help,” X’s Support account posted, explaining that now, paid “subscribers can get support via @Premium, and everyone can get help through our Help Center.”

On X, the Support account was one of the few paths that users had to publicly seek support for help requests the platform seemed to be ignoring. For suspended users, it was viewed as a lifeline. Replies to the account were commonly flooded with users trying to get X to fix reported issues, and several seemingly paying users cracked jokes in response to the news that the account would soon be removed.

“Lololol your support for Premium is essentially non-existent,” a subscriber with more than 200,000 followers wrote, while another quipped “Okay, so no more support? lol.”

On Reddit, X users recently suggested that contacting the Premium account is the only way to get human assistance after briefly interacting with a bot. But some self-described Premium users complained of waiting six months or longer for responses from X’s help center in the Support thread.

Some users who don’t pay for access to the platform similarly complained. But for paid subscribers or content creators, lack of Premium support is perhaps most frustrating, as one user claimed their account had been under review for years, allegedly depriving them of revenue. And another user claimed they’d had “no luck getting @Premium to look into” an account suspension while supposedly still getting charged. Several accused X of sending users into a never-ending loop, where the help center only serves to link users to the help center.

Twitch makes deal to escape Elon Musk suit alleging X ad boycott conspiracy

Instead, it appears that X decided to sue Twitch after discovering that Twitch was among advertisers who directly referenced the WFA’s brand safety guidelines in its own community guidelines and terms of service. X likely saw this as evidence that Twitch was conspiring with the WFA to restrict then-Twitter’s ad revenue; X’s complaint alleged that Twitch reduced its ad purchases to “only a de minimis amount outside the United States, after November 2022.”

“The Advertiser Defendants and other GARM-member advertisers acted in parallel to discontinue their purchases of advertising from Twitter, in a marked departure from their prior pattern of purchases,” X’s complaint said.

Now, it seems that X has agreed to drop Twitch from the suit, perhaps partly because the complaint X had about Twitch adhering to WFA brand safety standards is defused since the WFA disbanded the ad industry arm that set those standards.

Unilever struck a similar deal to wriggle out of the litigation, Reuters noted, and remained similarly quiet on the terms, only saying that the brand remained “committed to meeting our responsibility standards to ensure the safety and performance of our brands on the platform.” But other advertisers, including Colgate, CVS, LEGO, Mars, Pinterest, Shell, and Tyson Foods, so far have not.

For Twitch, its deal seems to clearly take a target off its back at a time when some advertisers are reportedly returning to X to stay out of Musk’s crosshairs. Getting out now could spare substantial costs as the lawsuit drags on, even though X CEO Linda Yaccarino declared the ad boycott was over in January. X is still $12 billion in debt, X claimed, after Musk’s xAI bought X last month. External data in January seemed to suggest many big brands were still hesitant to return to the platform, despite Musk’s apparent legal strong-arming and political influence in the Trump administration.

Ars could not immediately reach Twitch or X for comment. But the court docket showed that Twitch was up against a deadline to respond to the lawsuit by mid-May, which likely increased pressure to reach an agreement before Twitch was forced to invest in raising a defense.

Even Trump may not be able to save Elon Musk from his old tweets

A loss in the investors’ and SEC’s suits could force Musk to disgorge any ill-gotten gains from the alleged scheme, estimated at $150 million, as well as potential civil penalties.

The SEC and Musk’s X (formerly Twitter) did not respond to Ars’ request to comment. Investors’ lawyers declined to comment on the ongoing litigation.

SEC purge may slow down probes

Under the Biden administration, the SEC alleged that “Musk’s violation resulted in substantial economic harm to investors selling Twitter common stock.” For the lead plaintiffs in the investors’ suit, the Oklahoma Firefighters Pension and Retirement System, the scheme allegedly robbed retirees of gains used to sustain their quality of life at a particularly vulnerable time.

Musk has continued to argue that his alleged $200 million in savings from the scheme was minimal compared to his $44 billion purchase price. But the alleged gains represent about two-thirds of the $290 million price the billionaire paid to support Trump’s election, which won Musk a senior advisor position in the Trump administration, CNBC reported. So it’s seemingly not an insignificant amount of money in the grand scheme.

Likely bending to Musk’s influence, the Trump administration made one of its earliest moves after taking office, CNBC reported, reversing a 15-year-old policy that allowed the SEC director of enforcement to launch probes like the one Musk is currently battling. That policy allowed the Tesla probe, for example, to be launched just seven days after Musk’s allegedly problematic tweets, the SEC boasted in a 2020 press release.

Now, after Trump’s rule change, investigations must be approved by a vote of SEC commissioners. That will likely slow down probes that the SEC had previously promised years ago would only speed up over time in order to more swiftly protect investors.

SEC expected to reduce corporate fines

For Musk, the SEC has long been a thorn in his side. At least two top officials (1, 2) cited the Tesla settlement as a career highlight, with the agency seeming especially proud of thinking “creatively about appropriate remedies,” the 2020 press release said. Monitoring Musk’s tweets, the SEC said, blocked “potential harm to investors” and put control over Musk’s tweets into the SEC’s hands.

Meta plans to test and tinker with X’s community notes algorithm

Meta also confirmed that it won’t be reducing visibility of misleading posts with community notes. That’s a change from the prior system, Meta noted, which had penalties associated with fact-checking.

According to Meta, X’s algorithm cannot be gamed, supposedly safeguarding “against organized campaigns” striving to manipulate notes and “influence what notes get published or what they say.” Meta claims it will rely on external research on community notes to avoid that pitfall, but as recently as last October, outside researchers had suggested that X’s Community Notes were easily sabotaged by toxic X users.

“We don’t expect this process to be perfect, but we’ll continue to improve as we learn,” Meta said.

Meta confirmed that it plans to tweak X’s algorithm over time to develop its own version of community notes and “may explore different or adjusted algorithms to support how Community Notes are ranked and rated.”
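The open-source ranker Meta is starting from is built on “bridging”: each rating is modeled as user bias + note bias + a user-by-note viewpoint term, and only the note bias (the agreement left over after viewpoint-driven agreement is explained away) counts toward helpfulness. Below is a toy, stdlib-only version with made-up dimensions and hyperparameters, not the production model:

```python
import random


def fit_note_intercepts(ratings, n_users, n_notes, dim=1,
                        epochs=200, lr=0.05, reg=0.1):
    """SGD matrix factorization: rating ~ user_bias + note_bias + u . v.
    The note bias ('intercept') is the helpfulness signal left over after
    the factor term soaks up viewpoint-driven agreement."""
    rng = random.Random(0)
    user_bias = [0.0] * n_users
    note_bias = [0.0] * n_notes
    u_vec = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_users)]
    v_vec = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_notes)]
    for _ in range(epochs):
        for u, n, r in ratings:
            pred = user_bias[u] + note_bias[n] + sum(
                a * b for a, b in zip(u_vec[u], v_vec[n]))
            err = r - pred
            user_bias[u] += lr * (err - reg * user_bias[u])
            note_bias[n] += lr * (err - reg * note_bias[n])
            for d in range(dim):
                u_vec[u][d], v_vec[n][d] = (
                    u_vec[u][d] + lr * (err * v_vec[n][d] - reg * u_vec[u][d]),
                    v_vec[n][d] + lr * (err * u_vec[u][d] - reg * v_vec[n][d]),
                )
    return note_bias
```

In a toy dataset where one note is rated helpful by users on both sides of a split and another only by one faction, the cross-faction note ends up with the higher intercept, which is the property that makes coordinated one-sided voting less effective.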

In a post, X’s Support account said that X was “excited” that Meta was using its “well-established, academically studied program as a foundation” for its community notes.

X’s globe-trotting defense of ads on Nazi posts violates TOS, Media Matters says

Part of the problem appeared to be decreased spending from big brands that did return, like reportedly Apple. Other dips were linked to X’s decision to partner with adtech companies, splitting ad revenue with Magnite, Google, and PubMatic, Business Insider reported. The CEO of marketing consultancy Ebiquity, Ruben Schreurs, told Business Insider that most of the top 100 global advertisers he works with were still hesitant to invest in X, confirming “no signs of a mass return.”

For X, the ad boycott has tanked revenue for years, even putting X on the brink of bankruptcy, Musk claimed. The billionaire paid $44 billion for the platform, and at the end of 2024, Fidelity estimated that X was worth just $9.4 billion, CNN reported.

But at the start of 2025, analysts predicted that advertisers may return to X to garner political favor with Musk, who remains a senior advisor in the Trump administration. Perhaps more importantly in the short-term, sources also told Bloomberg that X could potentially raise as much as Musk paid—$44 billion—from investors willing to help X pay down its debt to support new payments and video products.

That could put a Band-Aid on X’s financial wounds as Yaccarino attempts to persuade major brands that X isn’t toxic (while X sues some of them) and Musk tries to turn the social media platform once known as Twitter into an “everything app” as ubiquitous in the US as WeChat in China.

MMFA alleges that its research, which shows how toxic X is today, has been stifled by Musk’s suits, but other groups have filled the gap. The Center for Countering Digital Hate has resumed its reporting since defeating X’s lawsuit last March, and, most recently, University of California, Berkeley, researchers conducted a February analysis showing that “hate speech on the social media platform X rose about 50 percent” in the eight months after Musk’s 2022 purchase, which suggests that advertisers had potentially good reason to be spooked by changes at X and that those changes continue to keep them at bay today.

“Musk has continually tried to blame others for this loss in revenue since his takeover,” MMFA’s complaint said, alleging that all three suits were filed to intimidate MMFA “for having dared to publish an article Musk did not like.”

Elon Musk blames X outages on “massive cyberattack”

After DownDetector reported that tens of thousands of users globally experienced repeated X (formerly Twitter) outages, Elon Musk confirmed the issues are due to an ongoing cyberattack on the platform.

“There was (still is) a massive cyberattack against X,” Musk wrote on X. “We get attacked every day, but this was done with a lot of resources. Either a large, coordinated group and/or a country is involved.”

Details remain vague beyond Musk’s post, but rumors were circulating that X was under a distributed denial-of-service (DDoS) attack.

X’s official support channel, which has been dormant since August, has so far remained silent on the outage, but one user asked Grok—X’s chatbot that provides AI summaries of news—what was going on, and the chatbot echoed suspicions about the DDoS attack while raising other theories.

“Over 40,000 users reported issues, with the platform struggling to load globally,” Grok said. “No clear motive yet, but some speculate it’s political since X is the only target. Outages hit hard in the US, Switzerland, and beyond.”

As X goes down, users cry for Twitter

It has been almost two years since Elon Musk declared that Twitter “no longer exists,” haphazardly rushing to rebrand his social media company as X despite critics warning that users wouldn’t easily abandon the Twitter brand.

Fast-forward to today, and Musk got a reminder that his efforts to kill off the Twitter brand never really caught on with a large chunk of his platform.

Grok’s new “unhinged” voice mode can curse and scream, simulate phone sex

On Sunday, xAI released a new voice interaction mode for its Grok 3 AI model that is currently available to its premium subscribers. The feature is somewhat similar to OpenAI’s Advanced Voice Mode for ChatGPT. But unlike ChatGPT, Grok offers several uncensored personalities users can choose from (currently expressed through the same default female voice), including an “unhinged” mode and one that will roleplay verbal sexual scenarios.

On Monday, AI researcher Riley Goodside brought wider attention to the over-the-top “unhinged” mode in particular when he tweeted a video (warning: NSFW audio) that showed him repeatedly interrupting the vocal chatbot, which began to simulate yelling when asked. “Grok 3 Voice Mode, following repeated, interrupting requests to yell louder, lets out an inhuman 30-second scream, insults me, and hangs up,” he wrote.

By default, “unhinged” mode curses, insults, and belittles the user non-stop using vulgar language. Other modes include “Storyteller” (which does what it sounds like), “Romantic” (which stammers and speaks in a slow, uncertain, and insecure way), “Meditation” (which can guide you through a meditation-like experience), “Conspiracy” (which likes to talk about conspiracy theories, UFOs, and bigfoot), “Unlicensed Therapist” (which plays the part of a talk psychologist), “Grok Doc” (a doctor), “Sexy” (marked as “18+” and acts almost like a 1-800 phone sex operator), and “Professor” (which talks about science).

A composite screenshot of various Grok 3 voice mode personalities, as seen in the Grok app for iOS.

Basically, xAI is taking the exact opposite approach of other AI companies, such as OpenAI, which censor discussions about not-safe-for-work topics or scenarios they consider too risky for discussion. For example, the “Sexy” mode (warning: NSFW audio) will discuss graphically sexual situations, which ChatGPT’s voice mode will not touch, although OpenAI recently loosened up the moderation on the text-based version of ChatGPT to allow some discussion of erotic content.

Elon Musk to “fix” Community Notes after they contradict Trump

Elon Musk apparently no longer believes that Community Notes’ crowdsourced fact-checking cannot be manipulated, undercutting his long-standing claim that it is the best way to correct bad posts on his social media platform X.

Community Notes are supposed to be added to posts to limit misinformation spread after a broad consensus is reached among X users with diverse viewpoints on what corrections are needed. But Musk now claims a “fix” is needed to prevent supposedly outside influencers from allegedly gaming the system.

“Unfortunately, @CommunityNotes is increasingly being gamed by governments & legacy media,” Musk wrote on X. “Working to fix this.”

Musk’s announcement came after Community Notes were added to X posts discussing a poll generating favorable ratings for Ukraine President Volodymyr Zelenskyy. That poll was conducted by a private Ukrainian company in partnership with a state university whose supervisory board was appointed by the Ukrainian government, creating what Musk seems to view as a conflict of interest.

Although other independent polling recently documented a similar increase in Zelenskyy’s approval rating, NBC News reported, the specific poll cited in X notes contradicted Donald Trump’s claim that Zelenskyy is unpopular, and Musk seemed to expect X notes should instead be providing context to defend Trump’s viewpoint. Musk even suggested that by pointing to the supposedly government-linked poll in Community Notes, X users were spreading misinformation.

“It should be utterly obvious that a Zelensky[y]-controlled poll about his OWN approval is not credible!!” Musk wrote on X.

Musk’s attack on Community Notes is somewhat surprising. Although he has always maintained that Community Notes aren’t “perfect,” he has defended Community Notes through multiple European Union probes challenging their effectiveness and declared that the goal of the crowdsourcing effort was to make X “by far the best source of truth on Earth.” At CES 2025, X CEO Linda Yaccarino bragged that Community Notes are “good for the world.”

Yaccarino invited audience members to “think about it as this global collective consciousness keeping each other accountable at global scale in real time,” but just one month later, Musk is suddenly casting doubts on that characterization while the European Union continues to probe X.

Perhaps most significantly, Musk previously insisted as recently as last year that Community Notes could not be manipulated, even by Musk. He strongly disputed a 2024 report from the Center for Countering Digital Hate that claimed that toxic X users were downranking accurate notes that they personally disagreed with, claiming any attempt at gaming Community Notes would stick out like a “neon sore thumb.”

X is reportedly blocking links to secure Signal contact pages

X, the social platform formerly known as Twitter, is seemingly blocking links to Signal, the encrypted messaging platform, according to journalist Matt Binder and other firsthand accounts.

Binder wrote in his Disruptionist newsletter Sunday that links to Signal.me, a domain that offers a way to connect directly to Signal users, are blocked on public posts, direct messages, and profile pages. Error messages—including “Message not sent,” “Something went wrong,” and profiles tagged as “considered malware” or “potentially harmful”—give no direct suggestion of a block. But posts on X, reporting at The Verge, and other sources suggest that Signal.me links are broadly banned.

Signal.me links that were already posted on X prior to the recent change now show a “Warning: this link may be unsafe” interstitial page rather than opening the link directly. Links to Signal handles and the Signal homepage are still functioning on X.

Binder, a former Mashable reporter who was once blocked by X (then Twitter) for reporting on owner Elon Musk and accounts related to his private jet travel, credited the first reports to an X post by security research firm Mysk.
