Following an investigation, Elon Musk’s X has won its fight to avoid gatekeeper status under the European Union’s strict competition law, the Digital Markets Act (DMA).
On Wednesday, the European Commission (EC) announced that “X does indeed not qualify as a gatekeeper in relation to its online social networking service, given that the investigation revealed that X is not an important gateway for business users to reach end users.”
Since March, X had strongly opposed the gatekeeper designation by arguing that although X connects advertisers to more than 45 million monthly users, it does not have a “significant impact” on the EU’s internal market, a case filing showed.
A gatekeeper “is presumed to have a significant impact on the internal market where it achieves an annual Union turnover equal to or above EUR 7.5 billion in each of the last three financial years,” the case filing said. But X submitted evidence showing that its Union turnover was less than that in 2022, the same year that Musk took over Twitter and began alienating advertisers by posting their ads next to extremists’ tweets.
Throughout Musk’s reign at Twitter/X, the social networking company told the EC, both advertising revenue and users have steadily declined in the EU. In particular, “X Ads has a too small and decreasing scale in terms of share of advertising spend in the Union to constitute an important gateway in the market for online advertising,” X argued, further noting that X had a “lack of platform power” to change that anytime soon.
“In the last 15 months, X Ads has faced a decline in number of advertising business users, as well as a decline in pricing,” X argued.
Musk’s battle with former Twitter execs intensifies as X value reaches new low.
Former Twitter executives, including former CEO Parag Agrawal, are urging a court to open discovery in a dispute over severance and other benefits they allege they were wrongfully denied after Elon Musk took over Twitter in 2022.
According to the former executives, they’ve been blocked for seven months from accessing key documents proving they’re owed roughly $200 million under severance agreements that they say Musk willfully tried to avoid paying in retaliation for executives forcing him to close the Twitter deal. And now, as X’s value tanks lower than ever—reportedly worth 80 percent less than when Musk bought it—the ex-Twitter leaders fear their severance claims “may be compromised” by Musk’s alleged “mismanagement of X,” their court filing said.
The potential for X’s revenue loss to impact severance claims appears to go beyond just the former Twitter executives’ dispute. According to their complaint, “there are also thousands of non-executive former employees whom Musk terminated and is now refusing to pay severance and other benefits” and who have “sued in droves.”
In some of these other severance suits, executives claimed in their motion to open discovery, X appears to be operating more transparently, allowing discovery to proceed beyond what has been possible in the executives’ suit.
But Musk allegedly has “special ire” for Agrawal and other executives who helped push through the Twitter buyout that he tried to wriggle out of, executives claimed. And seemingly because of his alleged anger, X has “only narrowed the discovery” ever since the court approved a stay pending a ruling on X’s motion to drop one of the executives’ five claims. According to the executives, the court only approved the stay of discovery because it was expecting to rule on the motion to dismiss quickly, but after a hearing on that matter was vacated, the stay has remained, helping X’s alleged goal to prolong the litigation.
To get the litigation back on track for a speedier resolution before Musk runs X into the ground, the executives on Thursday asked the court to approve discovery on all claims except the claim disputed in the motion to dismiss.
“Discovery on those topics is inevitable, and there is no reason to further delay,” the executives argued.
The executives have requested that the court open discovery at a hearing scheduled for November 15 to prevent further delays that they fear could harm their severance claims.
Neither X nor a lawyer for the former Twitter executives, David Anderson, could immediately be reached for comment.
X’s fight to avoid severance payments
In their complaint, the former Twitter executives—including Agrawal as well as former Chief Financial Officer Ned Segal, former Chief Legal Officer Vijaya Gadde, and former general counsel Sean Edgett—alleged that Musk planned to deny their severance to make them pay for extra costs that they approved that clinched the Twitter deal.
They claimed that Musk told his official biographer, Walter Isaacson, that he would “hunt every single one of” them “till the day they die,” vowing “a lifetime of revenge.” Musk supposedly even “bragged” to Isaacson about “specifically how he planned to cheat Twitter’s executives out of their severance benefits in order to save himself $200 million.”
Under their severance agreements, the executives could only be denied benefits if terminated for “cause” under specific conditions, they said, none of which allegedly applied to their abrupt firings the second the merger agreement was signed.
“‘Cause’ under the severance plans is limited to extremely narrow circumstances, such as being convicted of a felony or committing ‘gross negligence’ or ‘willful misconduct,'” their complaint noted.
Musk attempted to “manufacture” “ever-changing theories of cause,” they alleged, partly by asserting that “success” fees paid to the law firm that defeated Musk’s suit attempting to back out of the deal constituted “gross negligence” or “willful misconduct.”
According to Musk’s motion to dismiss, the former executives tried to “saddle Twitter, and by extension the many investors who acquired it, with exorbitant legal expenses by forcing approximately $100 million in gratuitous payments to certain law firms in the final hours before the Twitter acquisition closed.” Musk had a huge problem with this, the motion to dismiss said, because the fees were paid despite his objections.
On top of that, Musk considered it “gross negligence” or “willful misconduct” that the executives allegedly paid out retention bonuses that Musk also opposed. And perhaps even more egregiously, they allowed new employees to jump onto severance plans shortly before the acquisition, which “generally” increased the “severance benefits available to these individuals by more than $50 million dollars,” Musk’s motion to dismiss said.
Musk was particularly frustrated by the addition of one employee whom the company had allegedly “already decided to terminate and another who was allowed to add herself to one of the Plans—a naked conflict of interest that increased her potential compensation by approximately $15 million.”
But former Twitter executives said they consulted with the board to approve the law firm fees, defending their business decisions as “in the best interest of the company,” not “Musk’s whims.”
“On the morning” Musk acquired Twitter, “the Company’s full Board met,” the executives’ complaint said. “One of the directors noted that it was the largest stockholder value creation by a legal team that he had ever seen. The full Board deliberated and decided to approve the fees.”
Further, they pointed out, “the lion’s share” of those legal fees “was necessitated only by Musk’s improper refusal to close a transaction to which he was contractually bound.”
“If Musk felt that the attorneys’ fees payments, or any other payments, were improper, his remedy was to seek to terminate the deal—not to withhold executives’ severance payments,” their complaint said.
Reimbursement or reinstatement may be sought
To force Musk’s hand, executives have been asking X to share documents, including documents they either created or received while working out the Twitter buyout. But X has delayed production—sometimes curiously claiming that documents are confidential even when executives authored the documents or they’ve been publicly filed in other severance disputes, executives alleged.
Executives have called Musk’s denial of severance “a pointless effort that would not withstand legal scrutiny,” but so far discovery in their lawsuit has not even technically begun. While X has handed over incomplete submissions from its administrative process denying the severance claims, in some cases, X has “entirely refused” to produce documents, they claimed.
They’re hoping once fact-finding concludes that the court will agree that severance benefits are due. That potentially includes stock vested at the price of Twitter on the day that Musk acquired it, $44 billion—a far cry from the $9 billion that X is estimated to be valued at today.
In a filing opposing Musk’s motion to dismiss, the former executives noted that they’re not required to elect their remedies at this stage of the litigation. While their complaint alleged they’re owed vested stock at the acquisition value of $44 billion, their other filing suggested that “reinstatement is also an available remedy.”
Neither option would likely appeal to Musk, who appears determined to fight all severance disputes while scrambling for nearly two years to reverse X’s steady revenue loss.
Since his firing, Agrawal has won at least one of his legal battles with Musk, forcing X to reimburse him for $1.1 million in legal fees. But Musk has largely avoided paying severance as lawsuits pile up, and Agrawal is allegedly owed the most, with his severance package valued at $57 million.
For executives, a growing fear is seemingly that Musk will prolong litigation until X goes under. Last year, Musk bragged that he saved X from bankruptcy by cutting costs, but experts warned that the lawsuits piling up from vendors—which Plainsite is tracking—could upend that strategy if Musk loses too many.
“Under Musk’s control, Twitter has become a scofflaw, stiffing employees, landlords, vendors, and others,” executives’ complaint said. “Musk doesn’t pay his bills, believes the rules don’t apply to him, and uses his wealth and power to run roughshod over anyone who disagrees with him.”
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
The University of Michigan research team worried that their experiment posting AI-generated non-consensual intimate imagery (NCII) on X might cross ethical lines.
They chose to conduct the study on X because they deduced it was “a platform where there would be no volunteer moderators and little impact on paid moderators, if any” moderators viewed their AI-generated nude images.
X’s transparency report seems to suggest that most reported non-consensual nudity is actioned by human moderators, but researchers reported that their flagged content was never actioned without a DMCA takedown.
Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.
“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”
These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to mitigate potential harm, researchers said.
“Our study may contribute to greater transparency in content moderation processes” related to NCII “and may prompt social media companies to invest additional efforts to combat deepfake” NCII, researchers said. “In the long run, we believe the benefits of this study far outweigh the risks.”
According to the researchers, X was given time to automatically detect and remove the content but failed to do so. It’s possible, the study suggested, that X’s decision to allow explicit content starting in June made it harder to detect NCII, as some experts had predicted.
To fix the problem, researchers suggested that both “greater platform accountability” and “legal mechanisms to ensure that accountability” are needed—as is much more research on other platforms’ mechanisms for removing NCII.
“A dedicated” NCII law “must clearly define victim-survivor rights and impose legal obligations on platforms to act swiftly in removing harmful content,” the study concluded.
Prosecutors now have a “blueprint” to seize privileged communications, X warned.
Last year, special counsel Jack Smith asked X (formerly Twitter) to hand over Donald Trump’s direct messages from his presidency without telling Trump. Refusing to comply, X spent the past year arguing that the gag order was an unconstitutional prior restraint on X’s speech and an “end-run” around a records law shielding privileged presidential communications.
Under its so-called free speech absolutist owner Elon Musk, X took this fight all the way to the Supreme Court, only for the nation’s highest court to decline to review X’s appeal on Monday.
It’s unclear exactly why SCOTUS rejected X’s appeal, but in a court filing opposing SCOTUS review, Smith told the court that X’s “contentions lack merit and warrant no further review.” And SCOTUS seemingly agreed.
The government had argued that its nondisclosure order was narrowly tailored to serve a compelling interest in stopping Trump from either deleting his DMs or intimidating witnesses engaged in his DMs while he was in office.
At that time, Smith was publicly probing the interference with a peaceful transfer of power after the 2020 presidential election, and courts had agreed that “there were ‘reasonable grounds to believe’ that disclosing the warrant” to Trump “‘would seriously jeopardize the ongoing investigation’ by giving him ‘an opportunity to destroy evidence, change patterns of behavior, [or] notify confederates,’” Smith’s court filing said.
Under the Stored Communications Act (SCA), the government can request data and apply for a nondisclosure order gagging any communications provider from tipping off an account holder about search warrants for limited periods deemed appropriate by a court, Smith noted. X was only prohibited from alerting Trump to the search warrant for 180 days, Smith said, and only restricted from discussing the existence of the warrant.
As the government sees it, this reliance on the SCA “does not give unbounded, standardless discretion to government officials or otherwise create a risk of ‘freewheeling censorship,'” like X claims. But the government warned that affirming X’s appeal “would mean that no SCA warrant could be enforced without disclosure to a potential privilege holder, regardless of the dangers to the integrity of the investigation.”
Court finds X alternative to gag order “unpalatable”
X tried to wave a red flag in its SCOTUS petition, warning the court that this was “the first time in American history” that a court “ordered disclosure of presidential communications without notice to the President and without any adjudication of executive privilege.”
The social media company argued that it receives “tens of thousands” of government data requests annually—including “thousands” with nondisclosure orders—and pushes back on any request for privileged information that does not allow users to assert their privileges. Allowing the lower court rulings to stand, X warned SCOTUS, could create a path for government to illegally seize information not just protected by executive privilege, but also by attorney-client, doctor-patient, or journalist-source privileges.
X’s “policy is to notify users about law enforcement requests ‘prior to disclosure of account information’ unless legally ‘prohibited from doing so,'” X argued.
X suggested that rather than seize Trump’s DMs without giving him a chance to assert his executive privilege, the government should have designated a representative capable of weighing and asserting whether some of the data requested was privileged. That’s how the Presidential Records Act (PRA) works, X noted, suggesting that Smith’s team was improperly trying to avoid PRA compliance by invoking SCA instead.
But the US government didn’t have to prove that the less-restrictive alternative X submitted would have compromised its investigation, X said, because the court categorically rejected X’s submission as “unworkable” and “unpalatable.”
According to the court, designating a representative placed a strain on the government to deduce if the representative could be trusted not to disclose the search warrant. But X pointed out that the government had no explanation for why a PRA-designated representative, Steven Engel—a former assistant attorney general for the Office of Legal Counsel who “publicly testified about resisting the former President’s conduct”—”could not be trusted to follow a court order forbidding him from further disclosure.”
“Going forward, the government will never have to prove it could avoid seriously jeopardizing its investigation by disclosing a warrant to only a trusted representative—a common alternative to nondisclosure orders,” X argued.
In a brief supporting X, attorneys for the nonprofit digital rights group the Electronic Frontier Foundation (EFF) wrote that the court was “unduly dismissive of the arguments” X raised and “failed to apply exacting scrutiny, relieving the government of its burden to actually demonstrate, with evidence, that these alternatives would be ineffective.”
Further, X argued that none of the government’s arguments for nondisclosure made sense. Not only was Smith’s investigation announced publicly—allowing Trump ample time to delete his DMs already—but also “there was no risk of destruction of the requested records because Twitter had preserved them.” On top of that, during the court battle, the government eventually admitted that one rationale for the nondisclosure order—that Trump posed a supposed “flight risk” if the search warrant was known—”was implausible because the former President already had announced his re-election run.”
X unsuccessfully pushed SCOTUS to take on the Trump case as an “ideal” and rare opportunity to publicly decide when nondisclosure orders cross the line when seeking to seize potentially privileged information on social media.
In its petition for SCOTUS review, X pointed out that every social media or communications platform is bombarded with government data requests that only the platforms can challenge. That leaves it up to platforms to figure out when data requests are problematic, which they frequently are, as “the government often agrees to modify or vacate them in informal negotiations,” X argued.
But when the government refuses to negotiate, as in the Trump case, platforms have to decide whether litigation is worth the risk of being held in contempt, as X was when it was sanctioned $350,000 in this dispute. If the courts had deemed a less restrictive alternative appropriate, such as appointing a trusted representative, platforms would never have to guess when data requests threaten to expose their users’ privileged information, X argued.
According to X, another case like this, where court filings would not have to be redacted and a ruling would not have to happen behind closed doors, won’t come around for decades.
But the government seemingly persuaded the Supreme Court to decline to review the case, partly by arguing that X’s challenge to its nondisclosure order was moot. Responding to X’s objections, the government had eventually agreed to modify the nondisclosure order to disclose the warrant to Trump, so long as the name of the case agent assigned to the investigation was redacted. So X’s appeal is really over nothing, the government suggested.
Additionally, the government argued that “this case would not be an appropriate vehicle” for SCOTUS’ review of the question X raised because “no executive privilege issue actually existed in this case.”
“If review of the underlying legal issues were ever warranted, the Court should await a live case in which the issues are concretely presented,” Smith’s court filing said.
X is likely deflated by SCOTUS’ decision not to review its appeal. In its petition, X claimed that the court system risked providing “a blueprint for prosecutors who wish to obtain potentially privileged materials,” warning that “this end-run will not be limited to federal prosecutors.” State prosecutors will likely be emboldened to do the same now that the precedent has been set, X predicted.
In their brief supporting X, EFF lawyers noted that the government already has “far too much authority to shield its activities from public scrutiny.” By failing to prevent nondisclosure orders from restraining speech, the court system risks making it harder to “meaningfully test these gag orders in court,” EFF warned.
“Even a meritless gag order that is ultimately voided by a court causes great harm while it is in effect,” EFF’s lawyers said, while disclosure “ensures that individuals whose information is searched have an opportunity to defend their privacy from unwarranted and unlawful government intrusions.”
“I cannot accept this evidence without a much better explanation of Mr. Bogatz’s path of reasoning,” Wheelahan wrote.
Wheelahan emphasized that the Nevada merger law specifically stipulated that “all debts, liabilities, obligations and duties of the Company shall thenceforth remain with or be attached to, as the case may be, the Acquiror and may be enforced against it to the same extent as if it had incurred or contracted all such debts, liabilities, obligations, and duties.” And Bogatz’s testimony failed to “grapple with the significance” of this, Wheelahan said.
Overall, Wheelahan considered Bogatz’s testimony on X’s merger-acquired liabilities “strained,” while deeming the government’s US merger law expert Alexander Pyle to be “honest and ready to make appropriate concessions,” even while some of his testimony was “not of assistance.”
Luckily, it seemed that Wheelahan had no trouble drawing his own conclusion after analyzing Nevada’s merger law.
“I find that a Nevada court would likely hold that the word ‘liabilities'” in the merger law “is broad enough on its proper construction under Nevada law to encompass non-pecuniary liabilities, such as the obligation to respond to the reporting notice,” Wheelahan wrote. “X Corp has therefore failed to show that it was not required to respond to the reporting notice.”
Because X “failed on all its claims,” the social media company must cover costs from the appeal, and X’s costs in fighting the initial fine will seemingly only increase from here.
Fighting the fine likely to more than double X’s costs
In a press release celebrating the ruling, eSafety Commissioner Julie Inman Grant criticized X’s attempt to use the merger to avoid complying with Australia’s Online Safety Act.
“Almost any digitally altered content, when left up to an arbitrary individual on the Internet, could be considered harmful,” Mendez said, even something seemingly benign like AI-generated estimates of voter turnouts shared online.
Additionally, the Supreme Court has held that “even deliberate lies (said with ‘actual malice’) about the government are constitutionally protected” because the right to criticize the government is at the heart of the First Amendment.
“These same principles safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered: civil penalties for criticisms on the government like those sanctioned by AB 2839 have no place in our system of governance,” Mendez said.
According to Mendez, X posts like Kohls’ parody videos are the “political cartoons of today” and California’s attempt to “bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment” is not justified by even “a well-founded fear of a digitally manipulated media landscape.” If officials find deepfakes are harmful to election prospects, there is already recourse through privacy torts, copyright infringement, or defamation laws, Mendez suggested.
Kosseff told Ars that there could be more narrow ways that government officials looking to protect election integrity could regulate deepfakes online. The Supreme Court has suggested that deepfakes spreading disinformation on the mechanics of voting could possibly be regulated, Kosseff said.
Mendez got it “exactly right” by concluding that the best remedy for election-related deepfakes is more speech, Kosseff said. As Mendez described it, a vague law like AB 2839 seemed to only “uphold the State’s attempt to suffocate” speech.
Parody is vital to democratic debate, judge says
The only part of AB 2839 that survives strict scrutiny, Mendez noted, is a section describing audio disclosures in a “clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than two minutes in length, interspersed within the audio at intervals of not greater than two minutes each.”
Elon Musk has lambasted Australia’s government as “fascists” over proposed laws that could levy substantial fines on social media companies if they fail to comply with rules to combat the spread of disinformation and online scams.
The billionaire owner of social media site X posted the word “fascists” on Friday in response to the bill, which would strengthen the Australian media regulator’s ability to hold companies responsible for the content on their platforms and levy potential fines of up to 5 percent of global revenue. The bill, which was proposed this week, has yet to be passed.
Musk’s comments drew rebukes from senior Australian politicians, with Stephen Jones, Australia’s finance minister, telling national broadcaster ABC that it was “crackpot stuff” and the legislation was a matter of sovereignty.
Bill Shorten, the former leader of the Labor Party and a cabinet minister, accused the billionaire of only championing free speech when it was in his commercial interests. “Elon Musk’s had more positions on free speech than the Kama Sutra,” Shorten said in an interview with Australian radio.
The exchange marks the second time that Musk has confronted Australia over technology regulation.
In May, he accused the country’s eSafety Commissioner of censorship after the government agency took X to court in an effort to force it to remove graphic videos of a stabbing attack in Sydney. A court later denied the eSafety Commissioner’s application.
Musk has also been embroiled in a bitter dispute with authorities in Brazil, where the Supreme Court ruled last month that X should be blocked over its failure to remove or suspend certain accounts accused of spreading misinformation and hateful content.
Australia has been at the forefront of efforts to regulate the technology sector, pitting it against some of the world’s largest social media companies.
This week, the government pledged to introduce a minimum age limit for social media use to tackle “screen addiction” among young people.
In March, Canberra threatened to take action against Meta after the owner of Facebook and Instagram said it would withdraw from a world-first deal to pay media companies to link to news stories.
The government also introduced new data privacy measures to parliament on Thursday that would impose hefty fines and potential jail terms of up to seven years for people found guilty of “doxxing” individuals or groups.
Prime Minister Anthony Albanese’s government had pledged to outlaw doxxing—the publication of personal details online for malicious purposes—this year after the details of a private WhatsApp group containing hundreds of Jewish Australians were published online.
Australia is one of the first countries to pursue laws outlawing doxxing. It is also expected to introduce a tranche of laws in the coming months to regulate how personal data can be used by artificial intelligence.
“These reforms give more teeth to the regulation,” said Monique Azzopardi at law firm Clayton Utz.
Not only did the email not provide staff with enough notice, the labor court ruled, but also any employee’s failure to click “yes” could in no way constitute a legal act of resignation. Instead, the court reviewed evidence alleging that the email appeared designed to either get employees to agree to new employment terms, sight unseen, or else push employees to volunteer for dismissal during a time of mass layoffs across Twitter.
“Going forward, to build a breakthrough Twitter 2.0 and succeed in an increasingly competitive world, we will need to be extremely hardcore,” Musk wrote in the all-staff email. “This will mean working long hours at high intensity. Only exceptional performance will constitute a passing grade.”
With the subject line, “A Fork in the Road,” the email urged staff, “if you are sure that you want to be part of the new Twitter, please click yes on the link below. Anyone who has not done so by 5pm ET tomorrow (Thursday) will receive three months of severance. Whatever decision you make, thank you for your efforts to make Twitter successful.”
In a 73-page ruling, an adjudication officer for the Irish Workplace Relations Commission (WRC), Michael MacNamee, ruled that Twitter’s abrupt dismissal of an Ireland-based senior executive, Gary Rooney, was unfair, the Irish public service broadcaster RTÉ reported. Rooney had argued that his contract clearly stated that his resignation must be provided in writing, not inferred from his declining to fill out a form.
A spokesperson for the Department of Enterprise, Trade, and Employment, which handles the WRC’s media inquiries, told Ars that the decision will be published on the WRC’s website on August 26 after both parties have “the opportunity to consider it in full.”
Now, instead of paying Rooney the draft severance amount worth a little more than $25,000, Twitter, which is now called X, has to pay Rooney more than $600,000. According to many outlets, this is a record award from the WRC and includes about $220,000 “for prospective future loss of earnings.”
The WRC dismissed Rooney’s claim regarding an allegedly owed performance bonus for 2022 but otherwise largely agreed with his arguments on the unfair dismissal.
Rooney had worked for Twitter for nine years prior to Musk’s takeover, telling the WRC that he previously loved his job but had no way of knowing from the “Fork in the Road” email “what package was being offered” or “implications of agreeing to stay working for Twitter.” He hesitated to click yes, not knowing how his benefits or stock options might change, while discussing his potential departure with other Twitter employees on Slack and posting on Twitter that he would be leaving.
Twitter tried to argue that the Slack discussions and Rooney’s tweets about the email indicated that he intended to resign, but the court disagreed that these were relevant.
“No employee when faced with such a situation could possibly be faulted for refusing to be compelled to give an open-ended unqualified assent to any of the proposals,” MacNamee said.
X’s senior director of human resources, Lauren Wegman, testified that of the 270 employees in Ireland who received the email, only 35 did not click yes. After this week’s ruling, X may face more complaints from the dozens of employees who took the same route Rooney did.
X has not commented on the ruling but is likely disappointed by the loss. The social media company had tried to argue that Rooney’s employment contract “allowed the company to make reasonable changes to its terms and conditions,” RTÉ reported. Wegman had further testified that it was unreasonable for Rooney to believe his pay might change as a result of clicking yes, telling the WRC that his “employment would probably not have ended if he had raised a grievance” within the 24-hour deadline, RTÉ reported.
Rooney’s lawyer, Barry Kenny, told The Guardian that Rooney and his legal team welcomed “the clear and unambiguous finding that my client did not resign from his employment but was unfairly dismissed from his job, notwithstanding his excellent employment record and contribution to the company over the years.”
“It is not okay for Mr. Musk, or indeed any large company to treat employees in such a manner in this country,” Kenny said. “The record award reflects the seriousness and the gravity of the case.”
Twitter will be able to appeal the WRC’s decision, The Journal reported.
An AI-generated image released by xAI during the open-weights launch of Grok-1.
Elon Musk-led social media platform X is training Grok, its AI chatbot, on users’ data, and that’s opt-out, not opt-in. If you’re an X user, that means Grok is already being trained on your posts if you haven’t explicitly told it not to.
Over the past day or so, users of the platform noticed the checkbox to opt out of this data usage in X’s privacy settings. The discovery was accompanied by outrage that user data was being used this way to begin with.
The social media posts about this sometimes seem to suggest that Grok has only just begun training on X users’ data, but users actually don’t know for sure when it started happening.
Earlier today, X’s Safety account tweeted, “All X users have the ability to control whether their public posts can be used to train Grok, the AI search assistant.” But it didn’t clarify either when the option became available or when the data collection began.
You cannot currently disable it in the mobile apps, but you can on mobile web, and X says the option is coming to the apps soon.
On the privacy settings page, X says:
To continuously improve your experience, we may utilize your X posts as well as your user interactions, inputs, and results with Grok for training and fine-tuning purposes. This also means that your interactions, inputs, and results may also be shared with our service provider xAI for these purposes.
X’s privacy policy has allowed for this since at least September 2023.
It’s increasingly common for user data to be used this way; for example, Meta has done the same with its users’ content, and there was an outcry when Adobe updated its terms of use to allow for this kind of thing. (Adobe quickly backtracked and promised to “never” train generative AI on creators’ content.)
How to opt out
You can’t opt out within the iOS or Android apps yet, but you can do so in a few quick steps on either mobile or desktop web. To do so:
Click or tap “More” in the nav panel
Click or tap “Settings and privacy”
Click or tap “Privacy and safety”
Scroll down and click or tap “Grok” under “Data sharing and personalization”
Uncheck the box “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning,” which is checked by default.
Alternatively, you can follow this link directly to the settings page and uncheck the box with just one more click. If you’d like, you can also delete your conversation history with Grok here, provided you’ve actually used the chatbot before.
Elon Musk’s fight against Media Matters for America (MMFA)—a watchdog organization that he largely blames for an ad boycott that tanked Twitter/X’s revenue—has raised an interesting question about whether any judge owning Tesla stock might reasonably be considered biased when weighing any lawsuit centered on the tech billionaire.
In a court filing Monday, MMFA lawyers argued that “undisputed facts—including statements from Musk and Tesla—lay bare the interest Tesla shareholders have in this case.” According to the watchdog, any outcome in the litigation will likely impact Tesla’s finances, and that’s a problem because there’s a possibility that the judge in the case, Reed O’Connor, owns Tesla stock.
“X cannot dispute the public association between Musk—his persona, business practices, and public remarks—and the Tesla brand,” MMFA argued. “That association would lead a reasonable observer to ‘harbor doubts’ about whether a judge with a financial interest in Musk could impartially adjudicate this case.”
It’s still unclear if Judge O’Connor actually owns Tesla stock. But after MMFA’s legal team uncovered disclosures showing that he did as of last year, they argued that fact can only be clarified if the court views Tesla as a party with a “financial interest in the outcome of the case” under Texas law—“no matter how small.”
To make those facts clear, MMFA is now arguing that X must be ordered to add Tesla as an interested person in the litigation, which, a source familiar with the matter told Ars, would most likely lead to a recusal if O’Connor indeed still owned Tesla stock.
“At most, requiring X to disclose Tesla would suggest that judges owning stock in Tesla—the only publicly traded Musk entity—should recuse from future cases in which Musk himself is demonstrably central to the dispute,” MMFA argued.
Ars could not immediately reach X Corp’s lawyer for comment.
However, in X’s court filing opposing the motion to add Tesla as an interested person, X insisted that “Tesla is not a party to this case and has no interest in the subject matter of the litigation, as the business relationships at issue concern only X Corp.’s contracts with X’s advertisers.”
Calling MMFA’s motion “meritless,” X accused MMFA of strategizing to get Judge O’Connor disqualified in order to go “forum shopping” after MMFA received “adverse rulings” on motions to stay discovery and dismiss the case.
As to the question of whether any judge owning Tesla stock might be considered biased in weighing Musk-centric cases, X argued that Judge O’Connor was just as duty-bound to reject an improper motion for recusal, should MMFA go that route, as he was to accept a proper motion.
“Courts are ‘reluctant to fashion a rule requiring judges to recuse themselves from all cases that might remotely affect nonparty companies in which they own stock,'” X argued.
Recently, judges have recused themselves from cases involving Musk without explaining why. In November, a prior judge in the very same Media Matters suit mysteriously recused himself, with The Hill reporting that it was likely the judge’s “impartiality might reasonably be questioned” for reasons like a financial interest or personal bias. Then in June, another judge disqualified himself from ruling on a severance lawsuit raised by former Twitter executives without giving “a specific reason,” Bloomberg Law reported.
Should another recusal come in the MMFA lawsuit, it would be a rare example of a judge clearly disclosing a financial interest in a Musk case.
“The straightforward question is whether Musk’s statements and behavior relevant to this case affect Tesla’s stock price, not whether they are the only factor that affects it,” MMFA argued. “At the very least, there is a serious question about whether Musk’s highly unusual management practices mean Tesla must be disclosed as an interested party.”
Parties expect a ruling on MMFA’s motion in the coming weeks.
Continuing to evolve the fact-checking service that launched as Twitter’s Birdwatch, X has announced that Community Notes can now be requested to clarify problematic posts spreading on Elon Musk’s platform.
X’s Community Notes account confirmed late Thursday that, due to “popular demand,” X had launched a pilot test on the web-based version of the platform. The test is active now and the same functionality will be “coming soon” to Android and iOS, the Community Notes account said.
Through the current web-based pilot, if you’re an eligible user, you can click on the “•••” menu on any X post on the web and request fact-checking from one of Community Notes’ top contributors, X explained. If X receives five or more requests within 24 hours of the post going live, top contributors will be alerted so a Community Note can be added.
Only X users with verified phone numbers will be eligible to request Community Notes, X said, and to start, users will be limited to five requests a day.
“The limit may increase if requests successfully result in helpful notes, or may decrease if requests are on posts that people don’t agree need a note,” X’s website said. “This helps prevent spam and keep note writers focused on posts that could use helpful notes.”
Once X receives five or more requests for a Community Note within a single day, top contributors with diverse views will be alerted to respond. On X, top contributors are constantly changing, as their notes are voted as either helpful or not. If at least 4 percent of their notes are rated “helpful,” X explained on its site, and the impact of their notes meets X standards, they can be eligible to receive alerts.
“A contributor’s Top Writer status can always change as their notes are rated by others,” X’s website said.
Ultimately, X considers notes helpful if they “contain accurate, high-quality information” and “help inform people’s understanding of the subject matter in posts,” X said on another part of its site. To gauge the former, X said that the platform partners with “professional reviewers” from the Associated Press and Reuters. X also continually monitors whether notes marked helpful by top writers match what general X users marked as helpful.
“We don’t expect all notes to be perceived as helpful by all people all the time,” X’s website said. “Instead, the goal is to ensure that on average notes that earn the status of Helpful are likely to be seen as helpful by a wide range of people from different points of view, and not only be seen as helpful by people from one viewpoint.”
X will also be allowing half of the top contributors to request notes during the pilot phase, which X said will help the platform evaluate “whether it is beneficial for Community Notes contributors to have both the ability to write notes and request notes.”
According to X, the criteria for requesting a note have intentionally been designed to be simple during the pilot stage, but X expects “these criteria to evolve, with the goal that requests are frequently found valuable to contributors, and not noisy.”
It’s hard to tell from the outside looking in how helpful Community Notes are to X users. The most recent Community Notes survey data that X points to is from 2022 when the platform was still called Twitter and the fact-checking service was still called Birdwatch.
That data showed that, “on average,” users who saw notes were “20–40 percent less likely to agree with the substance of a potentially misleading Tweet than someone who sees the Tweet alone.” And based on Twitter’s “internal data” at that time, the platform also estimated that “people on Twitter who see notes are, on average, 15–35 percent less likely to Like or Retweet a Tweet than someone who sees the Tweet alone.”
Elon Musk’s fight defending X’s content moderation decisions isn’t just with hate speech researchers and advertisers. He has also long been battling regulators, and this week, he seemed positioned to secure a potentially big win in California, where he’s hoping to permanently block a law that he claims unconstitutionally forces his platform to justify its judgment calls.
At a hearing Wednesday, three judges in the 9th US Circuit Court of Appeals seemed inclined to agree with Musk that a California law requiring disclosures from social media companies that clearly explain their content moderation choices likely violates the First Amendment.
Passed in 2022, AB-587 forces platforms like X to submit a “terms of service report” detailing how they moderate several categories of controversial content. Those categories include hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference, which X’s lawyer, Joel Kurtzberg, told judges yesterday “are the most controversial categories of so-called awful but lawful speech.”
The law would seemingly require more transparency than ever from X, making it easy for users to track exactly how much controversial content X flags and removes—and perhaps most notably for advertisers, how many users viewed concerning content.
To block the law, X sued in 2023, arguing that California was trying to dictate its terms of service and force the company to make statements on content moderation that could generate backlash. X worried that the law “impermissibly” interfered with both “the constitutionally protected editorial judgments” of social media companies, as well as impacted users’ speech by requiring companies “to remove, demonetize, or deprioritize constitutionally protected speech that the state deems undesirable or harmful.”
Any companies found to be non-compliant could face stiff fines of up to $15,000 per violation per day, which X considered “draconian.” But last year, a lower court declined to block the law, prompting X to appeal, and yesterday, the appeals court seemed more sympathetic to X’s case.
At the hearing, Kurtzberg told judges that the law was “deeply threatening to the well-established First Amendment interests” of an “extraordinary diversity of” people, which is why X’s complaint was supported by briefs from reporters, freedom of the press advocates, First Amendment scholars, “conservative entities,” and people across the political spectrum.
All share “a deep concern about a statute that, on its face, is aimed at pressuring social media companies to change their content moderation policies, so as to carry less or even no expression that’s viewed by the state as injurious to its people,” Kurtzberg told judges.
When the court pointed out that the law seemingly just required X to abide by the content moderation policies for each category defined in its own terms of service—and did not compel X to adopt any policy or position that it did not choose—Kurtzberg pushed back.
“They don’t mandate us to define the categories in a specific way, but they mandate us to take a position on what the legislature makes clear are the most controversial categories to moderate and define,” Kurtzberg said. “We are entitled to respond to the statute by saying we don’t define hate speech or racism. But the report also asks about policies that are supposedly, quote, ‘intended’ to address those categories, which is a judgment call.”
“This is very helpful,” Judge Anthony Johnstone responded. “Even if you don’t yourself define those categories in the terms of service, you read the law as requiring you to opine or discuss those categories, even if they’re not part of your own terms,” and “you are required to tell California essentially your views on hate speech, extremism, harassment, foreign political interference, how you define them or don’t define them, and what you choose to do about them?”
“That is correct,” Kurtzberg responded, noting that X considered those categories the most “fraught” and “difficult to define.”