“I cannot accept this evidence without a much better explanation of Mr. Bogatz’s path of reasoning,” Wheelahan wrote.
Wheelahan emphasized that the Nevada merger law specifically stipulated that “all debts, liabilities, obligations and duties of the Company shall thenceforth remain with or be attached to, as the case may be, the Acquiror and may be enforced against it to the same extent as if it had incurred or contracted all such debts, liabilities, obligations, and duties.” And Bogatz’s testimony failed to “grapple with the significance” of this, Wheelahan said.
Overall, Wheelahan considered Bogatz’s testimony on X’s merger-acquired liabilities “strained,” while deeming the government’s US merger law expert Alexander Pyle to be “honest and ready to make appropriate concessions,” even while some of his testimony was “not of assistance.”
Luckily, it seemed that Wheelahan had no trouble drawing his own conclusion after analyzing Nevada’s merger law.
“I find that a Nevada court would likely hold that the word ‘liabilities’” in the merger law “is broad enough on its proper construction under Nevada law to encompass non-pecuniary liabilities, such as the obligation to respond to the reporting notice,” Wheelahan wrote. “X Corp has therefore failed to show that it was not required to respond to the reporting notice.”
Because X “failed on all its claims,” the social media company must cover costs from the appeal, and X’s costs in fighting the initial fine will seemingly only increase from here.
Fighting fine likely to more than double X costs
In a press release celebrating the ruling, eSafety Commissioner Julie Inman Grant criticized X’s attempt to use the merger to avoid complying with Australia’s Online Safety Act.
“Almost any digitally altered content, when left up to an arbitrary individual on the Internet, could be considered harmful,” Mendez said, even something seemingly benign like AI-generated estimates of voter turnouts shared online.
Additionally, the Supreme Court has held that “even deliberate lies (said with ‘actual malice’) about the government are constitutionally protected” because the right to criticize the government is at the heart of the First Amendment.
“These same principles safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered: civil penalties for criticisms on the government like those sanctioned by AB 2839 have no place in our system of governance,” Mendez said.
According to Mendez, X posts like Kohls’ parody videos are the “political cartoons of today” and California’s attempt to “bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment” is not justified by even “a well-founded fear of a digitally manipulated media landscape.” If officials find deepfakes are harmful to election prospects, there is already recourse through privacy torts, copyright infringement, or defamation laws, Mendez suggested.
Kosseff told Ars that there could be more narrow ways that government officials looking to protect election integrity could regulate deepfakes online. The Supreme Court has suggested that deepfakes spreading disinformation on the mechanics of voting could possibly be regulated, Kosseff said.
Mendez got it “exactly right” by concluding that the best remedy for election-related deepfakes is more speech, Kosseff said. As Mendez described it, a vague law like AB 2839 seemed to only “uphold the State’s attempt to suffocate” speech.
Parody is vital to democratic debate, judge says
The only part of AB 2839 that survives strict scrutiny, Mendez noted, is a section describing audio disclosures in a “clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than two minutes in length, interspersed within the audio at intervals of not greater than two minutes each.”
Elon Musk is apparently conceding defeat in his fight with Brazil Supreme Court Judge Alexandre de Moraes, as the X social platform has started complying with the judge’s demands in an attempt to get the service unblocked in the country.
X previously refused to suspend dozens of accounts accused of spreading disinformation. Internet service providers have been blocking X under orders from the government since early September, and de Moraes seized $2 million from a Starlink bank account and $1.3 million from an X account to collect on fines issued to X.
X has claimed the orders violate Brazil’s own laws. “Unlike other social media and technology platforms, we will not comply in secret with illegal orders. To our users in Brazil and around the world, X remains committed to protecting your freedom of speech,” the company said in late August.
But in a reversal detailed in a court filing on Friday night, “X’s lawyers said the company had done exactly what Mr. Musk vowed not to: take down accounts that a Brazilian justice ordered removed because the judge said they threatened Brazil’s democracy,” The New York Times reported. “X also complied with the justice’s other demands, including paying fines and naming a new formal representative in the country, the lawyers said.” (X said last month that its previous legal representative in Brazil resigned after de Moraes threatened her with imprisonment.)
X has to prove compliance
According to Reuters, “It was not immediately clear which were the accounts X has been ordered to block, as the probe is confidential.” But it has been reported that many of the accounts belonged to supporters of former President Jair Bolsonaro, who was accused of instigating the January 8, 2023, attack on the Brazilian Congress after his election loss. Some of the accounts reportedly belonged to users accused of threatening federal police officers involved in a probe of Bolsonaro.
De Moraes acknowledged X’s about-face in an order issued Saturday and said that X must submit documents proving its compliance before it can be reinstated. X had an estimated 22 million users in Brazil before the suspension. Bluesky and Meta’s Threads gained users in the country after X was blocked by ISPs.
X briefly became accessible in Brazil last week after the company started routing traffic through Cloudflare, but Brazil’s telecom regulatory agency said that Cloudflare subsequently made changes that let ISPs resume their blocking of X without affecting other websites that use Cloudflare. (Cloudflare CEO Matthew Prince later denied working with the Brazilian government to implement any such changes.) While X said it was merely “an inadvertent and temporary service restoration to Brazilian users,” de Moraes announced a new daily fine of more than $900,000 for failing to comply with the order suspending X operations in Brazil.
Cards Against Humanity sued SpaceX yesterday, alleging that Elon Musk’s firm illegally took over a plot of land on the US-Mexico border that the party-game company bought in 2017 in a bid to stymie then-President Trump’s plan to build a wall.
“As part of CAH’s 2017 holiday campaign, while Donald Trump was President, CAH created a supporter-funded campaign to take a stand against the building of a Border Wall,” said the lawsuit filed in Cameron County District Court in Texas. Cards Against Humanity says it received $15 donations from 150,000 people and used part of that money to buy “a plot of vacant land in Cameron County based upon CAH’s promise to ‘make it as time-consuming and expensive as possible for Trump to build his wall.'”
Cards Against Humanity says it mowed the land “and maintained it in its natural state, marking the edge of the lot with a fence and a ‘No Trespassing’ sign.” But instead of Trump taking over the land, Cards Against Humanity says the parcel was “interfered with and invaded” by Musk’s space company. The lawsuit includes pictures that, according to Cards Against Humanity, show the land when it was first purchased and after SpaceX construction equipment and materials were placed on the land.
This picture was taken in 2017, according to Cards Against Humanity:
Cards Against Humanity
Cards Against Humanity says this picture of SpaceX equipment and materials on the same land was taken in 2024:
Cards Against Humanity
The lawsuit seeks up to $15 million to cover “the cost to restore and repair the Property, the diminution in the Property’s fair market value, the reasonable value of SpaceX’s use of the Property, the loss of goodwill, damages to CAH’s reputation, and other pecuniary loss and actual damages suffered by CAH.” The suit also seeks punitive damages.
Lawsuit: SpaceX “never asked for permission”
The lawsuit said that SpaceX “acquired many of the vacant lots along the road on which the Property is situated,” and started using the Cards Against Humanity property as its own:
SpaceX and/or its contractors entered the Property and, after erecting posts to mark the property line, proceeded to ignore any distinction based upon property ownership. The site was cleared of vegetation, and the soil was compacted with gravel or other substance to allow SpaceX and its contractors to run and park its vehicles all over the Property. Generators were brought in to run equipment and lights while work was being performed before and after daylight. An enormous mound of gravel was unloaded onto the Property; the gravel is being stored and used for the construction of buildings by SpaceX’s contractors along the road.
Large pieces of construction equipment and numerous construction-related vehicles are utilized and stored on the Property continuously. And, of course, workers are present performing construction work and staging materials and vehicles for work to be performed on other tracts. In short, SpaceX has treated the Property as its own for at least six (6) months without regard for CAH’s property rights nor the safety of anyone entering what has become a worksite that is presumably governed by OSHA safety requirements.
The lawsuit said that “SpaceX has never asked for permission to use the Property, much less for the egregious appropriation of the Property for its own profit-making purposes,” and “never reached out to CAH to explain or apologize for the damage caused to the Property and CAH’s ownership interest therein.”
We contacted SpaceX about the lawsuit and will update this article if it provides a response.
In his complaint, Christopher Kohls—who is known as “Mr Reagan” on YouTube and X (formerly Twitter)—said that he was suing “to defend all Americans’ right to satirize politicians.” He claimed that California laws, AB 2655 and AB 2839, were urgently passed after X owner Elon Musk shared a partly AI-generated parody video on the social media platform that Kohls created to “lampoon” presidential hopeful Kamala Harris.
AB 2655, known as the “Defending Democracy from Deepfake Deception Act,” prohibits creating “with actual malice” any “materially deceptive audio or visual media of a candidate for elective office with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, within 60 days of the election.” It requires social media platforms to block or remove any reported deceptive material and label “certain additional content” deemed “inauthentic, fake, or false” to prevent election interference.
The other law at issue, AB 2839, titled “Elections: deceptive media in advertisements,” bans anyone from “knowingly distributing an advertisement or other election communication” with “malice” that “contains certain materially deceptive content” within 120 days of an election in California and, in some cases, within 60 days after an election.
Both bills were signed into law on September 17, and Kohls filed his complaint that day, alleging that both must be permanently blocked as unconstitutional.
Elon Musk called out for boosting Kohls’ video
Kohls’ video that Musk shared seemingly would violate these laws by using AI to make Harris appear to give speeches that she never gave. The manipulated audio sounds like Harris, who appears to be mocking herself as a “diversity hire” and claiming that any critics must be “sexist and racist.”
“Making fun of presidential candidates and other public figures is an American pastime,” Kohls said, defending his parody video. He pointed to a long history of political cartoons and comedic impressions of politicians, claiming that “AI-generated commentary, though a new mode of speech, falls squarely within this tradition.”
While Kohls’ post was clearly marked “parody” in the YouTube title and in his post on X, that “parody” label did not carry over when Musk re-posted the video. This lack of a parody label on Musk’s post—which got approximately 136 million views, roughly twice as many as Kohls’ post—set off California governor Gavin Newsom, who immediately blasted Musk’s post and vowed on X to make content like Kohls’ video “illegal.”
In response to Newsom, Musk poked fun at the governor, posting that “I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America.” For his part, Kohls put up a second parody video targeting Harris, calling Newsom a “bully” in his complaint and claiming that he had to “punch back.”
Shortly after these online exchanges, California lawmakers allegedly rushed to back the governor, Kohls’ complaint said. They allegedly amended the deepfake bills to ensure that Kohls’ video would be banned when the bills were signed into law, replacing a broad exception for satire in one law with a narrower safe harbor that Kohls claimed would chill humorists everywhere.
“For videos,” his complaint said, disclaimers required under AB 2839 must “appear for the duration of the video” and “must be in a font size ‘no smaller than the largest font size of other text appearing in the visual media.'” For a satirist like Kohls who uses large fonts to optimize videos for mobile, this “would require the disclaimer text to be so large that it could not fit on the screen,” his complaint said.
On top of seeming impractical, the disclaimers would “fundamentally” alter “the nature of his message” by removing the comedic effect for viewers by distracting from what allegedly makes the videos funny—”the juxtaposition of over-the-top statements by the AI-generated ‘narrator,’ contrasted with the seemingly earnest style of the video as if it were a genuine campaign ad,” Kohls’ complaint alleged.
Imagine watching Saturday Night Live with prominent disclaimers taking up your TV screen, his complaint suggested.
It’s possible that Kohls’ concerns about AB 2839 are unwarranted. Newsom spokesperson Izzy Gardon told Politico that Kohls’ parody label on X was good enough to clear him of liability under the law.
“Requiring them to use the word ‘parody’ on the actual video avoids further misleading the public as the video is shared across the platform,” Gardon said. “It’s unclear why this conservative activist is suing California. This new disclosure law for election misinformation isn’t any more onerous than laws already passed in other states, including Alabama.”
Elon Musk has lambasted Australia’s government as “fascists” over proposed laws that could levy substantial fines on social media companies if they fail to comply with rules to combat the spread of disinformation and online scams.
The billionaire owner of social media site X posted the word “fascists” on Friday in response to the bill, which would strengthen the Australian media regulator’s ability to hold companies responsible for the content on their platforms and levy potential fines of up to 5 percent of global revenue. The bill, which was proposed this week, has yet to be passed.
Musk’s comments drew rebukes from senior Australian politicians, with Stephen Jones, Australia’s finance minister, telling national broadcaster ABC that it was “crackpot stuff” and the legislation was a matter of sovereignty.
Bill Shorten, the former leader of the Labor Party and a cabinet minister, accused the billionaire of only championing free speech when it was in his commercial interests. “Elon Musk’s had more positions on free speech than the Kama Sutra,” Shorten said in an interview with Australian radio.
The exchange marks the second time that Musk has confronted Australia over technology regulation.
In May, he accused the country’s eSafety Commissioner of censorship after the government agency took X to court in an effort to force it to remove graphic videos of a stabbing attack in Sydney. A court later denied the eSafety Commissioner’s application.
Musk has also been embroiled in a bitter dispute with authorities in Brazil, where the Supreme Court ruled last month that X should be blocked over its failure to remove or suspend certain accounts accused of spreading misinformation and hateful content.
Australia has been at the forefront of efforts to regulate the technology sector, pitting it against some of the world’s largest social media companies.
This week, the government pledged to introduce a minimum age limit for social media use to tackle “screen addiction” among young people.
In March, Canberra threatened to take action against Meta after the owner of Facebook and Instagram said it would withdraw from a world-first deal to pay media companies to link to news stories.
The government also introduced new data privacy measures to parliament on Thursday that would impose hefty fines and potential jail terms of up to seven years for people found guilty of “doxxing” individuals or groups.
Prime Minister Anthony Albanese’s government had pledged to outlaw doxxing—the publication of personal details online for malicious purposes—this year after the details of a private WhatsApp group containing hundreds of Jewish Australians were published online.
Australia is one of the first countries to pursue laws outlawing doxxing. It is also expected to introduce a tranche of laws in the coming months to regulate how personal data can be used by artificial intelligence.
“These reforms give more teeth to the regulation,” said Monique Azzopardi at law firm Clayton Utz.
A screenshot of Taylor Swift’s Kamala Harris Instagram post, captured on September 11, 2024.
On Tuesday night, Taylor Swift endorsed Vice President Kamala Harris for US President on Instagram, citing concerns over AI-generated deepfakes as a key motivator. The artist’s warning aligns with current trends in technology, especially in an era where AI synthesis models can easily create convincing fake images and videos.
“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” she wrote in her Instagram post. “It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”
In August 2024, former President Donald Trump posted AI-generated images on Truth Social falsely suggesting Swift endorsed him, including a manipulated photo depicting Swift as Uncle Sam with text promoting Trump. The incident sparked Swift’s fears about the spread of misinformation through AI.
This isn’t the first time Swift and generative AI have appeared together in the news. In February, we reported that a flood of explicit AI-generated images of Swift originated from a 4chan message board where users took part in daily challenges to bypass AI image generator filters.
Elon Musk speaks at the Satellite Conference and Exhibition on March 9, 2020 in Washington, DC.
Getty Images | Win McNamee
US District Judge Reed O’Connor today recused himself from Elon Musk’s lawsuit alleging that advertisers targeted X with an illegal boycott.
O’Connor was apparently Musk’s preferred judge in the lawsuit filed last week against the World Federation of Advertisers (WFA) and several large corporations. In order to land O’Connor, the Musk-owned X Corp. sued in the Wichita Falls division of the US District Court for the Northern District of Texas.
O’Connor purchased Tesla stock, a fact that generated controversy in a different X lawsuit that he is still overseeing. He also invested in Unilever, one of the defendants in X’s advertising lawsuit. The Unilever investment appears to be what drove O’Connor’s recusal decision.
“I hereby recuse myself from the above numbered case,” O’Connor wrote in a filing today. The case was reassigned to District Judge Ed Kinkeade. Both judges were appointed by President George W. Bush. O’Connor is based in Fort Worth, while Kinkeade is based in Dallas.
A financial disclosure report for calendar year 2022 shows that O’Connor owned stock in Unilever valued at $15,000 or less. The investment generated a dividend of $1,000 or less during 2022, the filing indicates. Unilever is one of the defendants named in X’s advertising lawsuit, along with Mars, Incorporated; CVS Health Corporation; and Ørsted A/S.
The 2022 disclosure also listed a purchase of Tesla stock valued between $15,001 and $50,000. “It is unclear whether O’Connor has sold his investment of up to $50,000 in Tesla stock, because the judge’s disclosure form covering the 2023 calendar year is not publicly available,” NPR wrote on Friday. “He has requested a filing extension, according to an official with the administrative office of US courts who was not authorized to speak on the record.”
Kinkeade filed a 2023 financial disclosure report, which is much shorter than O’Connor’s and lists several rental properties and bank interest.
Media Matters questioned judge’s impartiality
O’Connor’s Tesla stock has been a point of contention in X’s case against Media Matters for America, which O’Connor has not recused himself from. O’Connor remaining on the Media Matters case while recusing himself from the advertising case suggests that his Unilever investment is the main factor in the recusal.
Media Matters drew Musk’s ire when it published research on ads being placed next to pro-Nazi content on X. Musk’s lawsuit blames Media Matters for the platform’s advertising losses.
Media Matters argued in a July court filing that Tesla, the Musk-led electric carmaker, should be listed by X as an “interested party” in the case. “Here, if the Court indeed owns stock in Tesla, recusal would be required under two separate provisions of the judicial recusal statute,” Media Matters wrote. “By failing to disclose Tesla, however, X has deprived the Court of information it needed to make an informed recusal decision.”
Media Matters said there is a public association between Musk and the Tesla brand, and that this association leads to doubts “about whether a judge with a financial interest in Musk could impartially adjudicate” the case filed by X.
“Because an investment in Tesla is, in large part, a bet on Musk’s reputation and management choices—key issues in this case—ownership of Tesla stock would be disqualifying,” Media Matters wrote.
X, previously named Twitter, has argued that O’Connor shouldn’t have to recuse himself from the Media Matters case. Tesla does not exert any control over X, and Media Matters’ argument that Tesla has an interest in the case is “tenuous and speculative,” X wrote in a court filing.
O’Connor gave X a victory in April when he denied a Media Matters motion to delay discovery until its motion to dismiss is resolved. Media Matters has complained about the financial toll of the lawsuit, telling the court that “X’s discovery requests are extremely broad and unduly burdensome.” Media Matters also issued a statement to the press saying it needed to lay off staff because of a “legal assault on multiple fronts.”
O’Connor was assigned to the Media Matters case in November 2023 after the original judge recused himself.
The labor court ruled that the email not only failed to give staff adequate notice, but also that an employee’s failure to click “yes” could in no way constitute a legal act of resignation. Instead, the court reviewed evidence alleging that the email appeared designed to either get employees to agree to new employment terms, sight unseen, or else push employees to volunteer for dismissal during a time of mass layoffs across Twitter.
“Going forward, to build a breakthrough Twitter 2.0 and succeed in an increasingly competitive world, we will need to be extremely hardcore,” Musk wrote in the all-staff email. “This will mean working long hours at high intensity. Only exceptional performance will constitute a passing grade.”
With the subject line, “A Fork in the Road,” the email urged staff, “if you are sure that you want to be part of the new Twitter, please click yes on the link below. Anyone who has not done so by 5pm ET tomorrow (Thursday) will receive three months of severance. Whatever decision you make, thank you for your efforts to make Twitter successful.”
In a 73-page ruling, an adjudication officer for the Irish Workplace Relations Commission (WRC), Michael MacNamee, ruled that Twitter’s abrupt dismissal of an Ireland-based senior executive, Gary Rooney, was unfair, the Irish public service broadcaster RTÉ reported. Rooney had argued that his contract clearly stated that his resignation must be provided in writing, and could not be inferred from his failure to fill out a form.
A spokesperson for the Department of Enterprise, Trade, and Employment, which handles the WRC’s media inquiries, told Ars that the decision will be published on the WRC’s website on August 26 after both parties have “the opportunity to consider it in full.”
Now, instead of paying Rooney the draft severance amount worth a little more than $25,000, Twitter, which is now called X, has to pay Rooney more than $600,000. According to many outlets, this is a record award from the WRC and included about $220,000 “for prospective future loss of earnings.”
The WRC dismissed Rooney’s claim regarding an allegedly owed performance bonus for 2022 but otherwise largely agreed with his arguments on the unfair dismissal.
Rooney had worked for Twitter for nine years prior to Musk’s takeover, telling the WRC that he previously loved his job but had no way of knowing from the “Fork in the Road” email “what package was being offered” or “implications of agreeing to stay working for Twitter.” He hesitated to click yes, not knowing how his benefits or stock options might change, while discussing a potential departure with other Twitter employees on Slack and posting on Twitter that he would be leaving.
Twitter tried to argue that the Slack discussions and Rooney’s tweets about the email indicated that he intended to resign, but the court disagreed that these were relevant.
“No employee when faced with such a situation could possibly be faulted for refusing to be compelled to give an open-ended unqualified assent to any of the proposals,” MacNamee said.
X’s senior director of human resources, Lauren Wegman, testified that of the 270 employees in Ireland who received the email, only 35 did not click yes. After this week’s ruling, X may face more complaints from the dozens of employees who took the same route Rooney did.
X has not commented on the ruling. The social media company had tried to argue that Rooney’s employment contract “allowed the company to make reasonable changes to its terms and conditions,” RTÉ reported. Wegman had further testified that it was unreasonable for Rooney to believe his pay might change as a result of clicking yes, telling the WRC that his “employment would probably not have ended if he had raised a grievance” within the 24-hour deadline, RTÉ reported.
Rooney’s lawyer, Barry Kenny, told The Guardian that Rooney and his legal team welcomed “the clear and unambiguous finding that my client did not resign from his employment but was unfairly dismissed from his job, notwithstanding his excellent employment record and contribution to the company over the years.”
“It is not okay for Mr. Musk, or indeed any large company to treat employees in such a manner in this country,” Kenny said. “The record award reflects the seriousness and the gravity of the case.”
Twitter will be able to appeal the WRC’s decision, The Journal reported.
Elon Musk’s X Corp. today sued the World Federation of Advertisers and several large corporations, claiming they “conspired, along with dozens of non-defendant co-conspirators, to collectively withhold billions of dollars in advertising revenue” from the social network formerly known as Twitter.
“We tried peace for 2 years, now it is war,” Musk wrote today, a little over eight months after telling boycotting advertisers to “go fuck yourself.”
X’s lawsuit in US District Court for the Northern District of Texas targets a World Federation of Advertisers initiative called the Global Alliance for Responsible Media (GARM). The other defendants are Unilever PLC; Unilever United States; Mars, Incorporated; CVS Health Corporation; and Ørsted A/S. Those companies are all members of GARM. X itself is still listed as one of the group’s members.
“This is an antitrust action relating to a group boycott by competing advertisers of one of the most popular social media platforms in the United States… Concerned that Twitter might deviate from certain brand safety standards for advertising on social media platforms set through GARM, the conspirators collectively acted to enforce Twitter’s adherence to those standards through the boycott,” the lawsuit said.
The lawsuit seeks treble damages to be calculated based on the “actual damages in an amount to be determined at trial.” X also wants “a permanent injunction under Section 16 of the Clayton Act, enjoining Defendants from continuing to conspire with respect to the purchase of advertising from Plaintiff.”
The lawsuit came several weeks after Musk wrote that X “has no choice but to file suit against the perpetrators and collaborators in the advertising boycott racket,” and called for “criminal prosecution.” Musk’s complaints were buoyed by a House Judiciary Committee report claiming that “the extent to which GARM has organized its trade association and coordinates actions that rob consumers of choices is likely illegal under the antitrust laws and threatens fundamental American freedoms.”
Yaccarino claims “illegal boycott” is stain on industry
We contacted all of the organizations named as defendants in the lawsuit and will update this article if any provide a response.
An advertising industry watchdog group called the Check My Ads Institute, which is not involved in the lawsuit, said that Musk’s claims should fail under the First Amendment. “Advertisers have a First Amendment right to choose who and what they want to be associated with… Elon Musk and X executives have the right, protected by the First Amendment, to say what they want online, even when it’s inaccurate, and advertisers have the right to keep their ads away from it,” the group said.
X CEO Linda Yaccarino posted an open letter to advertisers claiming that the alleged “illegal boycott” is “a stain on a great industry, and cannot be allowed to continue.”
“The illegal behavior of these organizations and their executives cost X billions of dollars… To those who broke the law, we say enough is enough. We are compelled to seek justice for the harm that has been done by these and potentially additional defendants, depending what the legal process reveals,” Yaccarino wrote.
Yaccarino also sought to gain support from X users in a video message. “These organizations targeted our company and you, our users,” she said.
X doesn’t provide public earnings reports because Musk took the company private after buying Twitter. A recent New York Times article said that “in the second quarter of this year, X earned $114 million in revenue in the United States, a 25 percent decline from the first quarter and a 53 percent decline from the previous year.”
Elon Musk and Sam Altman share the stage in 2015, the same year that Musk alleged that Altman’s “deception” began.
After withdrawing his lawsuit in June for unknown reasons, Elon Musk has revived a complaint accusing OpenAI and its CEO Sam Altman of fraudulently inducing Musk to contribute $44 million in seed funding by promising that OpenAI would always open-source its technology and prioritize serving the public good over profits as a permanent nonprofit.
Instead, Musk alleged that Altman and his co-conspirators—”preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence”—always intended to “betray” these promises in pursuit of personal gains.
As OpenAI’s technology advanced toward artificial general intelligence (AGI) and strove to surpass human capabilities, “Altman set the bait and hooked Musk with sham altruism then flipped the script as the non-profit’s technology approached AGI and profits neared, mobilizing Defendants to turn OpenAI, Inc. into their personal piggy bank and OpenAI into a moneymaking bonanza, worth billions,” Musk’s complaint said.
Where Musk saw OpenAI as his chance to fund a meaningful rival to stop Google from controlling the most powerful AI, Altman and others “wished to launch a competitor to Google” and allegedly deceived Musk to do it. According to Musk:
The idea Altman sold Musk was that a non-profit, funded and backed by Musk, would attract world-class scientists, conduct leading AI research and development, and, as a meaningful counterweight to Google’s DeepMind in the race for Artificial General Intelligence (“AGI”), decentralize its technology by making it open source. Altman assured Musk that the non-profit structure guaranteed neutrality and a focus on safety and openness for the benefit of humanity, not shareholder value. But as it turns out, this was all hot-air philanthropy—the hook for Altman’s long con.
Without Musk’s involvement and funding during OpenAI’s “first five critical years,” Musk’s complaint said, “it is fair to say” that “there would have been no OpenAI.” And when Altman and others repeatedly approached Musk with plans to shift OpenAI to a for-profit model, Musk held firm, conditioning his ongoing contributions on OpenAI remaining a nonprofit and its tech largely remaining open source.
“Either go do something on your own or continue with OpenAI as a nonprofit,” Musk told Altman in 2018 when Altman tried to “recast the nonprofit as a moneymaking endeavor to bring in shareholders, sell equity, and raise capital.”
“I will no longer fund OpenAI until you have made a firm commitment to stay, or I’m just being a fool who is essentially providing free funding to a startup,” Musk said at the time. “Discussions are over.”
But discussions weren’t over. And now Musk seemingly does feel like a fool after OpenAI exclusively licensed GPT-4 and all “pre-AGI” technology to Microsoft in 2023, while putting up paywalls and “failing to publicly disclose the non-profit’s research and development, including details on GPT-4, GPT-4T, and GPT-4o’s architecture, hardware, training method, and training computation.” This excluded the public “from open usage of GPT-4 and related technology to advance Defendants and Microsoft’s own commercial interests,” Musk alleged.
Now Musk has revived his suit against OpenAI, asking the court to award maximum damages for OpenAI’s alleged fraud, breaches of contract, false advertising, unfair competition, and other violations.
He has also asked the court to decide a very technical question: whether OpenAI’s most recent models should be considered AGI, which would void Microsoft’s license. According to Musk, that’s the only way to ensure that a private corporation doesn’t control OpenAI’s AGI models, a condition he repeatedly attached to his financial contributions.
“Musk contributed considerable money and resources to launch and sustain OpenAI, Inc., which was done on the condition that the endeavor would be and remain a non-profit devoted to openly sharing its technology with the public and avoid concentrating its power in the hands of the few,” Musk’s complaint said. “Defendants knowingly and repeatedly accepted Musk’s contributions in order to develop AGI, with no intention of honoring those conditions once AGI was in reach. Case in point: GPT-4, GPT-4T, and GPT-4o are all closed source and shrouded in secrecy, while Defendants actively work to transform the non-profit into a thoroughly commercial business.”
Musk wants Microsoft’s GPT-4 license voided
Musk also asked the court to declare OpenAI’s exclusive license to Microsoft null and void, or else determine “whether GPT-4, GPT-4T, GPT-4o, and other OpenAI next generation large language models constitute AGI and are thus excluded from Microsoft’s license.”
It’s clear that Musk considers these models to be AGI, and he’s alleged that Altman’s current control of OpenAI’s Board—after firing dissidents in 2023 whom Musk claimed tried to get Altman ousted for prioritizing profits over AI safety—gives Altman the power to obscure when OpenAI’s models constitute AGI.
An AI-generated image released by xAI during the open-weights launch of Grok-1.
Elon Musk-led social media platform X is training Grok, its AI chatbot, on users’ data, and that’s opt-out, not opt-in. If you’re an X user and haven’t explicitly told it not to, Grok is already being trained on your posts.
Over the past day or so, users of the platform noticed a checkbox in X’s privacy settings to opt out of this data usage. The discovery was accompanied by outrage that user data was being used this way to begin with.
Some of the social media posts about this seem to suggest that Grok has only just begun training on X users’ data, but users don’t actually know for sure when it started.
Earlier today, X’s Safety account tweeted, “All X users have the ability to control whether their public posts can be used to train Grok, the AI search assistant.” But it didn’t clarify either when the option became available or when the data collection began.
You cannot currently disable it in the mobile apps, but you can on mobile web, and X says the option is coming to the apps soon.
On the privacy settings page, X says:
To continuously improve your experience, we may utilize your X posts as well as your user interactions, inputs, and results with Grok for training and fine-tuning purposes. This also means that your interactions, inputs, and results may also be shared with our service provider xAI for these purposes.
X’s privacy policy has allowed for this since at least September 2023.
It’s increasingly common for user data to be used this way; for example, Meta has done the same with its users’ content, and there was an outcry when Adobe updated its terms of use to allow for this kind of thing. (Adobe quickly backtracked and promised to “never” train generative AI on creators’ content.)
How to opt out
You can’t opt out within the iOS or Android apps yet, but you can do so in a few quick steps on either mobile or desktop web. To do so:
Click or tap “More” in the nav panel
Click or tap “Settings and privacy”
Click or tap “Privacy and safety”
Scroll down and click or tap “Grok” under “Data sharing and personalization”
Uncheck the box “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning,” which is checked by default.
Alternatively, you can follow this link directly to the settings page and uncheck the box with just one more click. If you’d like, you can also delete your conversation history with Grok here, provided you’ve actually used the chatbot before.