
Tesla, Warner Bros. sued for using AI ripoff of iconic Blade Runner imagery


A copy of a copy of a copy

“That movie sucks,” Elon Musk said in response to the lawsuit.


Elon Musk may have personally used AI to rip off a Blade Runner 2049 image for a Tesla cybercab event after producers rejected any association between their iconic sci-fi movie and Musk or any of his companies.

In a lawsuit filed Tuesday, lawyers for Alcon Entertainment—exclusive rightsholder of the 2017 Blade Runner 2049 movie—accused Warner Bros. Discovery (WBD) of conspiring with Musk and Tesla to steal the image and infringe Alcon’s copyright to benefit financially off the brand association.

According to the complaint, WBD did not approach Alcon for permission until six hours before the Tesla event when Alcon “refused all permissions and adamantly objected” to linking their movie with Musk’s cybercab.

At that point, WBD “disingenuously” downplayed the license being sought, the lawsuit said, claiming they were seeking “clip licensing” that the studio should have known would not provide rights to livestream the Tesla event globally on X (formerly Twitter).

Musk’s behavior cited

Alcon said it would never allow Tesla to exploit its Blade Runner film, so “although the information given was sparse, Alcon learned enough information for Alcon’s co-CEOs to consider the proposal and firmly reject it, which they did.” Specifically, Alcon denied any affiliation—express or implied—between Tesla’s cybercab and Blade Runner 2049.

“Musk has become an increasingly vocal, overtly political, highly polarizing figure globally, and especially in Hollywood,” Alcon’s complaint said. If Hollywood perceived an affiliation with Musk and Tesla, the complaint said, the company risked alienating not just other car brands currently weighing partnerships on the Blade Runner 2099 TV series Alcon has in the works, but also potentially losing access to top Hollywood talent for their films.

The “Hollywood talent pool market generally is less likely to deal with Alcon, or parts of the market may be, if they believe or are confused as to whether, Alcon has an affiliation with Tesla or Musk,” the complaint said.

Musk, the lawsuit said, is “problematic,” and “any prudent brand considering any Tesla partnership has to take Musk’s massively amplified, highly politicized, capricious and arbitrary behavior, which sometimes veers into hate speech, into account.”

In bad faith

Because Alcon had no chance to avoid the affiliation while millions viewed the cybercab livestream on X, Alcon saw Tesla using the images over Alcon’s objections as “clearly” a “bad faith and malicious gambit… to link Tesla’s cybercab to strong Hollywood brands at a time when Tesla and Musk are on the outs with Hollywood,” the complaint said.

Alcon believes that WBD’s agreement was likely worth six or seven figures and likely stipulated that Tesla “affiliate the cybercab with one or more motion pictures from” WBD’s catalog.

While any of the Mad Max movies may have fit the bill, Musk wanted to use Blade Runner 2049, the lawsuit alleged, because that movie features an “artificially intelligent autonomously capable” flying car (known as a spinner) and is “extremely relevant” to “precisely the areas of artificial intelligence, self-driving capability, and autonomous automotive capability that Tesla and Musk are trying to market” with the cybercab.

The Blade Runner 2049 spinner is “one of the most famous vehicles in motion picture history,” the complaint alleged, recently exhibited alongside other iconic sci-fi cars like the Back to the Future time-traveling DeLorean or the light cycle from Tron: Legacy.

As Alcon sees it, Musk seized the misappropriation of the Blade Runner image to help him sell Teslas, and WBD allegedly directed Musk to use AI to skirt Alcon’s copyright to avoid a costly potential breach of contract on the day of the event.

For Alcon, brand partnerships are a lucrative business, with carmakers paying as much as $10 million to associate their vehicles with Blade Runner 2049. By seemingly using AI to generate a stylized copy of the image at the heart of the movie—which references the scene where their movie’s hero, K, meets the original 1982 Blade Runner hero, Rick Deckard—Tesla avoided paying Alcon’s typical fee, their complaint said.

Musk maybe faked the image himself, lawsuit says

During the live event, Musk introduced the cybercab on a WBD Hollywood studio lot. For about 11 seconds, the Tesla CEO “awkwardly” displayed a fake, allegedly AI-generated Blade Runner 2049 film still. He used the image to make a point that apocalyptic films show a future that’s “dark and dismal,” whereas Tesla’s vision of the future is much brighter.

In Musk’s slideshow image, believed to be AI-generated, a male figure is “seen from behind, with close-cropped hair, wearing a trench coat or duster, standing in almost full silhouette as he surveys the abandoned ruins of a city, all bathed in misty orange light,” the lawsuit said. The similarity to the key image used in Blade Runner 2049 marketing is not “coincidental,” the complaint said.

If there were any doubts that this image was supposed to reference the Blade Runner movie, the lawsuit said, Musk “erased them” by directly referencing the movie in his comments.

“You know, I love Blade Runner, but I don’t know if we want that future,” Musk said at the event. “I believe we want that duster he’s wearing, but not the, uh, not the bleak apocalypse.”

The producers think the image was likely generated—”even possibly by Musk himself”—by “asking an AI image generation engine to make ‘an image from the K surveying ruined Las Vegas sequence of Blade Runner 2049,’ or some closely equivalent input direction,” the lawsuit said.

Alcon is not sure exactly what went down after the company rejected rights to use the film’s imagery at the event and is hoping to learn more through the litigation’s discovery phase.

Musk may try to argue that his comments at the Tesla event were “only meant to talk broadly about the general idea of science fiction films and undesirable apocalyptic futures and juxtaposing them with Musk’s ostensibly happier robot car future vision.”

But producers argued that defense is “not credible” since Tesla explicitly asked to use the Blade Runner 2049 image, and there are “better” films in WBD’s library to promote Musk’s message, like the Mad Max movies.

“But those movies don’t have massive consumer goodwill specifically around really cool-looking (Academy Award-winning) artificially intelligent, autonomous cars,” the complaint said, accusing Musk of stealing the image when it wasn’t given to him.

If Tesla and WBD are found to have violated copyright and false representation laws, that potentially puts both companies on the hook for damages that cover not just copyright fines but also Alcon’s lost profits and reputation damage after the alleged “massive economic theft.”

Musk responds to Blade Runner suit

Alcon suspects that Musk believed that Blade Runner 2049 was eligible to be used at the event under the WBD agreement, not knowing that WBD never had “any non-domestic rights or permissions for the Picture.”

Once Musk requested to use the Blade Runner imagery, Alcon alleged that WBD scrambled to secure rights by obscuring the very lucrative “larger brand affiliation proposal” by positioning their ask as a request for much less expensive “clip licensing.”

After Alcon rejected the proposal outright, WBD told Tesla that the affiliation in the event could not occur because X planned to livestream the event globally. But even though Tesla and X allegedly knew that the affiliation was rejected, Musk appears to have charged ahead with the event as planned.

“It all exuded an odor of thinly contrived excuse to link Tesla’s cybercab to strong Hollywood brands,” Alcon’s complaint said. “Which of course is exactly what it was.”

Alcon is hoping a jury will find Tesla, Musk, and WBD violated laws. Producers have asked for an injunction stopping Tesla from using any Blade Runner imagery in its promotional or advertising campaigns. They also want a disclaimer slapped on the livestreamed event video on X, noting that the Blade Runner association is “false or misleading.”

For Musk, a ban on linking Blade Runner to his car company may feel bleak. Last year, he touted the Cybertruck as an “armored personnel carrier from the future—what Bladerunner would have driven.” As Gizmodo noted, this amused many Blade Runner fans, since there never was a character named “Bladerunner”; it was simply the job title of the film’s hero, Deckard.

In response to the lawsuit, Musk took to X to post what Blade Runner fans—who rated the 2017 movie as 88 percent fresh on Rotten Tomatoes—might consider a polarizing take, replying, “That movie sucks” on a post calling out Alcon’s lawsuit as “absurd.”



X’s depressing ad revenue helps Musk avoid EU’s strictest antitrust law

Following an investigation, Elon Musk’s X has won its fight to avoid gatekeeper status under the European Union’s strict competition law, the Digital Markets Act (DMA).

On Wednesday, the European Commission (EC) announced that “X does indeed not qualify as a gatekeeper in relation to its online social networking service, given that the investigation revealed that X is not an important gateway for business users to reach end users.”

Since March, X had strongly opposed the gatekeeper designation by arguing that although X connects advertisers to more than 45 million monthly users, it does not have a “significant impact” on the EU’s internal market, a case filing showed.

A gatekeeper “is presumed to have a significant impact on the internal market where it achieves an annual Union turnover equal to or above EUR 7.5 billion in each of the last three financial years,” the case filing said. But X submitted evidence showing that its Union turnover was less than that in 2022, the same year that Musk took over Twitter and began alienating advertisers by posting their ads next to extremists’ tweets.
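For reference, the DMA’s quantitative presumption can be sketched in a few lines of Python. The thresholds below come from Article 3(2) of the regulation (EUR 7.5 billion Union turnover in each of the last three financial years, or a EUR 75 billion market capitalization, plus 45 million monthly end users and 10,000 yearly business users in the EU); the sample figures for X are illustrative assumptions, not reported numbers, and the sketch simplifies by ignoring the requirement that the user thresholds also be met in each of the last three financial years.

```python
# Sketch of the DMA's quantitative "gatekeeper" presumption (Article 3(2)).
# Thresholds are from the regulation; the example inputs are hypothetical.

TURNOVER_THRESHOLD_EUR = 7.5e9   # annual Union turnover, each of last 3 years
MARKET_CAP_THRESHOLD_EUR = 75e9  # alternative to the turnover test
END_USER_THRESHOLD = 45_000_000  # monthly active end users in the Union
BUSINESS_USER_THRESHOLD = 10_000 # yearly active business users in the Union

def presumed_gatekeeper(union_turnover_last_3y: list[float],
                        market_cap: float,
                        monthly_end_users: int,
                        yearly_business_users: int) -> bool:
    """True if the simplified Article 3(2) presumption would attach."""
    size_test = (all(t >= TURNOVER_THRESHOLD_EUR for t in union_turnover_last_3y)
                 or market_cap >= MARKET_CAP_THRESHOLD_EUR)
    gateway_test = (monthly_end_users >= END_USER_THRESHOLD
                    and yearly_business_users >= BUSINESS_USER_THRESHOLD)
    return size_test and gateway_test

# Illustrative only: a platform with 45M+ monthly users but sub-threshold
# turnover and market cap fails the size test, so no presumption attaches.
print(presumed_gatekeeper([2.0e9, 1.8e9, 1.5e9],
                          market_cap=40e9,
                          monthly_end_users=45_000_000,
                          yearly_business_users=12_000))  # False
```

Even when the presumption does attach, the DMA lets a company rebut it with evidence, which is the kind of argument X pressed here.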

Throughout Musk’s reign at Twitter/X, the social networking company told the EC, both advertising revenue and users have steadily declined in the EU. In particular, “X Ads has a too small and decreasing scale in terms of share of advertising spend in the Union to constitute an important gateway in the market for online advertising,” X argued, further noting that X had a “lack of platform power” to change that anytime soon.

“In the last 15 months, X Ads has faced a decline in number of advertising business users, as well as a decline in pricing,” X argued.


Ex-Twitter execs push for $200M severance as Elon Musk runs X into ground


Musk’s battle with former Twitter execs intensifies as X’s value reaches a new low.

Former Twitter executives, including former CEO Parag Agrawal, are urging a court to open discovery in a dispute over severance and other benefits they allege they were wrongfully denied after Elon Musk took over Twitter in 2022.

According to the former executives, they’ve been blocked for seven months from accessing key documents proving they’re owed roughly $200 million under severance agreements that they say Musk willfully tried to avoid paying in retaliation for executives forcing him to close the Twitter deal. And now, as X’s value tanks lower than ever—reportedly worth 80 percent less than when Musk bought it—the ex-Twitter leaders fear their severance claims “may be compromised” by Musk’s alleged “mismanagement of X,” their court filing said.

The potential for X’s revenue loss to impact severance claims appears to go beyond just the former Twitter executives’ dispute. According to their complaint, “there are also thousands of non-executive former employees whom Musk terminated and is now refusing to pay severance and other benefits” and who have “sued in droves.”

In some of these other severance suits, executives claimed in their motion to open discovery, X appears to be operating more transparently, allowing discovery to proceed beyond what has been possible in the executives’ suit.

But Musk allegedly has “special ire” for Agrawal and other executives who helped push through the Twitter buyout that he tried to wriggle out of, executives claimed. And seemingly because of his alleged anger, X has “only narrowed the discovery” ever since the court approved a stay pending a ruling on X’s motion to drop one of the executives’ five claims. According to the executives, the court only approved the stay of discovery because it was expecting to rule on the motion to dismiss quickly, but after a hearing on that matter was vacated, the stay has remained, helping X’s alleged goal to prolong the litigation.

To get the litigation back on track for a speedier resolution before Musk runs X into the ground, the executives on Thursday asked the court to approve discovery on all claims except the claim disputed in the motion to dismiss.

“Discovery on those topics is inevitable, and there is no reason to further delay,” the executives argued.

The executives have requested that the court open discovery at a hearing scheduled for November 15 to prevent further delays that they fear could harm their severance claims.

Neither X nor a lawyer for the former Twitter executives, David Anderson, could immediately be reached for comment.

X’s fight to avoid severance payments

In their complaint, the former Twitter executives—including Agrawal as well as former Chief Financial Officer Ned Segal, former Chief Legal Officer Vijaya Gadde, and former general counsel Sean Edgett—alleged that Musk planned to deny their severance to make them pay for extra costs that they approved that clinched the Twitter deal.

They claimed that Musk told his official biographer, Walter Isaacson, that he would “hunt every single one of” them “till the day they die,” vowing “a lifetime of revenge.” Musk supposedly even “bragged” to Isaacson about “specifically how he planned to cheat Twitter’s executives out of their severance benefits in order to save himself $200 million.”

Under their severance agreements, the executives could only be denied benefits if terminated for “cause” under specific conditions, they said, none of which allegedly applied to their abrupt firings the second the merger agreement was signed.

“‘Cause’ under the severance plans is limited to extremely narrow circumstances, such as being convicted of a felony or committing ‘gross negligence’ or ‘willful misconduct,'” their complaint noted.

Musk attempted to “manufacture” “ever-changing theories of cause,” they claimed, partly by claiming that “success” fees paid to the law firm that defeated Musk’s suit attempting to go back on the deal constituted “gross negligence” or “willful misconduct.”

According to Musk’s motion to dismiss, the former executives tried to “saddle Twitter, and by extension the many investors who acquired it, with exorbitant legal expenses by forcing approximately $100 million in gratuitous payments to certain law firms in the final hours before the Twitter acquisition closed.” Musk had a huge problem with this, the motion to dismiss said, because the fees were paid despite his objections.

On top of that, Musk considered it “gross negligence” or “willful misconduct” that the executives allegedly paid out retention bonuses that Musk also opposed. And perhaps even more egregiously, they allowed new employees to jump onto severance plans shortly before the acquisition, which “generally” increased the “severance benefits available to these individuals by more than $50 million dollars,” Musk’s motion to dismiss said.

Musk was particularly frustrated by the addition of one employee whom Twitter had allegedly “already decided to terminate and another who was allowed to add herself to one of the Plans—a naked conflict of interest that increased her potential compensation by approximately $15 million.”

But former Twitter executives said they consulted with the board to approve the law firm fees, defending their business decisions as “in the best interest of the company,” not “Musk’s whims.”

“On the morning” Musk acquired Twitter, “the Company’s full Board met,” the executives’ complaint said. “One of the directors noted that it was the largest stockholder value creation by a legal team that he had ever seen. The full Board deliberated and decided to approve the fees.”

Further, they pointed out, “the lion’s share” of those legal fees “was necessitated only by Musk’s improper refusal to close a transaction to which he was contractually bound.”

“If Musk felt that the attorneys’ fees payments, or any other payments, were improper, his remedy was to seek to terminate the deal—not to withhold executives’ severance payments,” their complaint said.

Reimbursement or reinstatement may be sought

To force Musk’s hand, executives have been asking X to share documents, including documents they either created or received while working out the Twitter buyout. But X has delayed production—sometimes curiously claiming that documents are confidential even when executives authored the documents or they’ve been publicly filed in other severance disputes, executives alleged.

Executives have called Musk’s denial of severance “a pointless effort that would not withstand legal scrutiny,” but so far discovery in their lawsuit has not even technically begun. While X has handed over incomplete submissions from its administrative process denying the severance claims, in some cases, X has “entirely refused” to produce documents, they claimed.

They’re hoping once fact-finding concludes that the court will agree that severance benefits are due. That potentially includes stock vested at the price of Twitter on the day that Musk acquired it, $44 billion—a far cry from the $9 billion that X is estimated to be valued at today.

In a filing opposing Musk’s motion to dismiss, the former executives noted that they’re not required to elect their remedies at this stage of the litigation. While their complaint alleged they’re owed vested stock at the acquisition value of $44 billion, their other filing suggested that “reinstatement is also an available remedy.”

Neither option would likely appeal to Musk, who appears determined to fight all severance disputes while scrambling for nearly two years to reverse X’s steady revenue loss.

Since his firing, Agrawal has won at least one of his legal battles with Musk, forcing X to reimburse him for $1.1 million in legal fees. But Musk has largely avoided paying severance as lawsuits pile up, and Agrawal is allegedly owed the most, with his severance package valued at $57 million.

Last fall, X agreed to negotiate with thousands of laid-off employees, but those talks fell through without a settlement reached. In June, Musk defeated one severance suit that alleged that Musk owed former Twitter employees $500 million. But employees involved in that litigation can appeal or join other disputes, the judge noted.

For executives, a growing fear is seemingly that Musk will prolong litigation until X goes under. Last year, Musk bragged that he saved X from bankruptcy by cutting costs, but experts warned that lawsuits piling up from vendors—which PlainSite is tracking—could upend that strategy if Musk loses too many.

“Under Musk’s control, Twitter has become a scofflaw, stiffing employees, landlords, vendors, and others,” executives’ complaint said. “Musk doesn’t pay his bills, believes the rules don’t apply to him, and uses his wealth and power to run roughshod over anyone who disagrees with him.”



X ignores revenge porn takedown requests unless DMCA is used, study says

Why did the study target X?

The University of Michigan research team worried that their experiment, which posted AI-generated non-consensual intimate imagery (NCII) on X, might cross ethical lines.

They chose to conduct the study on X because they deduced it was “a platform where there would be no volunteer moderators and little impact on paid moderators, if any” who might view their AI-generated nude images.

X’s transparency report seems to suggest that most reported non-consensual nudity is actioned by human moderators, but researchers reported that their flagged content was never actioned without a DMCA takedown.

Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.

“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”
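The article doesn’t name the study’s actual tooling, but the screening step it describes is essentially a face-embedding comparison. Here is a rough, hypothetical sketch of that idea using the open-source face_recognition library; the tolerance value and file layout are assumptions, not details from the study.

```python
# Hypothetical sketch of screening a generated image against reference faces,
# in the spirit of the study's verification step (the researchers' actual
# platforms and thresholds are not specified in the article).
import face_recognition

def resembles_known_person(candidate_path: str,
                           known_paths: list[str],
                           tolerance: float = 0.6) -> bool:
    """True if the candidate image's face matches any reference face."""
    candidate = face_recognition.load_image_file(candidate_path)
    candidate_encodings = face_recognition.face_encodings(candidate)
    if not candidate_encodings:
        return False  # no detectable face at all
    known_encodings = []
    for path in known_paths:
        image = face_recognition.load_image_file(path)
        known_encodings.extend(face_recognition.face_encodings(image))
    matches = face_recognition.compare_faces(
        known_encodings, candidate_encodings[0], tolerance=tolerance)
    return any(matches)

# Per the study's logic, only images matching nobody in the reference set
# would be usable:
# if resembles_known_person("generated.png", ["ref1.jpg", "ref2.jpg"]):
#     discard_image()
```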

These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to minimize potential harm, researchers said.

“Our study may contribute to greater transparency in content moderation processes” related to NCII “and may prompt social media companies to invest additional efforts to combat deepfake” NCII, researchers said. “In the long run, we believe the benefits of this study far outweigh the risks.”

According to the researchers, X was given time to automatically detect and remove the content but failed to do so. It’s possible, the study suggested, that X’s decision to allow explicit content starting in June made it harder to detect NCII, as some experts had predicted.

To fix the problem, researchers suggested that both “greater platform accountability” and “legal mechanisms to ensure that accountability” are needed—as is much more research on other platforms’ mechanisms for removing NCII.

“A dedicated” NCII law “must clearly define victim-survivor rights and impose legal obligations on platforms to act swiftly in removing harmful content,” the study concluded.


Elon Musk’s X loses battle over federal request for Trump’s DMs


Prosecutors now have a “blueprint” to seize privileged communications, X warned.

Last year, special counsel Jack Smith asked X (formerly Twitter) to hand over Donald Trump’s direct messages from his presidency without telling Trump. Refusing to comply, X spent the past year arguing that the gag order was an unconstitutional prior restraint on X’s speech and an “end-run” around a record law shielding privileged presidential communications.

Under its so-called free speech absolutist owner Elon Musk, X took this fight all the way to the Supreme Court, only for the nation’s highest court to decline to review X’s appeal on Monday.

It’s unclear exactly why SCOTUS rejected X’s appeal, but in a court filing opposing SCOTUS review, Smith told the court that X’s “contentions lack merit and warrant no further review.” And SCOTUS seemingly agreed.

The government had argued that its nondisclosure order was narrowly tailored to serve a compelling interest in stopping Trump from either deleting his DMs or intimidating witnesses he had engaged with in his DMs while he was in office.

At that time, Smith was publicly probing the interference with a peaceful transfer of power after the 2020 presidential election, and courts had agreed that “there were ‘reasonable grounds to believe’ that disclosing the warrant” to Trump “‘would seriously jeopardize the ongoing investigation’ by giving him ‘an opportunity to destroy evidence, change patterns of behavior, [or] notify confederates,” Smith’s court filing said.

Under the Stored Communications Act (SCA), the government can request data and apply for a nondisclosure order gagging any communications provider from tipping off an account holder about search warrants for limited periods deemed appropriate by a court, Smith noted. X was only prohibited from alerting Trump to the search warrant for 180 days, Smith said, and only restricted from discussing the existence of the warrant.

As the government sees it, this reliance on the SCA “does not give unbounded, standardless discretion to government officials or otherwise create a risk of ‘freewheeling censorship,'” like X claims. But the government warned that affirming X’s appeal “would mean that no SCA warrant could be enforced without disclosure to a potential privilege holder, regardless of the dangers to the integrity of the investigation.”

Court finds X alternative to gag order “unpalatable”

X tried to wave a red flag in its SCOTUS petition, warning the court that this was “the first time in American history” that a court “ordered disclosure of presidential communications without notice to the President and without any adjudication of executive privilege.”

The social media company argued that it receives “tens of thousands” of government data requests annually—including “thousands” with nondisclosure orders—and pushes back on any request for privileged information that does not allow users to assert their privileges. Allowing the lower court rulings to stand, X warned SCOTUS, could create a path for government to illegally seize information not just protected by executive privilege, but also by attorney-client, doctor-patient, or journalist-source privileges.

X’s “policy is to notify users about law enforcement requests ‘prior to disclosure of account information’ unless legally ‘prohibited from doing so,'” X argued.

X suggested that rather than seize Trump’s DMs without giving him a chance to assert his executive privilege, the government should have designated a representative capable of weighing and asserting whether some of the data requested was privileged. That’s how the Presidential Records Act (PRA) works, X noted, suggesting that Smith’s team was improperly trying to avoid PRA compliance by invoking SCA instead.

But the US government didn’t have to prove that the less-restrictive alternative X submitted would have compromised its investigation, X said, because the court categorically rejected X’s submission as “unworkable” and “unpalatable.”

According to the court, designating a representative placed a strain on the government to deduce if the representative could be trusted not to disclose the search warrant. But X pointed out that the government had no explanation for why a PRA-designated representative, Steven Engel—a former assistant attorney general for the Office of Legal Counsel who “publicly testified about resisting the former President’s conduct”—”could not be trusted to follow a court order forbidding him from further disclosure.”

“Going forward, the government will never have to prove it could avoid seriously jeopardizing its investigation by disclosing a warrant to only a trusted representative—a common alternative to nondisclosure orders,” X argued.

In a brief supporting X, attorneys for the nonprofit digital rights group the Electronic Frontier Foundation (EFF) wrote that the court was “unduly dismissive of the arguments” X raised and “failed to apply exacting scrutiny, relieving the government of its burden to actually demonstrate, with evidence, that these alternatives would be ineffective.”

Further, X argued that none of the government’s arguments for nondisclosure made sense. Not only was Smith’s investigation announced publicly—allowing Trump ample time to delete his DMs already—but also “there was no risk of destruction of the requested records because Twitter had preserved them.” On top of that, during the court battle, the government eventually admitted that one rationale for the nondisclosure order—that Trump posed a supposed “flight risk” if the search warrant was known—”was implausible because the former President already had announced his re-election run.”

X unsuccessfully pushed SCOTUS to take on the Trump case as an “ideal” and rare opportunity to publicly decide when nondisclosure orders cross the line when seeking to seize potentially privileged information on social media.

In its petition for SCOTUS review, X pointed out that every social media or communications platform is bombarded with government data requests that only the platforms can challenge. That leaves it up to platforms to figure out when data requests are problematic, which they frequently are, as “the government often agrees to modify or vacate them in informal negotiations,” X argued.

But when the government refuses to negotiate, as in the Trump case, platforms have to decide if litigation is worth it, risking sanctions if the court finds the platform in contempt, just as X was sanctioned $350,000 in the Trump case. If a less restrictive alternative was determined appropriate by the courts, such as appointing a trusted representative, platforms would never have had to guess when data requests threaten to expose their users’ privileged information, X argued.

According to X, another case like this one, in which court filings wouldn’t have to be redacted and a ruling wouldn’t have to happen behind closed doors, won’t come around for decades.

But the government seemingly persuaded the Supreme Court to decline to review the case, partly by arguing that X’s challenge to its nondisclosure order was moot. Responding to X’s objections, the government had eventually agreed to modify the nondisclosure order to disclose the warrant to Trump, so long as the name of the case agent assigned to the investigation was redacted. So X’s appeal is really over nothing, the government suggested.

Additionally, the government argued that “this case would not be an appropriate vehicle” for SCOTUS’ review of the question X raised because “no executive privilege issue actually existed in this case.”

“If review of the underlying legal issues were ever warranted, the Court should await a live case in which the issues are concretely presented,” Smith’s court filing said.

X is likely deflated by SCOTUS declining to review its appeal. In its petition, X warned that the court system risked providing “a blueprint for prosecutors who wish to obtain potentially privileged materials” and that “this end-run will not be limited to federal prosecutors.” State prosecutors will likely be emboldened to do the same now that the precedent has been set, X predicted.

In their brief supporting X, EFF lawyers noted that the government already has “far too much authority to shield its activities from public scrutiny.” By failing to prevent nondisclosure orders from restraining speech, the court system risks making it harder to “meaningfully test these gag orders in court,” EFF warned.

“Even a meritless gag order that is ultimately voided by a court causes great harm while it is in effect,” EFF’s lawyers said, while disclosure “ensures that individuals whose information is searched have an opportunity to defend their privacy from unwarranted and unlawful government intrusions.”



X fails to avoid Australia child safety fine by arguing Twitter doesn’t exist

“I cannot accept this evidence without a much better explanation of Mr. Bogatz’s path of reasoning,” Wheelahan wrote.

Wheelahan emphasized that the Nevada merger law specifically stipulated that “all debts, liabilities, obligations and duties of the Company shall thenceforth remain with or be attached to, as the case may be, the Acquiror and may be enforced against it to the same extent as if it had incurred or contracted all such debts, liabilities, obligations, and duties.” And Bogatz’s testimony failed to “grapple with the significance” of this, Wheelahan said.

Overall, Wheelahan considered Bogatz’s testimony on X’s merger-acquired liabilities “strained,” while deeming the government’s US merger law expert Alexander Pyle to be “honest and ready to make appropriate concessions,” even while some of his testimony was “not of assistance.”

Luckily, it seemed that Wheelahan had no trouble drawing his own conclusion after analyzing Nevada’s merger law.

“I find that a Nevada court would likely hold that the word ‘liabilities'” in the merger law “is broad enough on its proper construction under Nevada law to encompass non-pecuniary liabilities, such as the obligation to respond to the reporting notice,” Wheelahan wrote. “X Corp has therefore failed to show that it was not required to respond to the reporting notice.”

Because X “failed on all its claims,” the social media company must cover costs from the appeal, and X’s costs in fighting the initial fine will seemingly only increase from here.

Fighting fine likely to more than double X’s costs

In a press release celebrating the ruling, eSafety Commissioner Julie Inman Grant criticized X’s attempt to use the merger to avoid complying with Australia’s Online Safety Act.


Elon Musk claims victory after judge blocks Calif. deepfake law

“Almost any digitally altered content, when left up to an arbitrary individual on the Internet, could be considered harmful,” Mendez said, even something seemingly benign like AI-generated estimates of voter turnouts shared online.

Additionally, the Supreme Court has held that “even deliberate lies (said with ‘actual malice’) about the government are constitutionally protected” because the right to criticize the government is at the heart of the First Amendment.

“These same principles safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered: civil penalties for criticisms on the government like those sanctioned by AB 2839 have no place in our system of governance,” Mendez said.

According to Mendez, X posts like Kohls’ parody videos are the “political cartoons of today” and California’s attempt to “bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment” is not justified by even “a well-founded fear of a digitally manipulated media landscape.” If officials find deepfakes are harmful to election prospects, there is already recourse through privacy torts, copyright infringement, or defamation laws, Mendez suggested.

Kosseff told Ars that there could be more narrow ways that government officials looking to protect election integrity could regulate deepfakes online. The Supreme Court has suggested that deepfakes spreading disinformation on the mechanics of voting could possibly be regulated, Kosseff said.

Mendez got it “exactly right” by concluding that the best remedy for election-related deepfakes is more speech, Kosseff said. As Mendez described it, a vague law like AB 2839 seemed to only “uphold the State’s attempt to suffocate” speech.

Parody is vital to democratic debate, judge says

The only part of AB 2839 that survives strict scrutiny, Mendez noted, is a section describing audio disclosures in a “clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than two minutes in length, interspersed within the audio at intervals of not greater than two minutes each.”
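That surviving disclosure rule is mechanical enough to express directly. Below is a minimal sketch, assuming only the statutory text quoted above, of where the required audio disclosures would have to be placed.

```python
def disclosure_timestamps(duration_s: float, max_gap_s: float = 120.0) -> list[float]:
    """Times (in seconds) at which AB 2839's audio disclosure must appear:
    at the start, at the end, and, for audio longer than two minutes,
    interspersed at intervals of no more than two minutes."""
    times = [0.0]
    if duration_s > max_gap_s:
        t = max_gap_s
        while t < duration_s:
            times.append(t)
            t += max_gap_s
    times.append(duration_s)
    return times

print(disclosure_timestamps(90))   # [0.0, 90]: start and end only
print(disclosure_timestamps(300))  # [0.0, 120.0, 240.0, 300]: every <= 2 min
```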


Due to AI fakes, the “deep doubt” era is here


Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again over a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault; the photo contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events, all of which rely on that media to function, because we depend on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.


“Fascists”: Elon Musk responds to proposed fines for disinformation on X

Being responsible is so hard —

“Elon Musk’s had more positions on free speech than the Kama Sutra,” says lawmaker.


Elon Musk has lambasted Australia’s government as “fascists” over proposed laws that could levy substantial fines on social media companies if they fail to comply with rules to combat the spread of disinformation and online scams.

The billionaire owner of social media site X posted the word “fascists” on Friday in response to the bill, which would strengthen the Australian media regulator’s ability to hold companies responsible for the content on their platforms and levy potential fines of up to 5 percent of global revenue. The bill, which was proposed this week, has yet to be passed.

Musk’s comments drew rebukes from senior Australian politicians, with Stephen Jones, Australia’s finance minister, telling national broadcaster ABC that it was “crackpot stuff” and the legislation was a matter of sovereignty.

Bill Shorten, the former leader of the Labor Party and a cabinet minister, accused the billionaire of only championing free speech when it was in his commercial interests. “Elon Musk’s had more positions on free speech than the Kama Sutra,” Shorten said in an interview with Australian radio.

The exchange marks the second time that Musk has confronted Australia over technology regulation.

In May, he accused the country’s eSafety Commissioner of censorship after the government agency took X to court in an effort to force it to remove graphic videos of a stabbing attack in Sydney. A court later denied the eSafety Commissioner’s application.

Musk has also been embroiled in a bitter dispute with authorities in Brazil, where the Supreme Court ruled last month that X should be blocked over its failure to remove or suspend certain accounts accused of spreading misinformation and hateful content.

Australia has been at the forefront of efforts to regulate the technology sector, pitting it against some of the world’s largest social media companies.

This week, the government pledged to introduce a minimum age limit for social media use to tackle “screen addiction” among young people.

In March, Canberra threatened to take action against Meta after the owner of Facebook and Instagram said it would withdraw from a world-first deal to pay media companies to link to news stories.

The government also introduced new data privacy measures to parliament on Thursday that would impose hefty fines and potential jail terms of up to seven years for people found guilty of “doxxing” individuals or groups.

Prime Minister Anthony Albanese’s government had pledged to outlaw doxxing—the publication of personal details online for malicious purposes—this year after the details of a private WhatsApp group containing hundreds of Jewish Australians were published online.

Australia is one of the first countries to pursue laws outlawing doxxing. It is also expected to introduce a tranche of laws in the coming months to regulate how personal data can be used by artificial intelligence.

“These reforms give more teeth to the regulation,” said Monique Azzopardi at law firm Clayton Utz.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Procreate defies AI trend, pledges “no generative AI” in its illustration app

Political pixels —

Procreate CEO: “I really f—ing hate generative AI.”

Still of Procreate CEO James Cuda from a video posted to X.

On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.

“Generative AI is ripping the humanity out of things,” Procreate wrote on its website. “Built on a foundation of theft, the technology is steering us toward a barren future.”

In a video posted on X, Procreate CEO James Cuda laid out his company’s stance, saying, “We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists.”

Cuda’s sentiment echoes the fears of some digital artists who feel that AI image synthesis models, often trained on content without consent or compensation, threaten their livelihood and the authenticity of creative work. That’s not a universal sentiment among artists, but AI image synthesis is often a deeply divisive subject on social media, with some taking starkly polarized positions on the topic.

Procreate CEO James Cuda lays out his argument against generative AI in a video posted to X.

Cuda’s video plays on that polarization with clear messaging against generative AI. His statement reads as follows:

You’ve been asking us about AI. You know, I usually don’t like getting in front of the camera. I prefer that our products speak for themselves. I really fucking hate generative AI. I don’t like what’s happening in the industry and I don’t like what it’s doing to artists. We’re not going to be introducing any generative AI into our products. Our products are always designed and developed with the idea that a human will be creating something. You know, we don’t exactly know where this story’s gonna go or how it ends, but we believe that we’re on the right path supporting human creativity.

The debate over generative AI has intensified among some outspoken artists as more companies integrate these tools into their products. Dominant illustration software provider Adobe has tried to avoid ethical concerns by training its Firefly AI models on licensed or public domain content, but some artists have remained skeptical. Adobe Photoshop currently includes a “Generative Fill” feature powered by image synthesis, and the company is also experimenting with video synthesis models.

The backlash against image and video synthesis is not solely focused on creative app developers. Hardware manufacturer Wacom and game publisher Wizards of the Coast have faced criticism and issued apologies after using AI-generated content in their products. Toys “R” Us also faced a negative reaction after debuting an AI-generated commercial. Companies are still grappling with balancing the potential benefits of generative AI with the ethical concerns it raises.

Artists and critics react

A partial screenshot of Procreate’s AI website captured on August 20, 2024.

So far, Procreate’s anti-AI announcement has been met with a largely positive reaction in replies to its social media post. In a widely liked comment, artist Freya Holmér wrote on X, “this is very appreciated, thank you.”

Some of the more outspoken opponents of image synthesis also replied favorably to Procreate’s move. Karla Ortiz, who is a plaintiff in a lawsuit against AI image-generator companies, replied to Procreate’s video on X, “Whatever you need at any time, know I’m here!! Artists support each other, and also support those who allow us to continue doing what we do! So thank you for all you all do and so excited to see what the team does next!”

Artist RJ Palmer, who stoked the first major wave of AI art backlash with a viral tweet in 2022, also replied to Cuda’s video statement, saying, “Now thats the way to send a message. Now if only you guys could get a full power competitor to [Photoshop] on desktop with plugin support. Until someone can build a real competitor to high level [Photoshop] use, I’m stuck with it.”

A few pro-AI users also replied to the X post, including AI-augmented artist Claire Silver, who uses generative AI as an accessibility tool. She wrote on X, “Most of my early work is made with a combination of AI and Procreate. 7 years ago, before text to image was really even a thing. I loved procreate because it used tech to boost accessibility. Like AI, it augmented trad skill to allow more people to create. No rules, only tools.”

Since AI image synthesis continues to be a highly charged subject among some artists, reaffirming support for human-centric creativity could be an effective way for Procreate to differentiate itself from creativity app giant Adobe, to which it currently plays underdog. While some artists may prefer AI tools, an (ideally healthy) app ecosystem offers enough choice in illustration apps that people can follow their conscience.

Procreate’s anti-AI stance is slightly risky because it might also polarize part of its user base—and if the company changes its mind about including generative AI in the future, it will have to walk back its pledge. But for now, Procreate is confident in its decision: “In this technological rush, this might make us an exception or seem at risk of being left behind,” Procreate wrote. “But we see this road less traveled as the more exciting and fruitful one for our community.”


X is training Grok AI on your data—here’s how to stop it

Grok Your Privacy Options —

Some users were outraged to learn this was opt-out, not opt-in.

An AI-generated image released by xAI during the open-weights launch of Grok-1.

Elon Musk-led social media platform X is training Grok, its AI chatbot, on users’ data, and that’s opt-out, not opt-in. If you’re an X user, that means Grok is already being trained on your posts if you haven’t explicitly told it not to.

Over the past day or so, users of the platform noticed the checkbox to opt out of this data usage in X’s privacy settings. The discovery was accompanied by outrage that user data was being used this way to begin with.

The social media posts about this sometimes seem to suggest that Grok has only just begun training on X users’ data, but users actually don’t know for sure when it started happening.

Earlier today, X’s Safety account tweeted, “All X users have the ability to control whether their public posts can be used to train Grok, the AI search assistant.” But it didn’t clarify either when the option became available or when the data collection began.

You cannot currently disable it in the mobile apps, but you can on mobile web, and X says the option is coming to the apps soon.

On the privacy settings page, X says:

To continuously improve your experience, we may utilize your X posts as well as your user interactions, inputs, and results with Grok for training and fine-tuning purposes. This also means that your interactions, inputs, and results may also be shared with our service provider xAI for these purposes.

X’s privacy policy has allowed for this since at least September 2023.

It’s increasingly common for user data to be used this way; for example, Meta has done the same with its users’ content, and there was an outcry when Adobe updated its terms of use to allow for this kind of thing. (Adobe quickly backtracked and promised to “never” train generative AI on creators’ content.)

How to opt out


You can’t opt out within the iOS or Android apps yet, but you can do so in a few quick steps on either mobile or desktop web. To do so:

  • Click or tap “More” in the nav panel
  • Click or tap “Settings and privacy”
  • Click or tap “Privacy and safety”
  • Scroll down and click or tap “Grok” under “Data sharing and personalization”
  • Uncheck the box “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning,” which is checked by default.

Alternatively, you can navigate directly to the Grok settings page and uncheck the box with just one more click. If you’d like, you can also delete your conversation history with Grok there, provided you’ve actually used the chatbot before.
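For the curious, the same opt-out could in principle be scripted. The sketch below is purely illustrative: the settings URL and the checkbox lookup are assumptions about X’s frequently changing web UI, not documented selectors, and you would still need an authenticated browser session. Manually unchecking the box as described above remains the practical route.

```python
# Purely illustrative Playwright sketch of the opt-out; the URL and the
# checkbox lookup are assumptions about X's web UI, which changes often.
from playwright.sync_api import sync_playwright

GROK_SETTINGS_URL = "https://x.com/settings/grok_settings"  # assumed path

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    # Assumes a logged-in session is restored; authentication is out of
    # scope for this sketch.
    page.goto(GROK_SETTINGS_URL)
    checkbox = page.get_by_role("checkbox")  # assumes one checkbox on the page
    if checkbox.is_checked():                # checked by default, per X
        checkbox.uncheck()
    browser.close()
```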


No judge with Tesla stock should handle Elon Musk cases, watchdog argues


Elon Musk’s fight against Media Matters for America (MMFA)—a watchdog organization that he largely blames for an ad boycott that tanked Twitter/X’s revenue—has raised an interesting question about whether any judge owning Tesla stock might reasonably be considered biased when weighing any lawsuit centered on the tech billionaire.

In a court filing Monday, MMFA lawyers argued that “undisputed facts—including statements from Musk and Tesla—lay bare the interest Tesla shareholders have in this case.” According to the watchdog, any outcome in the litigation will likely impact Tesla’s finances, and that’s a problem because there’s a possibility that the judge in the case, Reed O’Connor, owns Tesla stock.

“X cannot dispute the public association between Musk—his persona, business practices, and public remarks—and the Tesla brand,” MMFA argued. “That association would lead a reasonable observer to ‘harbor doubts’ about whether a judge with a financial interest in Musk could impartially adjudicate this case.”

It’s still unclear if Judge O’Connor actually owns Tesla stock. But after MMFA’s legal team uncovered disclosures showing that he did as of last year, they argued that fact can only be clarified if the court views Tesla as a party with a “financial interest in the outcome of the case” under Texas law—“no matter how small.”

To make those facts clear, MMFA is now arguing that X must be ordered to add Tesla as an interested person in the litigation, which, a source familiar with the matter told Ars, would most likely lead to a recusal if O’Connor indeed still owned Tesla stock.

“At most, requiring X to disclose Tesla would suggest that judges owning stock in Tesla—the only publicly traded Musk entity—should recuse from future cases in which Musk himself is demonstrably central to the dispute,” MMFA argued.

Ars could not immediately reach X Corp’s lawyer for comment.

However, in X’s court filing opposing the motion to add Tesla as an interested person, X insisted that “Tesla is not a party to this case and has no interest in the subject matter of the litigation, as the business relationships at issue concern only X Corp.’s contracts with X’s advertisers.”

Calling MMFA’s motion “meritless,” X accused MMFA of strategizing to get Judge O’Connor disqualified in order to go “forum shopping” after MMFA received “adverse rulings” on motions to stay discovery and dismiss the case.

As to the question of whether any judge owning Tesla stock might be considered impartial in weighing Musk-centric cases, X argued that Judge O’Connor was just as duty-bound to reject an improper motion for recusal, should MMFA go that route, as he was to accept a proper motion.

“Courts are ‘reluctant to fashion a rule requiring judges to recuse themselves from all cases that might remotely affect nonparty companies in which they own stock,'” X argued.

Recently, judges have recused themselves from cases involving Musk without explaining why. In November, a prior judge in the very same Media Matters’ suit mysteriously recused himself, with The Hill reporting that it was likely that the judge’s “impartiality might reasonably be questioned” for reasons like a financial interest or personal bias. Then in June, another judge ruled he was disqualified to rule on a severance lawsuit raised by former Twitter executives without giving “a specific reason,” Bloomberg Law reported.

Should another recusal come in the MMFA lawsuit, it would be a rare example of a judge clearly disclosing a financial interest in a Musk case.

“The straightforward question is whether Musk’s statements and behavior relevant to this case affect Tesla’s stock price, not whether they are the only factor that affects it,” MMFA argued. “At the very least, there is a serious question about whether Musk’s highly unusual management practices mean Tesla must be disclosed as an interested party.”

Parties expect a ruling on MMFA’s motion in the coming weeks.
