lawsuit


George Carlin’s heirs sue comedy podcast over “AI-generated” impression

AI’ll see you in court —

Suit alleges copyright infringement and illegal use of Carlin’s name and likeness.

A promotional image cited in the lawsuit uses Carlin’s name and image to promote the Dudesy podcast and special.

The estate of George Carlin has filed a federal lawsuit against the comedy podcast Dudesy for an hour-long comedy special sold as an AI-generated impression of the late comedian.

In the lawsuit, filed by Carlin manager Jerold Hamza in a California district court, the Carlin estate points out that the special, “George Carlin: I’m Glad I’m Dead,” presents itself as being created by an AI trained on decades’ worth of Carlin’s material. That training would, by definition, involve making “unauthorized copies” of “Carlin’s original, copyrighted routines” without permission in order “to fabricate a semblance of Carlin’s voice and generate a Carlin stand-up comedy routine,” according to the lawsuit.

“Defendants’ AI-generated ‘George Carlin Special’ is not a creative work,” the lawsuit reads, in part. “It is a piece of computer-generated click-bait which detracts from the value of Carlin’s comedic works and harms his reputation. It is a casual theft of a great American artist’s work.”

The Dudesy special, “George Carlin: I’m Glad I’m Dead.”

The use of copyrighted material in AI training models is one of the most contentious and unsettled areas of law in the AI field at the moment. Just this month, media organizations testified before Congress to argue against AI makers’ claims that training on news content was legal under a “fair use” exemption.

The Dudesy special is presented as an “impression” of Carlin that the AI generated by “listening” to Carlin’s existing material “in the exact same way a human impressionist would.” But the lawsuit takes direct issue with this analogy, arguing that an AI model is just an “output generated by a technological process that is an unlawful appropriation of Carlin’s identity, which also damages the value of Carlin’s real work and his legacy.”

In his image

There is some debate as to whether the Dudesy special was actually written by a specially trained AI, as Ars laid out in detail this week. But even if the special was partially or fully human-written, it would still represent an unauthorized use of Carlin’s name and likeness for promotional purposes, according to the lawsuit.

“Defendants always presented the Dudesy Special as an AI-generated George Carlin comedy special, where George Carlin was ‘resurrected’ with the use of modern technology,” the lawsuit argues. “In short, Defendants sought to capitalize on the name, reputation, and likeness of George Carlin in creating, promoting, and distributing the Dudesy Special and using generated images of Carlin, Carlin’s voice, and images designed to evoke Carlin’s presence on a stage.”

A Dudesy-generated image representing AI’s impending replacement of human stand-up comedy.

While the special doesn’t present images or video of Carlin (AI-generated or not), the YouTube thumbnail for the video shows an AI-generated image of a comedian with Carlin’s signature gray ponytail looking out over an audience. The lawsuit also cites numerous social media posts where Carlin’s name and image are used to promote the special or the Dudesy podcast.

That creates an “association” between the Dudesy podcast and Carlin that is “harmful to Carlin’s reputation, his legacy, and to the value of his real work,” according to the lawsuit. “Worse, if not curtailed now, future AI models may incorrectly associate the Dudesy Special with Carlin, ultimately folding Defendants’ knockoff version in with Carlin’s actual creative output.”

Anticipating potential free speech defenses, the lawsuit argues that the special “has no comedic or creative value absent its self-proclaimed connection with George Carlin” and that it doesn’t “satirize him as a performer or offer an independent critique of society.”

Kelly Carlin, the late comedian’s daughter, told The Daily Beast earlier this month that she was talking to lawyers about potential legal action. “It’s not his material. It’s not his voice,” she said at the time. “So they need to take the name off because it is not George Carlin.”

“The ‘George Carlin’ in that video is not the beautiful human who defined his generation and raised me with love,” Kelly Carlin wrote in a statement obtained by Variety. “It is a poorly executed facsimile cobbled together by unscrupulous individuals to capitalize on the extraordinary goodwill my father established with his adoring fanbase.”

The lawsuit asks a court to force Dudesy to “remove, take down, and destroy any video or audio copies… of the ‘George Carlin Special,’ wherever they may be located,” as well as pay punitive damages.



Twin Galaxies, Billy Mitchell settle Donkey Kong score case before trial

Billy Mitchell (left) and Twin Galaxies owner Jace Hall (center) attend an event at the Arcade Expo 2015 in Banning, California.

The long, drawn-out legal fight between famed high-score chaser Billy Mitchell and “International Scoreboard” Twin Galaxies appears to be over. Courthouse News reports that Mitchell and Twin Galaxies have reached a confidential settlement in the case months before an oft-delayed trial was finally set to start.

The settlement comes as Twin Galaxies counsel David Tashroudian has come under fire for legal misconduct after making improper contact with two of Mitchell’s witnesses in the case. Tashroudian formally apologized to the court for that contact in a filing earlier this month, writing that he had “debased myself before this Court” and “allowed my personal emotions to cloud my judgement” by reaching out to the witnesses outside of official court proceedings.

But in the same statement, Tashroudian took Mitchell’s side to task for “what appeared to me to be the purposeful fabrication and hiding of evidence.” The emotional, out-of-court contact was intended “to prove what I still genuinely believe is fraud on this Court,” he wrote.

Billy Mitchell reviews a document in front of a Donkey Kong machine decked out for an annual “Kong Off” high score competition.
<p>In <a href=a filing last month, Tashroudian asked the court to sanction Mitchell for numerous alleged lies and fabrications during the evidence-discovery process. Those alleged lies encompass subjects including an alleged $33,000 payment associated with the sale of Twin Galaxies; the technical cabinet testing of Carlos Pineiro; the setup of a recording device for one of Mitchell’s high-score performances; a supposed “Player of the Century” plaque Mitchell says he had received from Namco; and a technical analysis that showed, according to Tashroudian, “that the videotaped recordings of his score in questions could not have come from original unmodified Donkey Kong hardware.”

Tashroudian asked the court to impose sanctions on Mitchell—up to and including dismissing the case—for these and other “deliberate and egregious [examples of] discovery abuse throughout the course of this litigation by lying at deposition and by engaging in the spoliation of evidence with the intent to defraud the Court.” A hearing on both Mitchell’s and Tashroudian’s alleged actions was scheduled for later this week; Tashroudian could still face referral to the State Bar for his misconduct.

“Plaintiff wants nothing more than for me to be kicked off of this case,” Tashroudian continued in his apology statement. “I know this will not stop. I am now [Mitchell’s] and his counsel’s target. The facts support [Twin Galaxies’] defense and now [Mitchell] realizes that. He also realizes that he has dug himself into a hole by lying in discovery. I do not say that lightly.”

Mitchell, Tashroudian, and representatives for Twin Galaxies were not immediately available to respond to a request for comment from Ars Technica.



Google agrees to settle Chrome incognito mode class action lawsuit

Not as private as you thought —

2020 lawsuit accused Google of tracking incognito activity, tying it to users’ profiles.


Google has indicated that it is ready to settle a class-action lawsuit filed in 2020 over its Chrome browser’s Incognito mode. Arising in the Northern District of California, the lawsuit accused Google of continuing to “track, collect, and identify [users’] browsing data in real time” even when they had opened a new Incognito window.

The lawsuit, filed by Florida resident William Byatt and California residents Chasom Brown and Maria Nguyen, accused Google of violating wiretap laws. It also alleged that sites using Google Analytics or Ad Manager collected information from browsers in Incognito mode, including web page content, device data, and IP address. The plaintiffs also accused Google of taking Chrome users’ private browsing activity and then associating it with their already-existing user profiles.

Google initially attempted to have the lawsuit dismissed by pointing to the message displayed when users turned on Chrome’s Incognito mode. That warning tells users that their activity “might still be visible to websites you visit.”

Judge Yvonne Gonzalez Rogers rejected Google’s bid for summary judgment in August, pointing out that Google never revealed to its users that data collection continued even while they were browsing in Incognito mode.

“Google’s motion hinges on the idea that plaintiffs consented to Google collecting their data while they were browsing in private mode,” Rogers ruled. “Because Google never explicitly told users that it does so, the Court cannot find as a matter of law that users explicitly consented to the at-issue data collection.”

According to the notice filed on Tuesday, Google and the plaintiffs have agreed to terms that will result in the litigation being dismissed. The agreement will be presented to the court by the end of January, with final approval expected by the end of February.



NY Times copyright suit wants OpenAI to delete all GPT instances

Not the sincerest form of flattery —

Suit shows evidence that GPT-based systems will reproduce Times articles if asked.

Microsoft is named in the suit for allegedly building the system that allowed GPT derivatives to be trained using infringing material.

In August, word leaked out that The New York Times was considering joining the growing legion of creators that are suing AI companies for misappropriating their content. The Times had reportedly been negotiating with OpenAI regarding the potential to license its material, but those talks had not gone smoothly. So, four months after word of a potential suit first surfaced, the suit has now been filed.

The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses OpenAI’s technology to power its Copilot service and helped provide the infrastructure for training the GPT large language models. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times’ paywall and ascribe hallucinated misinformation to the Times.

Journalism is expensive

The suit notes that The Times maintains a large staff that allows it to dedicate reporters to a huge range of beats and engage in important investigative journalism, among other things. Because of those investments, the newspaper is often considered an authoritative source on many matters.

All of that costs money, and The Times earns it by limiting access to its reporting through a robust paywall. In addition, each print edition carries a copyright notice, the Times’ terms of service limit the copying and use of any published material, and the paper can be selective about how it licenses its stories. Beyond driving revenue, these restrictions also help it maintain its reputation as an authoritative voice by controlling how its works appear.

The suit alleges that OpenAI-developed tools undermine all of that. “By providing Times content without The Times’s permission or authorization, Defendants’ tools undermine and damage The Times’s relationship with its readers and deprive The Times of subscription, licensing, advertising, and affiliate revenue,” the suit alleges.

Part of the unauthorized use The Times alleges came during the training of various versions of GPT. Prior to GPT-3.5, information about the training dataset was made public. One of the sources used is a large collection of online material called “Common Crawl,” which the suit alleges contains 16 million unique records of content from sites published by The Times. That makes the Times the third most represented source in the dataset, behind only Wikipedia and a database of US patents.

OpenAI no longer discloses as many details about the data used to train recent GPT versions, but all indications are that full-text NY Times articles are still part of that process (much more on that in a moment). Expect access to training information to be a major issue during discovery if this case moves forward.

Not just training

A number of suits have been filed regarding the use of copyrighted material during training of AI systems. But the Times’ suit goes well beyond that to show how the material ingested during training can come back out during use. “Defendants’ GenAI tools can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style, as demonstrated by scores of examples,” the suit alleges.

The suit alleges—and we were able to verify—that it’s comically easy to get GPT-powered systems to offer up content that is normally protected by the Times’ paywall. The suit shows a number of examples of GPT-4 reproducing large sections of articles nearly verbatim.

The suit includes screenshots of ChatGPT being given the title of a piece at The New York Times and asked for the first paragraph, which it delivers. Getting the ensuing text is apparently as simple as repeatedly asking for the next paragraph.

OpenAI has apparently closed that loophole in ChatGPT between the preparation of the suit and the present. We entered some of the prompts shown in the suit and were advised, “I recommend checking The New York Times website or other reputable sources,” although we can’t rule out that context provided prior to that prompt could produce copyrighted material.

Ask for a paragraph, and Copilot will hand you a wall of normally paywalled text.

But not all loopholes have been closed. The suit also shows output from Bing Chat, since rebranded as Copilot. We were able to verify that asking for the first paragraph of a specific article at The Times caused Copilot to reproduce the first third of the article.

The suit is dismissive of attempts to justify this as a form of fair use. “Publicly, Defendants insist that their conduct is protected as ‘fair use’ because their unlicensed use of copyrighted content to train GenAI models serves a new ‘transformative’ purpose,” the suit notes. “But there is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it.”

Reputational and other damages

The hallucinations common to AI also come under fire in the suit for potentially damaging the Times’ reputation, and possibly harming human health as a side effect. “A GPT model completely fabricated that ‘The New York Times published an article on January 10, 2020, titled “Study Finds Possible Link between Orange Juice and Non-Hodgkin’s Lymphoma,”’” the suit alleges. “The Times never published such an article.”

Similarly, asking about a Times article on heart-healthy foods allegedly resulted in Copilot saying it contained a list of examples (which it didn’t). When Copilot was asked for the list, 80 percent of the foods on it weren’t even mentioned in the original article. In another case, recommendations were ascribed to the Wirecutter for products that hadn’t even been reviewed by its staff.

As with the Times material, it’s alleged that it’s possible to get Copilot to offer up large chunks of Wirecutter articles (The Wirecutter is owned by The New York Times). But the suit notes that these article excerpts have the affiliate links stripped out of them, keeping the Wirecutter from its primary source of revenue.

The suit targets various OpenAI companies for developing the software, as well as Microsoft—the latter both for offering OpenAI-powered services and for having developed the computing systems that enabled the copyrighted material to be ingested during training. Allegations include direct, contributory, and vicarious copyright infringement, as well as DMCA and trademark violations. Finally, it alleges “Common Law Unfair Competition By Misappropriation.”

The suit seeks nothing less than the erasure of any GPT instances that the parties have trained using material from the Times, as well as the destruction of the datasets that were used for the training. It also asks for a permanent injunction to prevent similar conduct in the future. The Times also wants money, lots and lots of money: “statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity.”
