AI


OpenAI accuses NYT of hacking ChatGPT to set up copyright suit


OpenAI is now boldly claiming that The New York Times “paid someone to hack OpenAI’s products” like ChatGPT to “set up” a lawsuit against the leading AI maker.

In a court filing Monday, OpenAI alleged that “100 examples in which some version of OpenAI’s GPT-4 model supposedly generated several paragraphs of Times content as outputs in response to user prompts” do not reflect how normal people use ChatGPT.

Instead, it allegedly took The Times “tens of thousands of attempts to generate” these supposedly “highly anomalous results” by “targeting and exploiting a bug” that OpenAI claims it is now “committed to addressing.”

According to OpenAI, this activity amounts to “contrived attacks” by a “hired gun”—who allegedly hacked OpenAI models until they hallucinated fake NYT content or regurgitated training data to replicate NYT articles. The Times allegedly paid for these “attacks” to gather evidence for its claims that OpenAI’s products imperil its journalism by regurgitating its reporting and stealing its audience.

“Contrary to the allegations in the complaint, however, ChatGPT is not in any way a substitute for a subscription to The New York Times,” OpenAI argued in a motion that seeks to dismiss the majority of The Times’ claims. “In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will.”

In the filing, OpenAI described The Times as enthusiastically reporting on its chatbot developments for years without raising any concerns about copyright infringement. OpenAI claimed that it disclosed in 2020 that The Times’ articles were used to train its AI models, but that The Times only objected after ChatGPT’s popularity exploded following its 2022 debut.

According to OpenAI, “It was only after this rapid adoption, along with reports of the value unlocked by these new technologies, that the Times claimed that OpenAI had ‘infringed its copyright[s]’ and reached out to demand ‘commercial terms.’ After months of discussions, the Times filed suit two days after Christmas, demanding ‘billions of dollars.'”

Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that “what OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI’s copying is much larger than the 100-plus examples set forth in the complaint.”

Crosby told Ars that OpenAI’s filing notably “doesn’t dispute—nor can they—that they copied millions of The Times’ works to build and power its commercial products without our permission.”

“Building new products is no excuse for violating copyright law, and that’s exactly what OpenAI has done on an unprecedented scale,” Crosby said.

OpenAI argued that the court should dismiss claims alleging direct copyright infringement, contributory infringement, Digital Millennium Copyright Act violations, and misappropriation, all of which it describes as “legally infirm.” Some fail because they are time-barred—seeking damages on training data for OpenAI’s older models—OpenAI claimed. Others allegedly fail because they misunderstand fair use or are preempted by federal laws.

If OpenAI’s motion is granted, the case would be substantially narrowed.

But if the motion is not granted and The Times ultimately wins—and it might—OpenAI may be forced to wipe ChatGPT and start over.

“OpenAI, which has been secretive and has deliberately concealed how its products operate, is now asserting it’s too late to bring a claim for infringement or hold them accountable. We disagree,” Crosby told Ars. “It’s noteworthy that OpenAI doesn’t dispute that it copied Times works without permission within the statute of limitations to train its more recent and current models.”

OpenAI did not immediately respond to Ars’ request for comment.



Wendy’s will experiment with dynamic surge pricing for food in 2025

Sir, this is Wendy’s new AI-powered menu —

Surge pricing test next year means your cheeseburger may get more expensive at 6 pm.

A view of a Wendy’s store on August 9, 2023, in Nanuet, New York.

American fast food chain Wendy’s is planning to test dynamic pricing and AI menu features in 2025, report Nation’s Restaurant News and Food & Wine. This means that prices for food items will automatically change throughout the day depending on demand, similar to “surge pricing” in rideshare apps like Uber and Lyft. The initiative was disclosed by Kirk Tanner, the CEO and president of Wendy’s, in a recent discussion with analysts.

According to Tanner, Wendy’s plans to invest approximately $20 million to install digital menu boards capable of displaying these real-time variable prices across all of its company-operated locations in the United States. An additional $10 million is earmarked over two years to enhance Wendy’s global system, which aims to improve order accuracy and upsell other menu items.

In conversation with Food & Wine, a spokesperson for Wendy’s confirmed the company’s commitment to this pricing strategy, describing it as part of a broader effort to grow its digital business. “Beginning as early as 2025, we will begin testing a variety of enhanced features on these digital menuboards like dynamic pricing, different offerings in certain parts of the day, AI-enabled menu changes and suggestive selling based on factors such as weather,” they said. “Dynamic pricing can allow Wendy’s to be competitive and flexible with pricing, motivate customers to visit and provide them with the food they love at a great value. We will test a number of features that we think will provide an enhanced customer and crew experience.”
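Wendy’s has not published how its pricing system will work, but as a purely hypothetical sketch, time-of-day surge pricing of the kind described could look something like this (the rush-hour windows and the 20 percent multiplier are invented for illustration):

```python
from datetime import time

# Hypothetical illustration only -- Wendy's has not disclosed its pricing logic.
def dynamic_price(base_price: float, order_time: time) -> float:
    """Apply an invented 20% surge multiplier during lunch and dinner rushes."""
    rush_hours = [(time(11), time(13)), (time(17), time(19))]
    surge = any(start <= order_time < end for start, end in rush_hours)
    return base_price * (1.2 if surge else 1.0)

print(dynamic_price(5.00, time(18, 0)))  # 6.0 -- the 6 pm cheeseburger costs more
print(dynamic_price(5.00, time(15, 0)))  # 5.0 -- off-peak price unchanged
```

A real system would presumably drive the multiplier from live demand signals rather than fixed windows, which is exactly why the digital menu boards are needed.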

A Wendy’s drive-through menu as seen in 2023 during the FreshAI rollout.

Wendy’s is not the first business to explore dynamic pricing—it’s a common practice in several industries, including hospitality, retail, airline travel, and the aforementioned rideshare apps. Its application in the fast-food sector is largely untested, and it’s uncertain how customers will react. However, a few other restaurants have tested the method and have experienced favorable results. “For us, it was all about consumer reaction,” Faizan Khan, a Dog Haus franchise owner, told Food & Wine. “The concern was if you’re going to raise prices, you’re going to sell less product, and it turns out that really wasn’t the case.”

The price-change plans are the latest in a series of moves designed to modernize Wendy’s business using technology—and increase profits. In 2023, Wendy’s began testing FreshAI, a system designed to take orders with a conversational AI bot, potentially replacing human workers in the process. In the same discussion, Tanner also mentioned “AI-enabled menu changes” and “suggestive selling” without elaboration, though the Wendy’s spokesperson remarked that suggestive selling may automatically emphasize certain items based dynamically on local weather conditions, such as trying to sell cold drinks on a hot day.

If Wendy’s goes through with its plan, it’s unclear how the dynamic pricing will affect food delivery apps such as Uber Eats or DoorDash, or even the Wendy’s mobile app. Presumably, third-party apps will need a way to link into Wendy’s dynamic price system (Wendy’s API, anyone?).

In other news, Wendy’s is also testing “Saucy Nuggets” in a small number of restaurants near the chain’s Ohio headquarters. Refreshingly, they have nothing to do with AI.



After a decade of stops and starts, Apple kills its electric car project

Project Titan —

Report claims Apple leadership worried profit margins simply wouldn’t be there.

Apple’s global headquarters in Cupertino, California.

After 10 years of development, multiple changes in direction and leadership, and a plethora of leaks, Apple has reportedly ended work on its electric car project. According to a report in Bloomberg, the company is shifting some of the staff to internal generative AI projects and planning layoffs for others.

Internally dubbed Project Titan, the long-in-development car would ideally have had a luxurious, limo-like interior, robust self-driving capabilities, and at least a $100,000 price tag. However, the project’s ambitions were pared back over time. For example, it was once planned to have Level 4 self-driving capabilities, but that was scaled back to Level 2+.

Delays had pushed the car (on which work initially began way back in 2014) to a target release date of 2028. Now it won’t be released at all.

The decision was “finalized by Apple’s most senior executives in recent weeks,” according to Bloomberg’s sources. Apple’s leadership worried that the car might never find the profit margins they previously hoped for. This development won’t surprise many who have been following closely, though. The project has been known to be troubled for a while, and Apple would have had to face high startup costs and a difficult regulatory environment even had it been able to get a product together.

The shift in focus was announced to staff by Apple executives Jeff Williams and Kevin Lynch. Many employees who were working on the self-driving feature of the car will be moved under AI chief John Giannandrea to work on various projects, including generative AI. However, the fates of others who worked on other aspects of the car, like automobile engineering and design, are less certain. The report says layoffs are likely but doesn’t specify how many or on what timeline.

For a long time, it was known that Apple was investing in two major expansions: one into the automobile space and one into augmented reality. The first step in the latter was rolled out in the form of the Vision Pro headset a few weeks ago. With the car project canceled, Apple’s known areas of planned future expansion include mixed reality, wearables, and generative AI.



Reddit cashes in on AI gold rush with $203M in LLM training license fees

Your posts are the product —

Two- to three-year deals with Google, others, come amid legal uncertainty over “fair use.”

“Reddit Gold” takes on a whole new meaning when AI training data is involved.

The last week saw word leak that Google had agreed to license Reddit’s massive corpus of billions of posts and comments to help train its large language models. Now, in a recent Securities and Exchange Commission filing, the popular online forum has revealed that it will bring in $203 million from that and other unspecified AI data licensing contracts over the next three years.

Reddit’s Form S-1—published by the SEC late Thursday ahead of the site’s planned stock IPO—says the company expects $66.4 million of that licensing revenue from LLM companies to come during the 2024 calendar year. Bloomberg previously reported the Google deal to be worth an estimated $60 million a year, suggesting that this three-year deal represents the vast majority of Reddit’s AI licensing revenue so far.
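A quick back-of-envelope check shows why: taking the reported figures at face value, the Google deal alone would account for roughly 90 percent of the expected 2024 total.

```python
# Figures from the S-1 and Bloomberg's reporting (in millions of dollars).
google_deal_annual = 60.0  # Bloomberg's estimate for the Google deal, per year
licensing_2024 = 66.4      # Reddit's expected 2024 AI licensing revenue

share = google_deal_annual / licensing_2024
print(f"{share:.0%}")  # 90% -- the Google deal dominates the category
```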

Google and other AI companies that license Reddit’s data will receive “continuous access to [Reddit’s] data API as well as quarterly transfers of Reddit data over the term of the arrangement,” according to the filing. That constant, real-time access is particularly valuable, the site writes in the filing, because “Reddit data constantly grows and regenerates as users come and interact with their communities and each other.”

“Why pay for the cow…?”

While Reddit sees data licensing to AI firms as an important part of its financial future, its filing also notes that free use of its data has already been “a foundational part of how many of the leading large language models have been trained.” The filing seems almost bitter in noting that “some companies have constructed very large commercial language models using Reddit data without entering into a license agreement with us.”

That acknowledgment highlights the still-murky legal landscape over AI companies’ penchant for scraping huge swathes of the public web for training purposes, a practice those companies defend as fair use. And Reddit seems well aware that AI models may continue to hoover up its posts and comments for free, even as it tries to sell that data to others.

“Some companies may decline to license Reddit data and use such data without license given its open nature, even if in violation of the legal terms governing our services,” the company writes. “While we plan to vigorously enforce against such entities, such enforcement activities could take years to resolve, result in substantial expense, and divert management’s attention and other resources, and we may not ultimately be successful.”

Yet the mere existence of AI data licensing agreements like Reddit’s may influence how legal battles over this kind of data scraping play out. As Ars’ Timothy Lee and James Grimmelmann noted in a recent legal analysis, the establishment of a settled licensing market can have a huge impact on whether courts consider a novel use of digitized data to be “fair use” under copyright law.

“The more [AI data licensing] deals like this are signed in the coming months, the easier it will be for the plaintiffs to argue that the ‘effect on the market’ prong of fair use analysis should take this licensing market into account,” Lee and Grimmelmann wrote.

And while Reddit sees LLMs as a new revenue opportunity, the site also sees their popularity as a potential threat. The S-1 filing notes that “some users are also turning to LLMs such as ChatGPT, Gemini, and Anthropic” for seeking information, putting them in the same category of Reddit competition as “Google, Amazon, YouTube, Wikipedia, X, and other news sites.”

Reddit filed for its IPO in late 2021, and reports suggest it is aiming to officially hit the stock market next month. The company will offer users and moderators with sufficient karma and/or activity on the site the opportunity to participate in that IPO through a directed share program.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.



Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora

The Synthetic Screen —

Perry: Mind-blowing AI video-generation tools “will touch every corner of our industry.”

Tyler Perry in 2022.

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI’s recently announced AI video generator Sora can do.

“I have been watching AI very closely,” Perry said in the interview. “I was in the middle of, and have been planning for the last four years… an $800 million expansion at the studio, which would’ve increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I’m seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me.”

OpenAI, the company behind ChatGPT, revealed a preview of Sora’s capabilities last week. Sora is a text-to-video synthesis model, and it uses a neural network—previously trained on video examples—that can take written descriptions of a scene and turn them into high-definition video clips up to 60 seconds long. Sora caused shock in the tech world because it appeared to dramatically surpass other AI video generators in capability. It seems that a similar shock also rippled into adjacent professional fields. “Being told that it can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing,” Perry said in the interview.

Tyler Perry Studios, which the actor and producer acquired in 2015, is a 330-acre lot located in Atlanta and is one of the largest film production facilities in the United States. Perry, who is perhaps best known for his series of Madea films, says that technology like Sora worries him because it could make the need for building sets or traveling to locations obsolete. He cites examples of virtually shooting in the snow of Colorado or on the Moon using nothing but a text prompt: “This AI can generate it like nothing.” The technology may represent a radical reduction in the costs necessary to create a film, and that will likely put entertainment industry jobs in jeopardy.

“It makes me worry so much about all of the people in the business,” he told The Hollywood Reporter. “Because as I was looking at it, I immediately started thinking of everyone in the industry who would be affected by this, including actors and grip and electric and transportation and sound and editors, and looking at this, I’m thinking this will touch every corner of our industry.”

You can read the full interview at The Hollywood Reporter, which did an excellent job of covering Perry’s thoughts on a technology that may end up fundamentally disrupting Hollywood. To his mind, AI tech poses an existential risk to the entertainment industry that it can’t ignore: “There’s got to be some sort of regulations in order to protect us. If not, I just don’t see how we survive.”

Perry also looks beyond Hollywood, saying that it’s not just filmmaking that needs to be on alert, and calling for government action to help retain human employment in the age of AI. “If you look at it across the world, how it’s changing so quickly, I’m hoping that there’s a whole government approach to help everyone be able to sustain.”



Stability announces Stable Diffusion 3, a next-gen AI image generator

Pics and it didn’t happen —

SD3 may bring DALL-E-like prompt fidelity to an open-weights image-synthesis model.

Stable Diffusion 3 generation with the prompt: studio photograph closeup of a chameleon over a black background.

On Thursday, Stability AI announced Stable Diffusion 3, an open-weights next-generation image-synthesis model. It reportedly improves on its predecessors by generating detailed, multi-subject images with higher quality and more accurate text rendering. The brief announcement was not accompanied by a public demo, but Stability is opening up a waitlist today for those who would like to try it.

Stability says that its Stable Diffusion 3 family of models (which take text descriptions called “prompts” and turn them into matching images) ranges in size from 800 million to 8 billion parameters. That range allows different versions of the model to run locally on a variety of devices—from smartphones to servers. Parameter count roughly corresponds to model capability in terms of how much detail it can generate, but larger models also require more VRAM on GPU accelerators to run.
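As a rough illustration of that relationship, the memory needed just to hold a model’s weights can be estimated from the parameter count. This back-of-envelope math assumes 16-bit weights and ignores activations and framework overhead, so real VRAM use runs higher:

```python
# Back-of-envelope estimate: bytes for the weights alone, assuming 16-bit
# (2-byte) precision. Actual VRAM needs are larger once activations count.
def weights_gb(num_params: float, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

print(weights_gb(800e6))  # 1.6  -- SD3's smallest variant, phone-friendly
print(weights_gb(8e9))    # 16.0 -- the largest, needing a hefty GPU
```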

Since 2022, we’ve seen Stability launch a progression of AI image-generation models: Stable Diffusion 1.4, 1.5, 2.0, 2.1, XL, XL Turbo, and now 3. Stability has made a name for itself by providing a more open alternative to proprietary image-synthesis models like OpenAI’s DALL-E 3, though not without controversy over the use of copyrighted training data, bias, and the potential for abuse. (This has led to lawsuits that remain unresolved.) Stable Diffusion models have been open-weights and source-available, which means the models can be run locally and fine-tuned to change their outputs.

  • Stable Diffusion 3 generation with the prompt: Epic anime artwork of a wizard atop a mountain at night casting a cosmic spell into the dark sky that says “Stable Diffusion 3” made out of colorful energy.

  • An AI-generated image of a grandma wearing a “Go big or go home sweatshirt” generated by Stable Diffusion 3.

  • Stable Diffusion 3 generation with the prompt: Three transparent glass bottles on a wooden table. The one on the left has red liquid and the number 1. The one in the middle has blue liquid and the number 2. The one on the right has green liquid and the number 3.

  • An AI-generated image created by Stable Diffusion 3.

  • Stable Diffusion 3 generation with the prompt: A horse balancing on top of a colorful ball in a field with green grass and a mountain in the background.

  • Stable Diffusion 3 generation with the prompt: Moody still life of assorted pumpkins.

  • Stable Diffusion 3 generation with the prompt: a painting of an astronaut riding a pig wearing a tutu holding a pink umbrella, on the ground next to the pig is a robin bird wearing a top hat, in the corner are the words “stable diffusion.”

  • Stable Diffusion 3 generation with the prompt: Resting on the kitchen table is an embroidered cloth with the text ‘good night’ and an embroidered baby tiger. Next to the cloth there is a lit candle. The lighting is dim and dramatic.

  • Stable Diffusion 3 generation with the prompt: Photo of an 90’s desktop computer on a work desk, on the computer screen it says “welcome”. On the wall in the background we see beautiful graffiti with the text “SD3” very large on the wall.

As far as tech improvements are concerned, Stability CEO Emad Mostaque wrote on X, “This uses a new type of diffusion transformer (similar to Sora) combined with flow matching and other improvements. This takes advantage of transformer improvements & can not only scale further but accept multimodal inputs.”

Like Mostaque said, the Stable Diffusion 3 family uses diffusion transformer architecture, which is a new way of creating images with AI that swaps out the usual image-building blocks (such as U-Net architecture) for a system that works on small pieces of the picture. The method was inspired by transformers, which are good at handling patterns and sequences. This approach not only scales up efficiently but also reportedly produces higher-quality images.
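To make the “small pieces” idea concrete, here is a minimal sketch of how an image can be chopped into flat patch tokens, the kind of tokenization a diffusion transformer operates on (the 8×8 image and 4×4 patches are toy sizes for illustration):

```python
import numpy as np

# Toy 8x8 single-channel "image"; a transformer treats each patch as a token.
image = np.arange(64, dtype=np.float32).reshape(8, 8)
p = 4  # patch size

patches = (
    image.reshape(8 // p, p, 8 // p, p)  # split rows and cols into blocks
         .transpose(0, 2, 1, 3)          # group the two block axes together
         .reshape(-1, p * p)             # one flat vector per patch
)
print(patches.shape)  # (4, 16): four patch tokens, 16 pixels each
```

The transformer then attends across those tokens the same way a language model attends across words, which is what lets the architecture scale.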

Stable Diffusion 3 also utilizes “flow matching,” which is a technique for creating AI models that can generate images by learning how to transition from random noise to a structured image smoothly. It does this without needing to simulate every step of the process, instead focusing on the overall direction or flow that the image creation should follow.
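As a toy illustration of that idea (the idea only; real models learn a velocity network from data), the straight-line path between a noise sample and a data sample has a constant velocity, and stepping along that velocity carries the noise exactly onto the target:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=2)        # a "noise" sample
x1 = np.array([3.0, -1.0])     # stand-in for a data sample (e.g., image pixels)

# For the straight-line path x_t = (1 - t) * x0 + t * x1, the velocity
# dx/dt is the constant x1 - x0. A flow matching model is trained to
# predict this velocity; here we use the exact value to show the transport.
velocity = x1 - x0
steps = 100
x = x0.copy()
for _ in range(steps):
    x = x + velocity / steps   # simple Euler integration from t=0 to t=1

print(np.allclose(x, x1))  # True: the flow carries noise onto the data point
```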

A comparison of outputs between OpenAI’s DALL-E 3 and Stable Diffusion 3 with the prompt: “Night photo of a sports car with the text ‘SD3’ on the side, the car is on a race track at high speed, a huge road sign with the text ‘faster.’”

We do not have access to Stable Diffusion 3 (SD3), but from samples we found posted on Stability’s website and associated social media accounts, the generations appear roughly comparable to other state-of-the-art image-synthesis models at the moment, including the aforementioned DALL-E 3, Adobe Firefly, Imagine with Meta AI, Midjourney, and Google Imagen.

SD3 appears to handle text generation very well in the examples provided by others, which are potentially cherry-picked. Text generation was a particular weakness of earlier image-synthesis models, so an improvement to that capability in a free model is a big deal. Also, prompt fidelity (how closely it follows descriptions in prompts) seems to be similar to DALL-E 3, but we haven’t tested that ourselves yet.

While Stable Diffusion 3 isn’t widely available, Stability says that once testing is complete, its weights will be free to download and run locally. “This preview phase, as with previous models,” Stability writes, “is crucial for gathering insights to improve its performance and safety ahead of an open release.”

Stability has been experimenting with a variety of image-synthesis architectures recently. Aside from SDXL and SDXL Turbo, just last week, the company announced Stable Cascade, which uses a three-stage process for text-to-image synthesis.

Listing image by Emad Mostaque (Stability AI)



Google launches “Gemini Business” AI, adds $20 to the $6 Workspace bill

$6 for apps like Gmail and Docs, and $20 for an AI bot? —

Google’s AI add-on costs more than three times the base Workspace bill.


Google went ahead with plans to launch Gemini for Workspace today. The big news is pricing: the Workspace pricing page has been updated, with every plan offering a “Gemini add-on.” Google’s old AI-for-business plan, “Duet AI for Google Workspace,” is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the “Starter” package, and the AI “Add-on,” as Google is calling it, is an extra $20 monthly cost per user (all of these prices require an annual commitment). That is a massive price increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you “Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides.” It also includes the “1.0 Ultra” model for the Gemini chatbot—there’s a full feature list here. This $20 plan is subject to a usage limit for Gemini AI features of “1,000 times per month.”
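For a sense of scale, here’s the arithmetic for a hypothetical 50-person company on the Starter plan with the add-on (the per-user prices are from Google’s announcement; the head count is invented):

```python
# Workspace Starter ($6) plus the Gemini add-on ($20), per user per month,
# billed with an annual commitment. The 50-user company is hypothetical.
users = 50
monthly_per_user = 6 + 20
annual_total = monthly_per_user * users * 12
print(annual_total)  # 15600 -- $15,600/year, most of it for the AI add-on
```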

The new Workspace pricing page, with a “Gemini Add-On” for every plan. (Credit: Google)

Gemini for Google Workspace represents a total rebrand of the AI business product and brings some consistency to Google’s hard-to-follow, constantly changing AI branding. Duet AI never really launched to the general public. The product, announced in August, only ever had a “Try” link that led to a survey; after a business filled it out, Google would presumably follow up and allow it to pay for Duet AI. Gemini Business now has a checkout page, and any Workspace business customer can buy the product today with just a few clicks.

Google’s second plan is “Gemini Enterprise,” which doesn’t come with any usage limits, but it’s also only available through a “contact us” link and not a normal checkout procedure. Enterprise is $30 per user per month, and it “includes additional capabilities for AI-powered meetings, where Gemini can translate closed captions in more than 100 language pairs, and soon even take meeting notes.”



Google goes “open AI” with Gemma, a free, open-weights chatbot family

Free hallucinations for all —

Gemma chatbots can run locally, and they reportedly outperform Meta’s Llama 2.

The Google Gemma logo

On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It’s Google’s first significant open large language model (LLM) release since OpenAI’s ChatGPT started a frenzy for AI chatbots in 2022.

Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters are values in a neural network that determine AI model behavior, and weights are a subset of these parameters stored in a file.

Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google’s most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means “precious stone.”

While Gemma is Google’s first major open LLM since the launch of ChatGPT (it has released smaller research models such as FLAN-T5 in the past), it’s not Google’s first contribution to open AI research. The company cites the development of the Transformer architecture, as well as releases like TensorFlow, BERT, T5, and JAX as key contributions, and it would not be controversial to say that those have been important to the field.

A chart of Gemma performance provided by Google. Google says that Gemma outperforms Meta’s Llama 2 on several benchmarks.

Owing to lesser capability and high confabulation rates, smaller open-weights LLMs have been more like tech demos until recently, as some larger ones have begun to match GPT-3.5 performance levels. Still, experts see source-available and open-weights AI models as essential steps toward ensuring transparency and privacy in chatbots. Google Gemma is not “open source,” however, since that term usually refers to a specific type of software license with few restrictions attached.

In reality, Gemma feels like a conspicuous play to match Meta, which has made a big deal out of releasing open-weights models (such as LLaMA and Llama 2) since February of last year. That approach stands in contrast to AI models like OpenAI’s GPT-4 Turbo, which is only available through the ChatGPT application and a cloud API and cannot be run locally. A Reuters report on Gemma focuses on the Meta angle and surmises that Google hopes to attract more developers to its Vertex AI cloud platform.

We have not used Gemma yet; however, Google claims the 7B model outperforms Meta’s Llama 2 7B and 13B models on several benchmarks for math, Python code generation, general knowledge, and commonsense reasoning tasks. It’s available today through Kaggle, a machine-learning community platform, and Hugging Face.

In other news, Google paired the Gemma release with a “Responsible Generative AI Toolkit,” which Google hopes will offer guidance and tools for developing what the company calls “safe and responsible” AI applications.



Google plans “Gemini Business” AI for Workspace users

We’ve got to pay for all those Nvidia cards somehow —

Google’s first swing at this idea, “Duet AI,” was an extra $30 per user per month.

The Google Gemini logo. (Credit: Google)

One of Google’s most lucrative businesses consists of packaging its free consumer apps with a few custom features and extra security and then selling them to companies. That’s usually called “Google Workspace,” and today it offers email, calendar, docs, storage, and video chat. Soon, it sounds like Google is gearing up to offer an AI chatbot for businesses. Google’s latest chatbot is called “Gemini” (it used to be “Bard”), and the latest early patch notes spotted by Dylan Roussei of 9to5Google and TestingCatalog.eth show descriptions for new “Gemini Business” and “Gemini Enterprise” products.

The patch notes say that Workspace customers will get “enterprise-grade data protections” and Gemini settings in the Google Workspace Admin console and that Workspace users can “use Gemini confidently at work” while “trusting that your conversations aren’t used to train Gemini models.”

These “early patch notes” for Bard/Gemini have been a thing for a while now. Apparently, some people have ways of making the site spit out early patch notes, and in this case, they were independently confirmed by two different people. I’m not sure the date (scheduled for February 21) is trustworthy, though.

Normally, you would expect a Google app to be included in the “Business Standard” version of Workspace, which is $12 per user per month, but it sounds like Gemini won’t be included. Google describes the products as “new Gemini Business and Gemini Enterprise plans” [emphasis ours] and implores existing paying Google Workspace users to “upgrade today to Gemini Business or Gemini Enterprise.” Roussel says the “upgrade today” link goes to the Duet AI Workspace page, Google’s first attempt at “AI for business,” which hasn’t been updated with any new plans just yet.

It’s unclear how much of the Duet AI business plan is surviving the Gemini rollout. Duet was announced in August 2023 as a few “help me write” buttons in Gmail, Docs, and other Workspace apps, which would all open chatbots that can control the various apps. Duet AI was supposed to have an “initial offering” price of an additional $30 per user per month, but it has been six months now, and Duet AI still isn’t generally available to businesses. The “try Duet AI” link goes to a “request a trial” contact form. Six months is an eternity in Google’s rapidly evolving AI plans; it’s a good bet Duet will be replaced by all this Gemini stuff. Will it still be an extra $30, or did everyone scoff at that price?

If this $30-extra-for-AI plan ever ships, that would mean a typical AI-infused Workspace account would total $42 per user per month ($12 for Business Standard plus the $30 AI add-on). That sounds like a lot, but generative AI products currently take a huge amount of processing, which means they cost a lot to run. Right now, everyone is in land-grab mode, trying to get as many users as possible, but generally, the big players are all losing money. Nvidia’s market-leading AI cards can cost around $10,000 to $40,000 apiece, and that’s not even counting the ongoing electricity costs.



Will Smith parodies viral AI-generated video by actually eating spaghetti

Mangia, mangia —

Actor pokes fun at 2023 AI video by eating spaghetti messily and claiming it’s AI-generated.

The real Will Smith eating spaghetti, parodying an AI-generated video from 2023.


On Monday, Will Smith posted a video on his official Instagram feed that parodied an AI-generated video of the actor eating spaghetti that went viral last year. With the recent announcement of OpenAI’s Sora video synthesis model, many people have noted the dramatic jump in AI-video quality over the past year compared to the infamous spaghetti video. Smith’s new video plays on that comparison by showing the actual actor eating spaghetti in a comical fashion and claiming that it is AI-generated.

Captioned “This is getting out of hand!”, the Instagram video uses a split-screen layout to show the original AI-generated spaghetti video, created by a Reddit user named “chaindrop” in March 2023, on top, labeled with the subtitle “AI Video 1 year ago.” Below that, in a box titled “AI Video Now,” the real Smith shows 11 video segments of himself actually eating spaghetti: slurping it up while shaking his head, pouring it into his mouth with his fingers, and even nibbling on a friend’s hair. Lil Jon’s 2006 song “Snap Yo Fingers” plays in the background.

In the Instagram comments section, some people expressed confusion about the new (non-AI) video, saying, “I’m still in doubt if second video was also made by AI or not.” In a reply, someone else wrote, “Boomers are gonna loose [sic] this one. Second one is clearly him making a joke but I wouldn’t doubt it in a couple months time it will get like that.”

We have not yet seen a model as capable as Sora attempt to create a new Will-Smith-eating-spaghetti AI video, but the result would likely be far better than what we saw last year, even if it contained obvious glitches. Given how things are progressing, we wouldn’t be surprised if, by 2025, video synthesis AI models could replicate the parody video created by Smith himself.

It’s worth noting for history’s sake that despite the comparison, the video of Will Smith eating spaghetti did not represent the state of the art in text-to-video synthesis at the time of its creation in March 2023 (that title would likely apply to Runway’s Gen-2, which was then in closed testing). However, the spaghetti video, which used the ModelScope AI model, was reasonably advanced for open-weights models at the time. More capable video synthesis models had already been released by then, but due to the humorous cultural reference, it’s arguably more fun to compare today’s AI video synthesis to Will Smith grotesquely eating spaghetti than to teddy bears washing dishes.



Reddit sells training data to unnamed AI company ahead of IPO

Everything has a price —

If you’ve posted on Reddit, you’re likely feeding the future of AI.


On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site’s content, according to people familiar with the matter. The move comes as the social media platform nears its initial public offering (IPO), which could happen as soon as next month.

Reddit first revealed the deal, reportedly worth $60 million a year, to potential investors in its anticipated IPO earlier in 2024, Bloomberg said. The Bloomberg source speculates that the contract could serve as a model for future agreements with other AI companies.

After an era in which AI companies harvested training data without expressly seeking rightsholders’ permission, some tech firms have recently begun striking deals to license content for training AI models like GPT-4 (which powers the paid version of ChatGPT). In December, for example, OpenAI signed an agreement with German publisher Axel Springer (publisher of Politico and Business Insider) for access to its articles. OpenAI has previously struck deals with other organizations, including the Associated Press, and is reportedly in licensing talks with CNN, Fox, and Time, among others.

In April 2023, Reddit co-founder and CEO Steve Huffman told The New York Times that the company planned to charge AI firms for access to its almost two decades’ worth of human-generated content.

If the reported $60 million-per-year deal goes through, anything you’ve ever posted on Reddit may end up training the next generation of AI models that create text, still pictures, and video. Even without the deal, researchers have previously found that Reddit has been a key source of training data for large language models and AI image generators.

While we don’t know whether OpenAI is the company that signed the deal with Reddit, Bloomberg speculates that Reddit’s ability to tap into AI hype for additional revenue may boost the value of its IPO, which could reach $5 billion. Despite drama last year, Bloomberg states that Reddit pulled in more than $800 million in revenue in 2023, up about 20 percent over its 2022 numbers.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.



New app always points to the supermassive black hole at the center of our galaxy

the final frontier —

iPhone compass app made with AI assistance locates the heart of the Milky Way.

A photo of Galactic Compass running on an iPhone.


Matt Webb / Getty Images

On Thursday, designer Matt Webb unveiled a new iPhone app called Galactic Compass, which always points to the center of the Milky Way galaxy—no matter where Earth is positioned on our journey through the stars. The app is free and available now on the App Store.

While using Galactic Compass, you set your iPhone on a level surface, and a big green arrow on the screen points the way to the Galactic Center, the rotational core of the spiral galaxy we all live in. At that center lies a supermassive black hole known as Sagittarius A*, an object from which no matter or light can escape. (So, in a way, the app is telling us what we should avoid.)

Truthfully, the location of the galactic core at any given time isn’t exactly useful, practical knowledge—at least for people who aren’t James Tiberius Kirk in Star Trek V. But it may inspire a sense of awe about our place in the cosmos.

Screenshots of Galactic Compass in action, captured by Ars Technica in a secret location.


Benj Edwards / Getty Images

“It is astoundingly grounding to always have a feeling of the direction of the center of the galaxy,” Webb told Ars Technica. “Your perspective flips. To begin with, it feels arbitrary. The middle of the Milky Way seems to fly all over the sky, as the Earth turns and moves in its orbit.”

Webb’s journey to creating Galactic Compass began a decade ago as an offshoot of his love for casual astronomy. “About 10 years ago, I taught myself how to point to the center of the galaxy,” Webb said. “I lived in an apartment where I had a great view of the stars, so I was using augmented reality apps to identify them, and I gradually learned my way around the sky.”

While Webb initially used an astronomy app to help locate the Galactic Center, he eventually taught himself how to always find it. He described visualizing himself on the surface of the Earth as it spins and tilts, understanding the ecliptic as a line across the sky and recognizing the center of the galaxy as an invisible point moving predictably through the constellation Sagittarius, which lies on the ecliptic line. By visualizing Earth’s orbit over the year and determining his orientation in space, he was able to point in the right direction, refining his ability through daily practice and comparison with an augmented reality app.

With a little help from AI

Our galaxy, the Milky Way, is thought to look similar to Andromeda (seen here) if you could see it from a distance. But since we're inside the galaxy, all we can see is the edge of the galactic plane.


Getty Images

In 2021, Webb imagined turning his ability into an app that would help take everyone on the same journey, showing a compass that points toward the galactic center instead of Earth’s magnetic north. “But I can’t write apps,” he said. “I’m a decent enough engineer, and an amateur designer, but I’ve never figured out native apps.”

That’s where ChatGPT comes in, transforming Webb’s vision into reality. With the AI assistant as his coding partner, Webb progressed step by step, crafting a simple app interface and integrating complex calculations for locating the galactic center (which involves calculating the user’s azimuth and altitude).
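Webb hasn’t published the app’s math, but the azimuth-and-altitude calculation he describes is a standard equatorial-to-horizontal coordinate conversion. Here is a rough sketch in Python, assuming the published J2000 coordinates of Sagittarius A* and a common sidereal-time approximation; the function names are ours, not Galactic Compass’s:

```python
from datetime import datetime, timezone
from math import radians, degrees, sin, cos, tan, asin, atan2

# J2000.0 equatorial coordinates of Sagittarius A* (the Galactic Center)
SGR_A_RA = 266.4168    # right ascension, degrees
SGR_A_DEC = -29.0078   # declination, degrees

def julian_date(dt: datetime) -> float:
    """Julian date for an aware UTC datetime (Unix epoch is JD 2440587.5)."""
    return 2440587.5 + dt.timestamp() / 86400.0

def galactic_center_alt_az(lat_deg: float, lon_deg: float, when: datetime):
    """Altitude and azimuth (degrees) of the Galactic Center for an
    observer at lat/lon (east longitude positive) at UTC time `when`."""
    d = julian_date(when) - 2451545.0                 # days since J2000.0
    gmst = (280.46061837 + 360.98564736629 * d) % 360.0
    lst = (gmst + lon_deg) % 360.0                    # local sidereal time
    ha = radians(lst - SGR_A_RA)                      # hour angle
    lat, dec = radians(lat_deg), radians(SGR_A_DEC)

    alt = asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
    # atan2 form gives azimuth from south; shift 180° to measure from north
    az = atan2(sin(ha), cos(ha) * sin(lat) - tan(dec) * cos(lat))
    return degrees(alt), (degrees(az) + 180.0) % 360.0
```

A quick sanity check: for an observer at the South Pole, the Galactic Center sits at a constant altitude of about 29°, the negation of its declination, regardless of the time of day.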

Still, coding with ChatGPT has its limitations. “ChatGPT is super smart, but it’s not embodied like a human, so it falls down on doing the 3D calculations,” Webb said. “I had to learn a lot about quaternions, which are a technique for combining 3D rotations, and even then, it’s not perfect. The app needs to be held flat to work simply because my math breaks down when the phone is upright! I’ll fix this in future versions.”
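For context, the quaternions Webb mentions encode a rotation as four numbers and combine rotations with the Hamilton product. This illustrative sketch (not code from Galactic Compass) shows how two rotations compose and how a quaternion rotates a vector:

```python
from math import cos, sin, radians

def quat_from_axis_angle(axis, angle_deg):
    """Unit quaternion (w, x, y, z) for a rotation about a unit-length axis."""
    half = radians(angle_deg) / 2.0
    s = sin(half)
    return (cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product; as rotations, applies b first, then a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def quat_conj(q):
    """Conjugate; for a unit quaternion this is the inverse rotation."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q via q * (0, v) * q⁻¹."""
    p = (0.0, v[0], v[1], v[2])
    _, x, y, z = quat_mul(quat_mul(q, p), quat_conj(q))
    return (x, y, z)
```

Composing two 90° rotations about the z-axis yields a 180° rotation, and rotating the x-axis vector by 90° about z lands it on the y-axis, with no gimbal lock along the way; avoiding that pathology of Euler angles is the usual reason to reach for quaternions.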

Webb is no stranger to ChatGPT-powered creations that are more fun than practical. Last month, he launched a Kickstarter for an AI-rhyming poetry clock called the Poem/1. With his design studio, Acts Not Facts, Webb says he uses “whimsy and play to discover the possibilities in new technology.”

Whimsical or not, Webb insists that Galactic Compass can help us ponder our place in the vast universe, and he’s proud that it recently peaked at #87 in the Travel chart for the US App Store. In this case, though, it’s spaceship Earth that is traveling the galaxy while every living human comes along for the ride.

“Once you can follow it, you start to see the galactic center as the true fixed point, and we’re the ones whizzing and spinning. There it remains, the supermassive black hole at the center of our galaxy, Sagittarius A*, steady as a rock, eternal. We go about our days; it’s always there.”
